Bug 186000 - zdb(8): zdb -S poolname no longer works
Summary: zdb(8): zdb -S poolname no longer works
Status: Closed Unable to Reproduce
Alias: None
Product: Base System
Classification: Unclassified
Component: bin
Version: unspecified
Hardware: Any Any
Importance: Normal Affects Only Me
Assignee: freebsd-bugs mailing list
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2014-01-22 15:40 UTC by Adam Stylinski
Modified: 2018-05-21 11:31 UTC (History)
1 user

See Also:


Attachments

Description Adam Stylinski 2014-01-22 15:40:00 UTC
The simulated histogram for the DDT doesn't seem to work:

[adam@nasbox ~]$ sudo zdb -S share
zdb: can't open 'share': No such file or directory

[adam@nasbox ~]$ zpool list
NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
share  21.8T  10.3T  11.5T    47%  1.00x  ONLINE  -

Not terribly interested in dedup, but I thought this should be reported.

How-To-Repeat: sudo zdb -S poolname
Comment 1 Gert Doering 2015-11-27 10:50:47 UTC
I see this as well, and find it highly confusing.

This is on FreeBSD 10.1-RELEASE-p24 / amd64.

"zpool status" shows two pools.  "zdb -s" and "zdb -S" work perfectly well for one of them, but the other one (names copy-pasted) is reported as "No such file or directory":

$ zpool list
NAME          SIZE  ALLOC   FREE   FRAG  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
MOEBIUS4       60G  2.26G  57.7G     1%         -     3%  1.00x  ONLINE  -
MOEBIUS4_FC  1016G   217G   799G     6%      256K    21%  1.00x  ONLINE  -

$ zdb -s MOEBIUS4   
                            capacity   operations   bandwidth  ---- errors ----
description                used avail  read write  read write  read write cksum
...
$ zdb -s MOEBIUS4_FC
zdb: can't open 'MOEBIUS4_FC': No such file or directory

(same thing for "-S")

MOEBIUS4 is the root pool, in case it makes a difference.
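A useful first diagnostic in a situation like this is to compare the vdev paths recorded in the pool's cached configuration (which is what zdb opens) against what the live pool reports. A sketch, using the pool name from above as an example:

```shell
# Print the cached pool configuration; the "path:" entries show which
# device nodes zdb will try to open, even if the device has since moved.
zdb -C MOEBIUS4_FC | grep -E 'path|guid'

# Compare against the devices the imported pool is actually using.
zpool status MOEBIUS4_FC
```

A mismatch between the two is a strong hint that the stored vdev path is stale.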
Comment 2 Gert Doering 2015-11-27 11:07:13 UTC
OK, it turns out I could solve this on my own, so maybe the result can help either make the tools more useful, or just fix the issue.

It turns out that when this pool was created, it was put on /dev/multipath/fc_mpath (because there are two FC paths to the same LUN on a SAN).  I made a mistake when creating the multipath device: I only ran "gmultipath create", not "gmultipath label".  The multipath geom existed only in RAM, so everything worked until the reboot, after which the device was gone and the pool was imported from /dev/da2.

And here is the catch: "zdb -s" still tried to access /dev/multipath/fc_mpath, which, of course, no longer existed.

After fixing the labeling and re-importing the pool it works now.
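For reference, a sketch of the difference between the two gmultipath modes (device and pool names here are examples, not the reporter's exact setup):

```shell
# Manual mode: the multipath geom is created only in kernel memory,
# so /dev/multipath/fc_mpath disappears after a reboot.
gmultipath create fc_mpath /dev/da2 /dev/da3

# Automatic mode: writes on-disk metadata to the providers, so the
# multipath device is re-created automatically at every boot.
gmultipath label fc_mpath /dev/da2 /dev/da3

# A pool created on the labeled device keeps a stable path across reboots.
zpool create share /dev/multipath/fc_mpath
```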

But it's still a bug - at the very least, the error should point out *what* is "No such file or directory", or preferably zdb should just use the device that "zpool status" knows is in use...
Comment 3 Eitan Adler freebsd_committer freebsd_triage 2018-05-20 23:50:55 UTC
For bugs matching the following conditions:
- Status == In Progress
- Assignee == "bugs@FreeBSD.org"
- Last Modified Year <= 2017

Do
- Set Status to "Open"
Comment 4 Gert Doering 2018-05-21 07:48:50 UTC
What do you mean by "Unable to Reproduce"?  This should be fairly easy to do - create a pool using a device label, export the pool, change the device label, import it again.  See my description on how I ended up there.
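The recipe above can be sketched roughly as follows (a hypothetical reproduction, with example device and pool names; historically the last step failed with "No such file or directory"):

```shell
# Create a pool on a GPT-labeled partition, so the pool records
# the /dev/gpt/... path in its configuration.
gpart add -t freebsd-zfs -l zfs2 da1
zpool create testpool /dev/gpt/zfs2

# Export the pool and change the label out from under it.
zpool export testpool
gpart modify -i 1 -l other da1

# Re-import: the pool now comes in via a different device path
# (e.g. /dev/da1p1) than the one originally stored.
zpool import testpool

# Then check whether zdb still tries the stale path.
zdb -S testpool
```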
Comment 5 Andriy Gapon freebsd_committer 2018-05-21 09:00:03 UTC
(In reply to Gert Doering from comment #4)
I cannot reproduce the issue in comment #0 which makes no mention of multipath.
The issue that you report is not specific to the '-s' option; it would affect all zdb usages for that pool.
If you are able to reproduce your issue on CURRENT or 11.2 BETA, please file a new bug.
Comment 6 Gert Doering 2018-05-21 09:12:12 UTC
Multipath is just how I ended up with a pool that was created under a different device name than what was found on "zpool import" - I added this as an explanation of how I got into the same situation as the reporter in comment #0, who did not provide background.

If, as you say, all uses of zdb (not just -s/-S) get confused by a name change of the underlying device(s) - shouldn't that be *fixed*?

Not running 11.2-BETA or CURRENT anywhere, but I'll see if I can set up a VM to reproduce.
Comment 7 Gert Doering 2018-05-21 09:37:03 UTC
OK, tried with a 12.0-CURRENT snapshot as of last week, and could not(!) reproduce this anymore.

What I did: I created a ZFS pool on a separate hard disk and moved it around - by removing a different hard disk (so da2 became da1), and by adding and removing GPT labels (so /dev/gpt/zfs2 became /dev/da1p1).

In all cases, zdb -s and zdb -S worked perfectly well - so it seems I either hit some really strange corner case, or it has been fixed in the meantime.

thanks a lot.
Comment 8 Andriy Gapon freebsd_committer 2018-05-21 11:31:31 UTC
(In reply to Gert Doering from comment #7)
ZFS has received a number of improvements in updating stored vdev paths over the years, so it must be that one of them helped with this problem too.
Thank you for testing!