If the boot zpool fails to mount early on, this is painful to diagnose: the user mostly gets "Error 6" and not much else to work with. An off-by-default loader tunable that reports device tasting and the zpools it locates would help a lot. There's no particular reason for it to be off by default, other than that on large multi-device pools printing everything could slow down boot considerably. The specific problem encountered was a pool where traces of a previous zpool remained on /dev/da0 and /dev/da1, even though the current pool lived on /dev/da0p3 and /dev/da1p3, both called zroot, and the 13.2-RELEASE loader could not decide which one to import. Enumerating the label, pool guid, hostid, and hostname for each device found would be very useful, e.g.:

  zfs: /dev/da1 label2 pool_guid: 1234 hostid: 6788 hostname: 'example.org'
  zfs: /dev/da1p3 label0 pool_guid: 2345 hostid: 7890 hostname: 'example.net'
Another thing that might be nice: could the mountroot prompt support a command (a la the existing '.' and '?') to print the current kenv? This would also help with debugging.
The right place for this is likely the lsfs command that we already have to list filesystems, or possibly the list of available devices, though that may be tougher. We underuse the introspection commands in the boot loader. If there are good technical reasons these suggestions won't work (and there may well be), it might be better to have a 'zfs info' command that reports extra information like this to help with debugging. I've done this on a throw-away basis for the kboot stuff I've been working on, but it might make sense to regularize it. Re kenv: all of it (it's quite long), or just the mount-related parts (it already prints the env var that sets root)?