The system has many ZFS pools mounted. By mistake, the system was rebooted while all disks except the root disk were powered off. On reboot there were messages like "ZFS pool pool_a is not available", which is expected. However, after all disks were powered on again, the system now boots without those pools mounted, and "zpool list" doesn't show them. A single reboot with the disks powered off causes them to disappear from all future boots. This doesn't look like reasonable behavior.
zdb --config --cachefile=/etc/zfs/zpool.cache

If the details of the required pool are not within the cachefile, then I should not expect an automated import when the system enters multi-user mode.

From <https://openzfs.github.io/openzfs-docs/Project%20and%20Community/FAQ.html#the-etc-zfs-zpool-cache-file>:

> … When a pool is not listed in the cache file it will need to be detected
> and imported using the zpool import -d /dev/disk/by-id command.

I'm unsure about that. Try a simple import (without the -d option), then restart the system.
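Concretely, the check and the suggested import might look like this (a sketch only: pool_a is the pool name from the original report, and the commands need root on the affected system):

```sh
# Show which pool configurations the cachefile currently holds.
zdb --config --cachefile=/etc/zfs/zpool.cache

# Scan attached devices for importable pools (lists candidates, imports nothing).
zpool import

# Import the missing pool by name, then confirm it is visible again.
zpool import pool_a
zpool list
```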
(In reply to Graham Perrin from comment #1)

… I mean, checking for the configuration within zpool.cache might be a good first step towards a diagnosis. Is the pool present in the file?

Which version of FreeBSD, exactly?

----

Side note: on my 14.0-CURRENT machine I found an archaic (March 2022) /boot/zfs/zpool.cache that contained the config for a previously used boot pool. Also present, and correct (true to what was recently imported): /etc/zfs/zpool.cache
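For comparison, the two cachefiles can be inspected side by side. Something like this should show the configurations each one records (sketch; requires root):

```sh
# Config(s) recorded in the boot-time cachefile (the archaic one in my case):
zdb --config --cachefile=/boot/zfs/zpool.cache

# Config(s) recorded in the live cachefile:
zdb --config --cachefile=/etc/zfs/zpool.cache
```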
(In reply to Graham Perrin from comment #1)

'import' worked. It is FreeBSD 13.2-STABLE.

Thank you, Graham!
(In reply to Yuri Victorovich from comment #3)

Thanks … I'm not entirely sure that it worked as intended.

(I compared with a pool on a mobile hard disk drive. If I recall correctly: imported, OS shut down, USB disconnected, OS started, OS restarted, the unavailable pool was still in the cachefile.)

If the symptoms recur, maybe reopen this report and we can aim to make things reproducible. Thanks
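If anyone retests, a rough per-boot check might be (a sketch; assumes the pool name is pool_a as in the original report, and that zdb errors out when the named pool is absent from the given cachefile):

```sh
# After each boot: does the cachefile still carry a config for the pool?
zdb --config --cachefile=/etc/zfs/zpool.cache pool_a > /dev/null 2>&1 \
    && echo "pool_a still in cachefile" \
    || echo "pool_a dropped from cachefile"

# And is the pool actually imported?
zpool list pool_a
```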