After a clean install with ZFS root on an MBR-partitioned drive, the "bootpool" becomes exported every time the system is rebooted. This creates an issue because kernel modules such as GELI do not load.
Additionally, if pools are created on other drives, they too become exported after a reboot!
The issue is reproducible both on a native install and in a VM. (FreeBSD 11.0-RC1 and RC2 on VirtualBox)
+1 on FreeBSD 11.0-RELEASE, also with the MBR partition scheme; bootpool is likewise exported on every reboot.
Another user has also reported this problem:
So the bug reporter may want to raise the importance of this report.
Your zpool.cache file is out of sync. For both pools, run:
zpool set cachefile=/boot/zfs/zpool.cache <poolname>
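Spelled out for the two pools in this report (a sketch, assuming the installer's default pool names zroot and bootpool):

```shell
# Sketch, assuming the default installer pool names "zroot" and "bootpool".
# Point both pools at the cache file that is consulted at boot:
zpool set cachefile=/boot/zfs/zpool.cache zroot
zpool set cachefile=/boot/zfs/zpool.cache bootpool

# Check that both pools are now recorded in the cache file:
zdb -CU /boot/zfs/zpool.cache | grep "name:"
```

Both pool names should show up in the zdb output; this needs root on the affected system.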
What are the contents of your loader.conf?
This is what I get after a clean install using MBR:
# zpool import
action: The pool can be imported using its name or numeric identifier.
Contents of the loader.conf:
# cat /boot/loader.conf
After importing the bootpool, I did as you mentioned:
# zpool set cachefile=/boot/zfs/zpool.cache zroot
# zpool set cachefile=/boot/zfs/zpool.cache bootpool
Same thing, bootpool remains exported after reboot
# zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
zroot  11.9G   280M  11.7G         -    1%   2%  1.00x  ONLINE        -
I also checked zpool.cache with:
zdb -CU /boot/zfs/zpool.cache
Everything looks OK: the correct devices are listed in the cache file, yet bootpool is still exported after reboot.
Likewise, a basic guided install of 11 with root-on-ZFS on MBR yields an exported bootpool:
# uname -a
FreeBSD 11.0-RELEASE-p1 FreeBSD 11.0-RELEASE-p1 #0 r306420: Thu Sep 29 01:43:23 UTC 2016 email@example.com:/usr/obj/usr/src/sys/GENERIC amd64
root@xxxxxxx:~ # zpool status
scan: none requested
        NAME        STATE     READ WRITE CKSUM
        zroot       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            da4s1d  ONLINE       0     0     0
            da5s1d  ONLINE       0     0     0
            da6s1d  ONLINE       0     0     0
            da7s1d  ONLINE       0     0     0
errors: No known data errors
root@xxxxxxx:~ # zpool import
action: The pool can be imported using its name or numeric identifier.
root@xxxxxxx:~ # ls -l /boot
lrwxr-xr-x 1 root wheel 13 Sep 28 18:47 /boot -> bootpool/boot
root@xxxxxxx:~ # ls -l /bootpool/
Issue persists when using MBR as described below, on 11.0-RELEASE-p8.
The issue persists with a default ZFS (MBR) installation without encryption.
The issue also exists with a fresh 11.1-RELEASE install on a single-disk laptop, using ZFS and MBR partitioning without encryption. There are many side effects due to this issue:
1) Wi-Fi doesn't work, because the firmware cannot be loaded (I am using iwn in this case).
2) The tmpfs module cannot be loaded, in case you use that.
3) I cannot use another Wi-Fi adapter either; the D-Link device needs the run driver.
There may be more, but exporting the bootpool should not be the installer's default behavior.
I followed the guidance here:
The entries that need to be added to loader.conf are:
as a workaround.
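Where the loader.conf entries alone are not enough, another workaround is to re-import the pool from /etc/rc.local at boot. This is only a sketch: it assumes the pool is named bootpool, and it can only help after the kernel is up (restoring /boot for kld and firmware loading), not the loader itself.

```shell
#!/bin/sh
# /etc/rc.local -- workaround sketch, assuming the pool name "bootpool".
# Re-import bootpool if it is not currently imported, so the dataset
# backing /boot is available for module and firmware loading at runtime.
if ! zpool list bootpool >/dev/null 2>&1; then
    zpool import -f bootpool
fi
```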
I would like to clarify something. I think the setup reported here is not a basic setup in the true sense of the word: a basic setup would have only one pool, while these reports describe a two-pool configuration with zroot and bootpool. That must be because some disk-encryption option is enabled.
No. FWIW, I assure you my case is not. If you need specific evidence from the system, please ask; I can send it later when I am able to access it.
As I indicated, mine was a default FreeBSD installation using full ZFS and MBR partitioning.
And from a brief check, it looks to me that it clearly doesn't have anything to do with encryption.
# Create a separate boot pool?
# NB: Automatically set when using geli(8) or MBR
And at line 975, in the MBR selection case:
# Always prepare a boot pool on MBR
# Do not align this partition, there must not be a gap
So it seems that the bootpool is created in the MBR case irrespective of encryption, but I need to dig into this some more.
(In reply to Gautam Mani from comment #11)
I see. I was not aware that the MBR installation used a two-pool solution. Probably, that's to work around some limitations of the MBR scheme.
I am not a ZFS expert (I started using it only a few days ago), but here is the zpool history, showing the commands run by the installer.
History for 'bootpool':
2017-08-22.15:08:09 zpool create -o altroot=/mnt -m /bootpool -f bootpool ada0s1a
2017-08-22.15:08:14 zpool export bootpool
2017-08-22.15:09:28 zpool import -o altroot=/mnt -N bootpool
The entries below this point were run manually.
2017-08-22.10:36:22 zpool import -f bootpool
History for 'zroot':
2017-08-22.15:08:10 zpool create -o altroot=/mnt -O compress=lz4 -O atime=off -m none -f zroot ada0s1d
2017-08-22.15:08:10 zfs create -o mountpoint=none zroot/ROOT
2017-08-22.15:08:10 zfs create -o mountpoint=/ zroot/ROOT/default
2017-08-22.15:08:10 zfs create -o mountpoint=/tmp -o exec=on -o setuid=off zroot/tmp
2017-08-22.15:08:11 zfs create -o mountpoint=/usr -o canmount=off zroot/usr
2017-08-22.15:08:11 zfs create zroot/usr/home
2017-08-22.15:08:11 zfs create -o setuid=off zroot/usr/ports
2017-08-22.15:08:11 zfs create zroot/usr/src
2017-08-22.15:08:11 zfs create -o mountpoint=/var -o canmount=off zroot/var
2017-08-22.15:08:12 zfs create -o exec=off -o setuid=off zroot/var/audit
2017-08-22.15:08:12 zfs create -o exec=off -o setuid=off zroot/var/crash
2017-08-22.15:08:12 zfs create -o exec=off -o setuid=off zroot/var/log
2017-08-22.15:08:13 zfs create -o atime=on zroot/var/mail
2017-08-22.15:08:13 zfs create -o setuid=off zroot/var/tmp
2017-08-22.15:08:13 zfs set mountpoint=/zroot zroot
2017-08-22.15:08:14 zpool set bootfs=zroot/ROOT/default zroot
2017-08-22.15:08:14 zpool export zroot
2017-08-22.15:08:57 zpool import -o altroot=/mnt zroot
2017-08-22.15:09:28 zpool set cachefile=/mnt/boot/zfs/zpool.cache zroot
2017-08-22.15:09:33 zfs set canmount=noauto zroot/ROOT/default
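Note that in the history above the installer sets cachefile for zroot (at 15:09:28) but never repeats that step for bootpool after bootpool's final import, which would explain why bootpool is missing from zpool.cache on first boot. A sketch of the apparently missing installer step (the path assumes the installer's /mnt altroot, as seen in the history):

```shell
# Sketch: the step the installer appears to skip for bootpool.
# To be run in the installer environment, after
# "zpool import -o altroot=/mnt -N bootpool":
zpool set cachefile=/mnt/boot/zfs/zpool.cache bootpool
```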
This is still a problem on:
FreeBSD fooname 11.1-STABLE FreeBSD 11.1-STABLE #0 r322788: Tue Aug 22 15:32:10 UTC 2017 firstname.lastname@example.org:/usr/obj/usr/src/sys/GENERIC amd64
MBR, with the two ZFS partitions created by the default ZFS-on-MBR setup of an 11.1-STABLE snapshot.
$ gpart show ada0
=>        63  488390562  ada0  MBR  (233G)
          63          1        - free -  (512B)
          64  488390560     1  freebsd  [active]  (233G)
   488390624          1        - free -  (512B)
$ gpart show ada0s1
=>         0  488390560  ada0s1  BSD  (233G)
           0    4194304       1  freebsd-zfs   (2.0G)
     4194304    4194304       2  freebsd-swap  (2.0G)
     8388608  480001944       4  freebsd-zfs   (229G)
   488390552          8          - free -  (4.0K)
FWIW I ran into this on a 12.0-RELEASE install using MBR.
This has happened today with a fresh install of RELEASE 12.0 to clean hdds.
gptzfsboot: No ZFS pools located, can't boot
Zpool is configured with encryption, but we do not get that far. I previously ran into this on another host when we upgraded from 11 to 12; the workaround there was to symlink /boot to /bootpool/boot.
These entries are in loader.conf
But these have no effect since /boot cannot be read.
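The /boot symlink workaround mentioned above can be sketched as follows (assumptions: the pool is named bootpool, and the system is booted into single-user or rescue mode so that /boot can be replaced):

```shell
# Workaround sketch, assuming the pool name "bootpool".
# Import the boot pool, then point /boot at the dataset that actually
# holds the kernel, modules, and loader.conf.
zpool import -f bootpool
[ -e /boot ] && mv /boot /boot.bak   # preserve the current /boot, just in case
ln -s bootpool/boot /boot
```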
(In reply to James B. Byrne from comment #16)
James, Your issue seems totally unrelated to the rest of this PR.
I am guessing your issue is that you do not have the 'GELI Boot' flag set on your encrypted partitions, so the loader does not prompt you to decrypt them.
Try: geli configure -g /dev/disk-partition-here
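To apply and verify the flag (a sketch; ada0s1d below is a placeholder, substitute your actual encrypted provider):

```shell
# Sketch, with a placeholder provider name.
geli configure -g /dev/ada0s1d
# Dump the GELI metadata; BOOT should now appear in the flags field.
geli dump /dev/ada0s1d | grep -i flags
```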