Bug 212258 - bootpool is not imported after reboot on a MBR partitioned drive
Summary: bootpool is not imported after reboot on a MBR partitioned drive
Status: New
Alias: None
Product: Base System
Classification: Unclassified
Component: bin
Version: 11.0-RELEASE
Hardware: amd64 Any
Importance: --- Affects Many People
Assignee: freebsd-bugs (Nobody)
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2016-08-30 14:44 UTC by tsarya
Modified: 2021-05-19 03:22 UTC
CC List: 15 users

See Also:



Description tsarya 2016-08-30 14:44:05 UTC
After a clean install with ZFS root on an MBR-partitioned drive, the "bootpool" becomes exported every time the system is rebooted. This creates an issue because kernel modules such as GELI do not load.

Also, if additional pools are created on other drives, they too become exported after reboot!

The issue reproduces both on a native install and in a VM (FreeBSD 11.0-RC1 and RC2 on VirtualBox).
Comment 1 Petr Fischer 2016-10-15 06:05:18 UTC
+1 on FreeBSD 11-RELEASE, also with the MBR partition scheme; bootpool is exported after every reboot.

There is also another user with this problem:
https://forums.freebsd.org/threads/42980/

So the bug reporter may want to raise the importance.
Comment 2 Allan Jude freebsd_committer freebsd_triage 2016-10-16 21:14:31 UTC
Your zpool.cache file is out of sync.

For both pools, do:

zpool set cachefile=/boot/zfs/zpool.cache <poolname>
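
For example, with the pool names reported here (zroot and bootpool), that would be:

zpool set cachefile=/boot/zfs/zpool.cache zroot
zpool set cachefile=/boot/zfs/zpool.cache bootpool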

What are the contents of your loader.conf?
Comment 3 tsarya 2016-10-17 08:02:02 UTC
This is what I get after a clean install using MBR:

# zpool import
   pool: bootpool
     id: 942740621413601929
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        bootpool    ONLINE
          ada0s1a   ONLINE

Contents of the loader.conf:
# cat /boot/loader.conf
vfs.root.mountfrom="zfs:zroot/ROOT/default"
kern.geom.label.disk_ident.enable="0"
kern.geom.label.gptid.enable="0"
zfs_load="YES"

After importing the bootpool, I did as you mentioned:
# zpool set cachefile=/boot/zfs/zpool.cache zroot
# zpool set cachefile=/boot/zfs/zpool.cache bootpool
# reboot

Same thing; bootpool remains exported after reboot:
# zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
zroot  11.9G   280M  11.7G         -     1%     2%  1.00x  ONLINE  -
Comment 4 Petr Fischer 2016-10-17 14:23:12 UTC
I also checked zpool.cache by:

zdb -CU /boot/zfs/zpool.cache

Everything looks OK: the correct devices are in the cache file, yet bootpool is still exported after reboot.
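
To double-check which cache file each pool is configured to maintain, something like this should work (a sketch; pool names as used in this report, and both pools must currently be imported):

zpool get cachefile zroot bootpool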
Comment 5 sschwarz 2016-10-28 00:46:17 UTC
Likewise, a basic guided install of 11 on MBR root+ZFS yields an exported bootpool:

# uname -a
FreeBSD 11.0-RELEASE-p1 FreeBSD 11.0-RELEASE-p1 #0 r306420: Thu Sep 29 01:43:23 UTC 2016     root@releng2.nyi.freebsd.org:/usr/obj/usr/src/sys/GENERIC  amd64

root@xxxxxxx:~ # zpool status
  pool: zroot
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	zroot       ONLINE       0     0     0
	  mirror-0  ONLINE       0     0     0
	    da4s1d  ONLINE       0     0     0
	    da5s1d  ONLINE       0     0     0
	    da6s1d  ONLINE       0     0     0
	    da7s1d  ONLINE       0     0     0

errors: No known data errors
root@xxxxxxx:~ # zpool import
   pool: bootpool
     id: 1792675910293467778
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

	bootpool    ONLINE
	  mirror-0  ONLINE
	    da4s1a  ONLINE
	    da5s1a  ONLINE
	    da6s1a  ONLINE
	    da7s1a  ONLINE

root@xxxxxxx:~ # ls -l /boot
lrwxr-xr-x  1 root  wheel  13 Sep 28 18:47 /boot -> bootpool/boot
root@xxxxxxx:~ # ls -l /bootpool/
total 0
root@xxxxxxx:~ #
Comment 6 Dan Bright 2017-03-01 23:01:45 UTC
Issue persists when using MBR as described above, on 11.0-RELEASE-p8.
Comment 7 Leandro 2017-06-04 14:59:27 UTC
Issue persists with a default ZFS (MBR) installation without encryption.

FreeBSD-11.0-RELEASE-p9
Comment 8 Gautam Mani 2017-08-23 03:35:23 UTC
The issue also exists with a fresh 11.1-RELEASE install on a laptop with a single disk, using ZFS and MBR partitioning without encryption. There are many side effects due to this issue:

1) wifi doesn't work, because the firmware cannot be loaded; I am using iwn in this case
2) the tmpfs module cannot be loaded, in case you use that
3) I cannot use another wifi adapter either; the D-Link device is a run(4) device

There might be more, but exporting the bootpool should not be the default behavior of the installer.

I followed the guidance here:
https://forums.freebsd.org/threads/42980/#post-239065

As a workaround, the entries that need to be added to loader.conf are:

zpool_cache_load="YES"
zpool_cache_type="/boot/zfs/zpool.cache"
zpool_cache_name="/boot/zfs/zpool.cache"
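
A minimal sketch of applying this workaround on an affected system (assuming bootpool is currently exported and loader.conf does not already contain these lines):

# zpool import -f bootpool
# zpool set cachefile=/boot/zfs/zpool.cache bootpool
# cat >> /boot/loader.conf <<'EOF'
zpool_cache_load="YES"
zpool_cache_type="/boot/zfs/zpool.cache"
zpool_cache_name="/boot/zfs/zpool.cache"
EOF
# reboot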
Comment 9 Andriy Gapon freebsd_committer freebsd_triage 2017-08-23 06:46:24 UTC
I would like to clarify something. I think that the setup reported here is not a basic setup in the true meaning of the word. A basic setup would have only one pool. The reports describe a two-pool configuration with zroot and bootpool. That must be because some sort of disk encryption option is enabled.
Comment 10 Gautam Mani 2017-08-23 07:37:33 UTC
No. FWIW, I assure you my case is not. If you need something specific from the system as evidence, please ask. I can send it later when I am able to access it.

As I indicated, mine was a default installation of FreeBSD using full ZFS and MBR partitioning.
Comment 11 Gautam Mani 2017-08-23 07:53:57 UTC
And from brief checking, it looks to me that it clearly doesn't have anything to do with encryption.

From:
https://github.com/freebsd/freebsd/blob/stable/11/usr.sbin/bsdinstall/scripts/zfsboot

At line 83:
#
# Create a separate boot pool?
# NB: Automatically set when using geli(8) or MBR
#
: ${ZFSBOOT_BOOT_POOL=}

And at line 975, in the MBR selection case:
#
		# Always prepare a boot pool on MBR
		# Do not align this partition, there must not be a gap
		#
		ZFSBOOT_BOOT_POOL=1

So it seems that bootpool is created in the MBR case irrespective of encryption. But I need to dig into this some more.
Comment 12 Andriy Gapon freebsd_committer freebsd_triage 2017-08-23 12:09:17 UTC
(In reply to Gautam Mani from comment #11)
I see.  I was not aware that the MBR installation used a two-pool solution.  Probably, that's to work around some limitations of the MBR scheme.
Comment 13 Gautam Mani 2017-08-24 04:12:47 UTC
I am not a ZFS expert (I just started using it a few days back), but here is the zpool history showing the commands run by the installer.

History for 'bootpool':
2017-08-22.15:08:09 zpool create -o altroot=/mnt -m /bootpool -f bootpool ada0s1a
2017-08-22.15:08:14 zpool export bootpool
2017-08-22.15:09:28 zpool import -o altroot=/mnt -N bootpool

The commands below this point were run manually.
2017-08-22.10:36:22 zpool import -f bootpool
...

History for 'zroot':
2017-08-22.15:08:10 zpool create -o altroot=/mnt -O compress=lz4 -O atime=off -m none -f zroot ada0s1d
2017-08-22.15:08:10 zfs create -o mountpoint=none zroot/ROOT
2017-08-22.15:08:10 zfs create -o mountpoint=/ zroot/ROOT/default
2017-08-22.15:08:10 zfs create -o mountpoint=/tmp -o exec=on -o setuid=off zroot/tmp
2017-08-22.15:08:11 zfs create -o mountpoint=/usr -o canmount=off zroot/usr
2017-08-22.15:08:11 zfs create zroot/usr/home
2017-08-22.15:08:11 zfs create -o setuid=off zroot/usr/ports
2017-08-22.15:08:11 zfs create zroot/usr/src
2017-08-22.15:08:11 zfs create -o mountpoint=/var -o canmount=off zroot/var
2017-08-22.15:08:12 zfs create -o exec=off -o setuid=off zroot/var/audit
2017-08-22.15:08:12 zfs create -o exec=off -o setuid=off zroot/var/crash
2017-08-22.15:08:12 zfs create -o exec=off -o setuid=off zroot/var/log
2017-08-22.15:08:13 zfs create -o atime=on zroot/var/mail
2017-08-22.15:08:13 zfs create -o setuid=off zroot/var/tmp
2017-08-22.15:08:13 zfs set mountpoint=/zroot zroot
2017-08-22.15:08:14 zpool set bootfs=zroot/ROOT/default zroot
2017-08-22.15:08:14 zpool export zroot
2017-08-22.15:08:57 zpool import -o altroot=/mnt zroot
2017-08-22.15:09:28 zpool set cachefile=/mnt/boot/zfs/zpool.cache zroot
2017-08-22.15:09:33 zfs set canmount=noauto zroot/ROOT/default
...
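
Note that in the history above, cachefile is set only for zroot; no corresponding "zpool set cachefile" for bootpool appears in the portion shown. The equivalent command for bootpool (hypothetical, not run by the installer, shown only for comparison) would be:

zpool set cachefile=/mnt/boot/zfs/zpool.cache bootpool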
Comment 14 Daniel Eischen freebsd_committer freebsd_triage 2017-09-04 21:32:15 UTC
This is still a problem on:

FreeBSD fooname 11.1-STABLE FreeBSD 11.1-STABLE #0 r322788: Tue Aug 22 15:32:10 UTC 2017     root@releng2.nyi.freebsd.org:/usr/obj/usr/src/sys/GENERIC  amd64

MBR, two ZFS partitions as installed by the default ZFS MBR setup on an 11.1-STABLE snapshot.

$ gpart show ada0
=>       63  488390562  ada0  MBR  (233G)
         63          1        - free -  (512B)
         64  488390560     1  freebsd  [active]  (233G)
  488390624          1        - free -  (512B)

$ gpart show ada0s1
=>        0  488390560  ada0s1  BSD  (233G)
          0    4194304       1  freebsd-zfs  (2.0G)
    4194304    4194304       2  freebsd-swap  (2.0G)
    8388608  480001944       4  freebsd-zfs  (229G)
  488390552          8          - free -  (4.0K)
Comment 15 ncrogers 2019-02-15 22:05:26 UTC
FWIW I ran into this on a 12.0-RELEASE install using MBR.
Comment 16 James B. Byrne 2019-05-24 16:25:57 UTC
This happened today with a fresh install of 12.0-RELEASE to clean HDDs.

gptzfsboot: No ZFS pools located, can't boot

The zpool is configured with encryption, but we do not get that far. I previously ran into this on another host when we upgraded from 11 to 12. The workaround there was to symbolically link /boot to /bootpool/boot.
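
A rough sketch of that symlink workaround, assuming bootpool can be imported with its mountpoint at /bootpool and /boot is not already a populated directory:

# zpool import -f bootpool
# ln -s bootpool/boot /boot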

These entries are in loader.conf:

zpool_cache_load="YES"
zpool_cache_type="/boot/zfs/zpool.cache"
zpool_cache_name="/boot/zfs/zpool.cache"

But these have no effect since /boot cannot be read.
Comment 17 Allan Jude freebsd_committer freebsd_triage 2019-05-26 02:17:32 UTC
(In reply to James B. Byrne from comment #16)
James, your issue seems totally unrelated to the rest of this PR.

I am guessing your issue is that you do not have the 'GELI Boot' flag set on your encrypted partitions, so you are not being prompted to decrypt them in the loader.

Try: geli configure -g /dev/disk-partition-here
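
For example, a sketch assuming the encrypted slice is ada0s1d (substitute the actual partition name):

# geli configure -g /dev/ada0s1d
# geli dump /dev/ada0s1d | grep -i flags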
Comment 18 David Christensen 2020-06-03 06:25:34 UTC
Same problem on FreeBSD 12.1-RELEASE amd64 on VirtualBox 6.1 on Debian 9.12:

https://lists.freebsd.org/pipermail/freebsd-questions/2020-June/289793.html
Comment 19 David Christensen 2020-06-04 03:42:30 UTC
(In reply to David Christensen from comment #18)

Problem persists after upgrading system:

2020-06-03 20:34:29 toor@vf1 ~
# zpool list
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
vf1_zroot  3.75G   690M  3.08G        -         -     3%    17%  1.00x  ONLINE  -

2020-06-03 20:34:31 toor@vf1 ~
# freebsd-version && uname -a
12.1-RELEASE-p5
FreeBSD vf1.tracy.holgerdanske.com 12.1-RELEASE-p5 FreeBSD 12.1-RELEASE-p5 GENERIC  amd64

2020-06-03 20:36:54 toor@vf1 ~
# gpart show ada0 ada0s1
=>      63  14673585  ada0  MBR  (7.0G)
        63         1        - free -  (512B)
        64  14673584     1  freebsd  [active]  (7.0G)

=>       0  14673584  ada0s1  BSD  (7.0G)
         0   4194304       1  freebsd-zfs  (2.0G)
   4194304   2097152       2  freebsd-swap  (1.0G)
   6291456   8382128       4  freebsd-zfs  (4.0G)

2020-06-03 20:37:32 toor@vf1 ~
# zpool import
   pool: bootpool
     id: 13757577973895316021
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

	bootpool    ONLINE
	  ada0s1a   ONLINE

2020-06-03 20:37:38 toor@vf1 ~
# zpool import -f bootpool

2020-06-03 20:38:02 toor@vf1 ~
# zpool list
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
bootpool   1.88G   209M  1.67G        -         -     0%    10%  1.00x  ONLINE  -
vf1_zroot  3.75G   690M  3.08G        -         -     4%    17%  1.00x  ONLINE  -

2020-06-03 20:38:14 toor@vf1 ~
# zpool status
  pool: bootpool
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	bootpool    ONLINE       0     0     0
	  ada0s1a   ONLINE       0     0     0

errors: No known data errors

  pool: vf1_zroot
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	vf1_zroot   ONLINE       0     0     0
	  ada0s1d   ONLINE       0     0     0

errors: No known data errors