In a bhyve guest with multiple BEs, "bectl activate -t testbe" does not boot the "testbe" BE on the next boot; it just boots the previous BE. The regular "bectl activate testbe" works as expected, though. I've tested this on a guest loaded with bhyveload.
(In reply to Victor Sudakov from comment #0) Indeed, bhyve support is a little more complicated. Currently, only the BIOS loaders can do the bits needed for "activate -t"; notably, neither UEFI nor userboot can. I do not have a solid idea of whether it would be painful to add the necessary support to userboot.
UEFI support for -t is even more important, IMHO. More and more installations (including hypervisors) are abandoning BIOS for UEFI. If UEFI could do it, I could run FreeBSD guests in bhyve in UEFI mode.
Yes, under UEFI "activate -t" does not work either.
CC tsoome@, because I can't recall if his work covers userboot, too.
I have a similar issue, but without bhyve. The system is running in a virtual machine (KVM under Arch GNU/Linux). I have the following BEs:

1. default ... releng/12.2 (upgraded from 12.0 -> releng/12.1 -> releng/12.2)
2. fbsd13t ... HEAD @ r364739 (before the OpenZFS import/switch)
3. head ... HEAD @ main-c255645-g74bd20769706

In BEs 1 and 2, 'bectl activate -t' (and zfsbootcfg) works. In BE 3 I have to run bectl without "-t" to activate the BE. Can anyone confirm this? I can test with a clean install (latest HEAD snapshot) later. Has this been addressed yet, and does this bug occur with OpenZFS again?
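For reference, the one-shot activation that "bectl activate -t" performs can also be requested directly with zfsbootcfg(8), which writes a one-time boot directive that gptzfsboot consumes (and clears) on the next boot. A minimal sketch, assuming a hypothetical pool "zroot" and BE "fbsd13t" (substitute your own names); the command is built as a string here so the sketch stays inert:

```shell
# Hypothetical pool and BE names for illustration only.
pool=zroot
be=fbsd13t
# The directive uses the same "zfs:<dataset>:" syntax as the loader's
# currdev setting; run the resulting command by hand as root.
cmd="zfsbootcfg \"zfs:${pool}/ROOT/${be}:\""
echo "${cmd}"
```

Note this path only works with the BIOS gptzfsboot stage, which is consistent with the UEFI/userboot limitation described in comment #1.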
1. Fresh AUTO-ZFS install from latest CURRENT ISO (FreeBSD-13.0-CURRENT-amd64-20210107-f2b794e1e90-255641-disc1.iso): 'bectl activate -t' works.

2. Fresh AUTO-ZFS install from latest STABLE ISO (FreeBSD-12.2-STABLE-amd64-20201231-r368787-disc1.iso): 'bectl activate -t' works.

3. After upgrading the STABLE VM (from 2.) to HEAD (main-c255827-gdcdad299479e), 'bectl activate -t' no longer works:

# bectl list
BE      Active Mountpoint Space Created
default NR     /          775M  2021-01-10 12:47
# bectl create test
# bectl activate -t test
Successfully activated boot environment test for next boot
# bectl list
BE      Active Mountpoint Space Created
default NR     /          775M  2021-01-10 12:47
test    T      -          8K    2021-01-10 14:10
# shutdown -r now
# bectl list
BE      Active Mountpoint Space Created
default NR     /          775M  2021-01-10 12:47
test    -      -          308K  2021-01-10 14:10

'bectl activate' works:

# bectl activate test
Successfully activated boot environment test
# bectl list
BE      Active Mountpoint Space Created
default N      /          308K  2021-01-10 12:47
test    R      -          775M  2021-01-10 14:10
# shutdown -r now
# bectl list
BE      Active Mountpoint Space Created
default -      -          588K  2021-01-10 12:47
test    NR     /          775M  2021-01-10 14:10
Sorry, I forgot to add: 'zpool upgrade -a' and 'gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 vtbd0' resolve the issue. But then, of course, the old 12.2-STABLE VM is no longer bootable:

Mounting from zfs:zroot/ROOT/default failed with error 45: retrying for 3 more seconds
(In reply to Herbert J. Skuhra from comment #7) Do not upgrade the pool; just install gptzfsboot.
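To spell that out: reinstall only the boot code from the new world and skip 'zpool upgrade', so old BEs remain bootable. A sketch assuming the virtio disk vtbd0 with the freebsd-boot partition at index 1, as in comment #6 (verify with 'gpart show' first); the command is built as a string so the sketch is inert:

```shell
# Disk and partition index taken from the earlier comment; they are
# assumptions about your layout, not universal values.
disk=vtbd0
idx=1
# Reinstall the protective MBR and the ZFS stage-two loader only.
# Deliberately NO "zpool upgrade -a" here.
cmd="gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i ${idx} ${disk}"
echo "${cmd}"
```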
(In reply to Toomas Soome from comment #8) Thanks. After installing only gptzfsboot from HEAD: - Both BEs (12.2-STABLE and HEAD) are bootable. - But now 'bectl activate -t' only works in the new BE (HEAD).
(In reply to Herbert J. Skuhra from comment #9) Yes, because the updated implementation uses a more flexible data structure (an nvlist) in the pool label. Unfortunately this makes jumping back and forth between the old and new implementations a bit rocky, but there is not much we can do about it.
^Triage: assign to mailing list.
^Triage: clear stale flags. To submitter: is this aging PR still relevant?