Bug 262461 - bsdinstall: After manual partitioning the disk, installer installs "ZFS on root" directly on the pool, not under separate ZFS datasets
Status: Open
Alias: None
Product: Base System
Classification: Unclassified
Component: misc
Version: CURRENT
Hardware: amd64 Any
Importance: --- Affects Some People
Assignee: freebsd-bugs (Nobody)
URL:
Keywords: install
Depends on:
Blocks:
 
Reported: 2022-03-10 10:08 UTC by parv
Modified: 2023-07-29 03:07 UTC
CC List: 3 users

See Also:


Description parv 2022-03-10 10:08:01 UTC
My issue with "Auto" mode of "ZFS on root" in the installer (at least of 13.0-RELEASE*) is that the pools takes over the whole disk, minus the space for EFI & swap. So before installing 14-CURRENT snapshot of 2022-03-03, I chose to manually partition the disk.

The installer -- from both snapshots -- installs "ZFS on root" directly on the pool after manual partitioning of the disk. There are no separate ZFS datasets similar to what the "Auto" method would have created.

I had also created a separate partition for "/var". Again, everything was installed on the pool; there were no separate datasets as there would have been in "Auto" mode.
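
For comparison, the guided "Auto" mode creates a dataset hierarchy roughly like the sketch below (reconstructed from memory of a default 13.x install with the default pool name "zroot"; the exact datasets and properties come from bsdinstall's zfsboot script and may differ between releases). None of this happens after manual partitioning:

  # zfs create -o mountpoint=none zroot/ROOT
  # zfs create -o mountpoint=/ zroot/ROOT/default
  # zfs create -o mountpoint=/tmp -o exec=on -o setuid=off zroot/tmp
  # zfs create -o mountpoint=/usr -o canmount=off zroot/usr
  # zfs create zroot/usr/home
  # zfs create -o setuid=off zroot/usr/ports
  # zfs create zroot/usr/src
  # zfs create -o mountpoint=/var -o canmount=off zroot/var
  # zfs create -o exec=off -o setuid=off zroot/var/audit
  # zfs create -o exec=off -o setuid=off zroot/var/crash
  # zfs create -o exec=off -o setuid=off zroot/var/log
  # zfs create -o atime=on zroot/var/mail
  # zfs create -o setuid=off zroot/var/tmp
  # zpool set bootfs=zroot/ROOT/default zroot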

(The installer from a 13-STABLE snapshot of the same vintage resulted in the same issue.)

This results in not being able to use "bectl(8)" to manage boot environments ...

  # bectl list
  libbe_init("") failed.


"root" is the pool on 35 GB partition at /dev/nvd0p4 where base system had been installed directly using the 14-CURRENT snapshot. Disk & ZFS layout after install (layout & content of "alt_base", "home", "misc" pools have changed since but they do not matter for the issue)  ...

+zsh:9> gpart show /dev/nvd0
=>       40  976773088  nvd0  GPT  (466G)
         40     409600        - free -  (200M)
     409640   33554432     2  freebsd-swap  (16G)
   33964072   73400320     3  freebsd-zfs  (35G)
  107364392     532480     1  efi  (260M)
  107896872   72867840     4  freebsd-zfs  (35G)
  180764712   44040192     5  freebsd-zfs  (21G)
  224804904  524288000     6  freebsd-zfs  (250G)
  749092904  227680224     7  freebsd-zfs  (109G)

+zsh:9> gpart show -l /dev/nvd0
=>       40  976773088  nvd0  GPT  (466G)
         40     409600        - free -  (200M)
     409640   33554432     2  swap0  (16G)
   33964072   73400320     3  altbase0  (35G)
  107364392     532480     1  (null)  (260M)
  107896872   72867840     4  root0  (35G)
  180764712   44040192     5  varbase  (21G)
  224804904  524288000     6  home0  (250G)
  749092904  227680224     7  misc0  (109G)

+zsh:9> zfs list
NAME                   USED  AVAIL     REFER  MOUNTPOINT
alt_base               364K  33.4G       24K  /alt-root
alt_base/backup         24K  33.4G       24K  /alt-root/backup
home                  3.48G   237G       27K  /home
home/aa               18.9M   237G     18.9M  /home/aa
home/freebsd-ports    1.32G   237G     1.32G  /home/freebsd-ports
home/freebsd-src      1.75G   237G     1.75G  /home/freebsd-src
home/pkg-save          188M   237G      188M  /home/pkg-save
home/ports-distfiles   216M   237G      216M  /home/ports-distfiles
misc                   389M   104G      389M  /misc
misc/build-ports        24K   104G       24K  /misc/build-ports
misc/build-world        24K   104G       24K  /misc/build-world
misc/ccache             24K   104G       24K  /misc/ccache
misc/ccache-user        24K   104G       24K  /misc/ccache-user
root                  2.46G  31.0G     2.46G  none
temp                   475K   225G       24K  /temp
temp/log               115K   225G      115K  /temp/log
var                    663M  19.2G      662M  /var
var/crash               24K  19.2G       24K  /var/crash
var/log               72.5K  19.2G     72.5K  /var/log
var/mail                31K  19.2G       31K  /var/mail

+zsh:9> zpool list -v
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
alt_base   34.5G   364K  34.5G        -         -     0%     0%  1.00x    ONLINE  -
  nvd0p3   34.5G   364K  34.5G        -         -     0%  0.00%      -    ONLINE
home        248G  3.48G   245G        -         -     0%     1%  1.00x    ONLINE  -
  nvd0p6    248G  3.48G   245G        -         -     0%  1.40%      -    ONLINE
misc        108G   389M   108G        -         -     0%     0%  1.00x    ONLINE  -
  nvd0p7    108G   389M   108G        -         -     0%  0.35%      -    ONLINE
root       34.5G  2.46G  32.0G        -         -     0%     7%  1.00x    ONLINE  -
  nvd0p4   34.5G  2.46G  32.0G        -         -     0%  7.13%      -    ONLINE
temp        232G   475K   232G        -         -     0%     0%  1.00x    ONLINE  -
  da0       232G   475K   232G        -         -     0%  0.00%      -    ONLINE
var        20.5G   663M  19.9G        -         -     0%     3%  1.00x    ONLINE  -
  nvd0p5   20.5G   663M  19.9G        -         -     0%  3.15%      -    ONLINE
Comment 1 parv 2022-03-12 02:17:48 UTC
The settings for which device or ZFS dataset to boot from, and which root to mount, are buried so deeply somewhere that FreeBSD still boots from the "root" pool even after setting the "bootfs" property on a new, different pool ("remake") and listing its child dataset ("remake/newroot/base") as the value of "vfs.root.mountfrom" in "/boot/loader.conf". :-(
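
For what it's worth, a diagnostic sketch of where to look to see which root the kernel actually ended up with: "kenv" shows what the loader handed to the kernel, "zpool get bootfs" shows which dataset each pool marks as bootable, and an entry for "/" in the /etc/fstab of whatever gets mounted as root can also come into play ...

  # kenv vfs.root.mountfrom
  # zpool get -H -o name,value bootfs
  # awk '$2 == "/"' /etc/fstab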
Comment 2 parv 2022-03-12 02:35:13 UTC
... forgot to mention that "/" was mounted from "root" pool as before.

Current setup ...

+ cat /boot/loader.conf
cryptodev_load="YES"
zfs_load="YES"

# For Xorg under UEFI on FreeBSD 14.
hw.syscons.disable=1

# To be able to set clock for individual cores, not just the whole CPU via
# Intel Speed Shift.
machdep.hwpstate_pkg_ctrl=0

# File system in RAM.
tmpfs_load="YES"

vfs.root.mountfrom="zfs:remake/newroot/base"

+ zpool list -o name,size,bootfs
NAME     SIZE  BOOTFS
build    108G  -
misc     248G  -
remake  34.5G  remake/newroot/base
root    34.5G  -
var     20.5G  -

+ zfs list -r -o name,avail,mounted,canmount,mountpoint root remake
NAME                      AVAIL  MOUNTED  CANMOUNT  MOUNTPOINT
remake                    30.8G  yes      on        /remake
remake/backup             30.8G  no       noauto    /remake/backup
remake/newroot            30.8G  no       on        none
remake/newroot/base       30.8G  yes      on        /newroot/base
remake/newroot/log        30.8G  yes      on        /newroot/log
remake/newroot/usr-local  30.8G  yes      on        /newroot/usr/local
root                      29.3G  yes      on        /oldroot

+ df -h /
Filesystem    Size    Used   Avail Capacity  Mounted on
root           33G    3.5G     29G    11%    /


+ mount -l -t zfs | egrep '(root|remake)'
root on / (zfs, local, noatime, nfsv4acls)
remake/newroot/log on /newroot/log (zfs, local, noatime, nfsv4acls)
remake/newroot/base on /newroot/base (zfs, local, noatime, nfsv4acls)
remake on /remake (zfs, local, noatime, nfsv4acls)
remake/newroot/usr-local on /newroot/usr/local (zfs, local, noatime, nfsv4acls)
Comment 3 parv 2022-03-15 01:45:57 UTC
For one reason or another, I was able to reshape the datasets of the "remake" pool (populated from the "root" pool via "cp") into a saner "ZFS on root" setup. I also got rid of the "root" pool that had been created by the installer.

After moving "remake/newroot/usr-local" from "remake" pool to another pool without "bootfs" property & moving it again back to "remake" pool, "bectl" does not list as one of the boot environments anymore.

I also solved, I think, the mystery of a ZFS dataset being mounted at "/" even though its assigned mountpoint property is different: it was specified in {root,remake/newroot/base}:/etc/fstab. Removing that entry allowed the "remake/newroot/base" dataset to be mounted at "/".
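
(A hypothetical example of the kind of fstab line that can do this -- the exact original entry is not quoted above:)

  root    /    zfs    rw,noatime    0    0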


Even though I found workarounds to get a saner "ZFS on root" setup out of the original install, which had put everything directly in the "root" pool (no dataset hierarchy is created when installing in "Manual" mode onto a particular partition), the original issue persists.
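
For the record, a sketch of the kind of reshaping meant here, using "zfs send/receive" instead of the "cp" that was actually used (dataset names follow this report, assuming the target dataset does not exist yet; this is not a tested recipe, and the old pool should only be destroyed once the new one is confirmed to boot):

  # zfs snapshot -r root@migrate
  # zfs send root@migrate | zfs receive -u remake/newroot/base
  # zpool set bootfs=remake/newroot/base remake
  # zfs set mountpoint=/ remake/newroot/base
  # zpool destroy root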
Comment 4 Graham Perrin 2022-12-05 02:12:45 UTC
(In reply to parv from comment #0)

> … no separate ZFS datasets …

A comparable observation in bug 267843 comment 1, step 5. 

Whether there's a common cause, I cannot tell. kevans@, maybe?

Triage: increased severity, given the observation.