Bug 174310

Summary: [zfs] root point mounting broken on CURRENT with multiple pools
Product: Base System          Reporter: Enji Cooper <ngie>
Component: kern               Assignee: freebsd-fs (Nobody) <fs>
Status: Closed FEEDBACK TIMEOUT
Severity: Affects Only Me     CC: delphij
Priority: Normal
Version: Unspecified
Hardware: Any
OS: Any

Description Enji Cooper freebsd_committer freebsd_triage 2012-12-10 01:10:00 UTC
I have several zpools in a machine at work. After upgrading from 9.1-STABLE
to 10-CURRENT, I can no longer import the pools at boot (I drop to the
mountroot prompt). I am running off a git checkout that is a week or so old,
but the branch I built the kernel from has no modifications to sys/boot or
sys/cddl/... in my repository.

I tried reverting several commits made by avg but I was unable to get my
system to boot.

The pool does not show up at the mountroot prompt. If I later boot from one
of gjb's livecds, importing the pool works as long as the kernel has not
imported it first (otherwise zpool status incorrectly reports the pool as
faulted). More details are available here:

http://permalink.gmane.org/gmane.os.freebsd.current/146313

Will provide more details later (once I get back to work because my system
is currently unreachable), but this PR is being filed to track the problem.

How-To-Repeat: - Acquire a machine with an mps(4)-based HBA.
- Hook up the spinning disk so it's detected as ada0, the mirrored SSDs
  as ada1 and ada2 respectively and the L2ARC SSD as ada3.
- Create a pool called root with a spinning disk and an SSD as an L2ARC.
- Create a pool called scratch with 2 SSDs mirrored with one another.
- Install 9.1-RC2 on root.
- Set the bootfs to root.
- Boot the system a few times to make sure it's sane.
- Upgrade to CURRENT as of December 5th.
- Boot from gjb's livecd, run 'service hostid onestart', import the root
  pool at /mnt, and run 'zpool upgrade'.
- Try booting again.
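The steps above might be sketched roughly as the commands below. The pool
names, device names, and the livecd steps come from the report; the partition
suffixes (ada0p3, ada1p1, ada2p1) are hypothetical, since the report does not
give them. These commands need root and the described disks, so treat this as
a sketch of the procedure, not a script to paste:

```shell
# Pool "root": the spinning disk plus the L2ARC SSD as cache
# (partition names are hypothetical; the report only gives ada0/ada3).
zpool create root ada0p3 cache ada3

# Pool "scratch": the two SSDs mirrored with one another.
zpool create scratch mirror ada1p1 ada2p1

# Set the bootfs to root, as in the report.
zpool set bootfs=root root

# ...install 9.1-RC2 on "root", boot a few times, upgrade to
# CURRENT as of December 5th, then from gjb's livecd:
service hostid onestart
zpool import -R /mnt root
zpool upgrade root
# Reboot and observe whether the mountroot prompt appears.
```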
Comment 1 Mark Linimon freebsd_committer freebsd_triage 2012-12-10 01:13:55 UTC
Responsible Changed
From-To: freebsd-bugs->freebsd-fs

Over to maintainer(s).
Comment 2 Enji Cooper freebsd_committer freebsd_triage 2014-08-12 10:49:04 UTC
*** Bug 192183 has been marked as a duplicate of this bug. ***
Comment 3 Enji Cooper freebsd_committer freebsd_triage 2014-08-12 10:56:36 UTC
This issue is controller independent: I ran into the same problem with ada(4) in bug 192183. As I noted in the PR and thread, I think the issue lies between geom(4) and zfs(4), and the more information I gather, the more I suspect geom(4), since I had also created a non-standard-sized freebsd-boot partition on my work machine.

Redoing the GPT table and the zpool fixed my system so it boots once again. I've been using ZFS since 8-CURRENT, IIRC.

My old and new GPT tables are as follows (index, type, start LBA, size in sectors):

Old:

GPT 128
1   freebsd-boot         34        128
2   freebsd-swap        162   50331648
3    freebsd-zfs   50331810 1903193325

New:

GPT 128
1   freebsd-boot         40         88
2   freebsd-swap        128   50331648
3    freebsd-zfs   50331776 1903193359
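One visible difference between the two tables is alignment: with 512-byte
sectors, a start LBA is 4 KiB-aligned only if it is divisible by 8, so the
old starts (34, 162, 50331810) are all misaligned while the new ones (40,
128, 50331776) are aligned. This observation is mine, not something stated
in the PR; a quick check of the start LBAs from both tables:

```shell
# 4 KiB alignment check for the start LBAs above (512-byte sectors).
for lba in 34 162 50331810 40 128 50331776; do
    if [ $((lba % 8)) -eq 0 ]; then
        echo "LBA $lba: 4K-aligned"
    else
        echo "LBA $lba: misaligned"
    fi
done
```

This prints "misaligned" for the three old start LBAs and "4K-aligned" for the three new ones.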

I'll work toward reproducing the issue with geom first.
Comment 4 Xin LI freebsd_committer freebsd_triage 2014-12-17 00:06:51 UTC
Since nobody seems to have been working on this, could you please add a few assertions or printfs in /sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vfsops.c, in zfs_mount(), to see what is returning these ENXIOs?
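[Editorial aside, not part of the PR: if the affected kernel has DTrace
available, the return value of zfs_mount() can also be observed without
recompiling, using the fbt provider. This assumes fbt can attach to the zfs
module and requires root; on FreeBSD, ENXIO is errno 6.]

```shell
# Trace every return from zfs_mount(); on return probes, arg1 is the
# function's return value, so ENXIO shows up as 6 (requires root and
# a DTrace-enabled kernel).
dtrace -n 'fbt:zfs:zfs_mount:return { printf("zfs_mount -> %d", arg1); }'
```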
Comment 5 Eitan Adler freebsd_committer freebsd_triage 2018-05-28 19:50:37 UTC
batch change:

For bugs that match the following:
- Status is "In Progress"
AND
- Untouched since 2018-01-01
AND
- Affects Base System OR Documentation

DO:

Reset to open status.


Note:
I did a quick pass, but if you are getting this email it might be worthwhile to double-check whether this bug ought to be closed.