Bug 242839

Summary: zfs spa_condense_indirect_start_sync panic
Product: Base System
Component: kern
Version: 12.0-STABLE
Hardware: amd64
OS: Any
Status: Closed
Resolution: Overcome By Events
Severity: Affects Only Me
Priority: ---
Keywords: crash
Reporter: Jeremy Faulkner <gldisater>
Assignee: freebsd-fs (Nobody) <fs>
CC: allanjude, sigsys

Description Jeremy Faulkner 2019-12-23 18:35:43 UTC
Running zfs destroy on a dataset in a pool that had previously had drives removed caused the system to panic. The system is now unable to boot while that pool is present; in single-user mode it panics if the pool is touched in any way by zpool/zfs commands.
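
A minimal sketch of the sequence that appears to set this up, based on the description above (device and dataset names are placeholders; "mercury" is the affected pool named below):

# Device removal leaves behind an indirect vdev whose remap table is
# condensed during a later txg sync, which is where the panic fires.
zpool create mercury da0 da1   # two single-disk top-level vdevs
zfs create mercury/data        # ...then write data so blocks land on da1...
zpool remove mercury da1       # da1 becomes an indirect vdev
zfs destroy mercury/data       # freeing remapped blocks queues the condense in spa_sync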

constans% uname -a
FreeBSD constans 12.1-STABLE FreeBSD 12.1-STABLE #34 r355756M: Sat Dec 14 17:25:31 EST 2019     root@constans:/usr/obj/usr/src/amd64.amd64/sys/GENERIC  amd64

screenshot:
https://drive.google.com/open?id=1JrNjoP_SuiK1KOA9b9fMy9Yx0D1YWKhE

tar zcvf core-dump-spa-condense-indirect-start-sync.tar.gz /boot/kernel /var/crash /usr/lib/debug/boot/kernel
https://drive.google.com/open?id=1qfZin11byCG7p0W50adb5FH12BaY6Huf

<118># mount
<118>zroot/ROOT/12 on / (zfs, local, noatime, read-only, nfsv4acls)
<118>devfs on /dev (devfs, local, multilabel)
<118># zpool status mercury


Fatal trap 12: page fault while in kernel mode
cpuid = 7; apic id = 15
fault virtual address   = 0x40
fault code              = supervisor write data, page not present
instruction pointer     = 0x20:0xffffffff825b99ee
stack pointer           = 0x28:0xfffffe00e380d980
frame pointer           = 0x28:0xfffffe00e380d9a0
code segment            = base rx0, limit 0xfffff, type 0x1b
                        = DPL 0, pres 1, long 1, def32 0, gran 1
processor eflags        = interrupt enabled, resume, IOPL = 0
current process         = 26 (txg_thread_enter)
trap number             = 12
panic: page fault
cpuid = 7
time = 1577119472
KDB: stack backtrace:
#0 0xffffffff80c15ab7 at kdb_backtrace+0x67
#1 0xffffffff80bc932d at vpanic+0x19d
#2 0xffffffff80bc9183 at panic+0x43
#3 0xffffffff810a083c at trap_fatal+0x39c
#4 0xffffffff810a088f at trap_pfault+0x4f
#5 0xffffffff8109fec1 at trap+0x2a1
#6 0xffffffff810792fc at calltrap+0x8
#7 0xffffffff8258296a at spa_condense_indirect_start_sync+0x1fa
#8 0xffffffff82569307 at spa_sync+0x5b7
#9 0xffffffff82576ed8 at txg_sync_thread+0x238
#10 0xffffffff80b89cb2 at fork_exit+0x82
#11 0xffffffff8107a34e at fork_trampoline+0xe
Uptime: 2m28s
Dumping 2660 out of 73677 MB:..1%..11%..21%..31%..41%..51%..61%..71%..81%..91%

__curthread () at /usr/src/sys/amd64/include/pcpu_aux.h:55
55              __asm("movq %%gs:%P1,%0" : "=r" (td) : "n" (offsetof(struct pcpu,
(kgdb)
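
For anyone opening the attached dump, a minimal sketch assuming the paths from the tar command above (the vmcore number may differ):

kgdb /boot/kernel/kernel /var/crash/vmcore.0   # kgdb finds debug symbols under /usr/lib/debug
(kgdb) bt                                      # shows the spa_condense_indirect_start_sync frames
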
Comment 1 Jeremy Faulkner 2019-12-29 17:23:57 UTC
I imaged the drives so I could test them in virtual machines without the host touching them and panicking. A 13.0-CURRENT VM was able to import the pool without issue.
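
A sketch of that imaging step, with placeholder device and image paths (the actual disk devices are not named in the report):

dd if=/dev/da1 of=/vm/mercury-da1.img bs=1m   # one raw image per pool member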

FreeBSD freebsd-current 13.0-CURRENT FreeBSD 13.0-CURRENT #0 r356165: Sun Dec 29 10:18:37 EST 2019     root@freebsd-current:/usr/obj/usr/src/amd64.amd64/sys/GENERIC  amd64
Comment 2 Allan Jude 2021-06-10 17:29:14 UTC
Should this be closed now?