Bug 280500 - raidz expansion causes panic
Summary: raidz expansion causes panic
Status: Open
Alias: None
Product: Base System
Classification: Unclassified
Component: kern
Version: 15.0-CURRENT
Hardware: Any Any
Importance: --- Affects Only Me
Assignee: freebsd-fs (Nobody)
URL:
Keywords: crash
Depends on:
Blocks:
 
Reported: 2024-07-29 19:35 UTC by Jeremy Faulkner
Modified: 2025-01-10 15:11 UTC
CC List: 1 user

See Also:


Description Jeremy Faulkner 2024-07-29 19:35:25 UTC
root@freebsd-current:~ # zpool status
  pool: raid-expand-test
 state: ONLINE
config:

        NAME              STATE     READ WRITE CKSUM
        raid-expand-test  ONLINE       0     0     0
          raidz1-0        ONLINE       0     0     0
            vtbd1         ONLINE       0     0     0
            vtbd2         ONLINE       0     0     0

errors: No known data errors

  pool: zroot
 state: ONLINE
  scan: scrub repaired 0B in 00:07:33 with 0 errors on Fri Jun 17 22:46:48 2022
config:

        NAME        STATE     READ WRITE CKSUM
        zroot       ONLINE       0     0     0
          vtbd0p3   ONLINE       0     0     0

errors: No known data errors
root@freebsd-current:~ # zpool attach raid-expand-test raidz1-0 vtbd3


panic: VERIFY(vd == vd->vdev_top) failed

cpuid = 2
time = 1722275867
KDB: stack backtrace:
db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame 0xfffffe008ebd8800
vpanic() at vpanic+0x13f/frame 0xfffffe008ebd8930
spl_panic() at spl_panic+0x3a/frame 0xfffffe008ebd8990
zio_vdev_io_start() at zio_vdev_io_start+0x637/frame 0xfffffe008ebd89e0
zio_nowait() at zio_nowait+0x10c/frame 0xfffffe008ebd8a20
vdev_check_boot_reserve() at vdev_check_boot_reserve+0x7a/frame 0xfffffe008ebd8a50
spa_vdev_attach() at spa_vdev_attach+0x700/frame 0xfffffe008ebd8ad0
zfs_ioc_vdev_attach() at zfs_ioc_vdev_attach+0x75/frame 0xfffffe008ebd8b10
zfsdev_ioctl_common() at zfsdev_ioctl_common+0x4f4/frame 0xfffffe008ebd8bd0
zfsdev_ioctl() at zfsdev_ioctl+0xfb/frame 0xfffffe008ebd8c00
devfs_ioctl() at devfs_ioctl+0xd1/frame 0xfffffe008ebd8c50
vn_ioctl() at vn_ioctl+0xbc/frame 0xfffffe008ebd8cc0
devfs_ioctl_f() at devfs_ioctl_f+0x1e/frame 0xfffffe008ebd8ce0
kern_ioctl() at kern_ioctl+0x286/frame 0xfffffe008ebd8d40
sys_ioctl() at sys_ioctl+0x12d/frame 0xfffffe008ebd8e00
amd64_syscall() at amd64_syscall+0x158/frame 0xfffffe008ebd8f30
fast_syscall_common() at fast_syscall_common+0xf8/frame 0xfffffe008ebd8f30
--- syscall (54, FreeBSD ELF64, ioctl), rip = 0x2695277508fa, rsp = 0x26951ea240f8, rbp = 0x26951ea24160 ---
KDB: enter: panic
[ thread pid 1038 tid 100448 ]
Stopped at      kdb_enter+0x33: movq    $0,0x1058162(%rip)
db>
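
For anyone who reproduces this and lands at the same db> prompt, the standard FreeBSD DDB commands below (not taken from this report) capture additional state before rebooting:

bt              # print the stack backtrace
show registers  # CPU register state at the panic
dump            # write a kernel crash dump to the dump device
reset           # reboot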
Comment 1 Mark Johnston 2024-08-09 18:21:10 UTC
How was the root pool created?  Are you running this in a prebuilt root-on-ZFS VM image by any chance?
Comment 2 Jeremy Faulkner 2024-08-12 11:01:02 UTC
zpool create raid-expand-test raidz vtbd1 vtbd2


It also panicked a system with a five-drive raidz on real hardware. It is not a prebuilt VM image.
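
Putting the commands from this report together, a minimal reproduction sketch (device names vtbd1-vtbd3 are the VM disks from the transcript above; substitute your own test disks):

# create a two-disk raidz1 pool, then attach a third disk to the
# raidz vdev -- on an affected kernel the attach triggers the panic
zpool create raid-expand-test raidz vtbd1 vtbd2
zpool attach raid-expand-test raidz1-0 vtbd3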
Comment 3 Mark Johnston 2025-01-10 15:11:53 UTC
Should be fixed by https://github.com/openzfs/zfs/pull/16942
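
For reference when testing a build that includes that fix, a sketch of re-running the attach (assumes the OpenZFS raidz_expansion feature flag; the commands are standard zpool(8), but the output details are not from this report):

# confirm the pool supports raidz expansion, then retry the attach
zpool get feature@raidz_expansion raid-expand-test
zpool attach raid-expand-test raidz1-0 vtbd3
zpool status raid-expand-test    # expansion progress is reported while it runs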