Bug 232738 - ZFS panics on device removal (solaris assert: cvd->vdev_ashift == spa->spa_max_ashift)
Status: Closed Overcome By Events
Alias: None
Product: Base System
Classification: Unclassified
Component: kern
Version: CURRENT
Hardware: Any Any
Importance: --- Affects Only Me
Assignee: freebsd-fs (Nobody)
URL:
Keywords: crash, needs-qa
Depends on:
Blocks:
 
Reported: 2018-10-27 04:10 UTC by Jeremy Faulkner
Modified: 2021-06-22 11:46 UTC
4 users

See Also:


Attachments

Description Jeremy Faulkner 2018-10-27 04:10:16 UTC
Bhyve VM using sectorsize=512/4096 for the virtio devices 1, 2, and 3
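A hypothetical bhyve invocation illustrating this setup (slot numbers, image paths, and the VM name are placeholders, not taken from the report; the `sectorsize=logical/physical` block-device option is described in bhyve(8)):

```shell
# Sketch only: expose three data disks with 512-byte logical /
# 4096-byte physical sectors via virtio-blk, matching the
# "sectorsize=512/4096 for the virtio devices 1, 2, and 3" setup.
bhyve -c 2 -m 2G -H -A \
  -s 0,hostbridge \
  -s 3,virtio-blk,/vm/devremoval/disk0.img \
  -s 4,virtio-blk,/vm/devremoval/disk1.img,sectorsize=512/4096 \
  -s 5,virtio-blk,/vm/devremoval/disk2.img,sectorsize=512/4096 \
  -s 6,virtio-blk,/vm/devremoval/disk3.img,sectorsize=512/4096 \
  -s 31,lpc -l com1,stdio \
  devremoval
```

The mixed sector sizes matter here: disks reporting 4K physical sectors get ashift=12 vdevs, which is what the assertion below compares against spa_max_ashift.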

FreeBSD 13.0-CURRENT r339718 GENERIC

Welcome to FreeBSD!

Release Notes, Errata: https://www.FreeBSD.org/releases/
Security Advisories:   https://www.FreeBSD.org/security/
FreeBSD Handbook:      https://www.FreeBSD.org/handbook/
FreeBSD FAQ:           https://www.FreeBSD.org/faq/
Questions List: https://lists.FreeBSD.org/mailman/listinfo/freebsd-questions/
FreeBSD Forums:        https://forums.FreeBSD.org/

Documents installed with the system are in the /usr/local/share/doc/freebsd/
directory, or can be installed later with:  pkg install en-freebsd-doc
For other languages, replace "en" with a language code like de or fr.

Show the version of FreeBSD installed:  freebsd-version ; uname -a
Please include that output and any error messages when posting questions.
Introduction to manual pages:  man man
FreeBSD directory layout:      man hier

Edit /etc/motd to change this login announcement.
You have new mail.
root@devremoval:~ # zpool status
  pool: zroot
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zroot       ONLINE       0     0     0
          vtbd0p3   ONLINE       0     0     0

errors: No known data errors
root@devremoval:~ # zpool create devrem7 vtbd1 vtbd2
root@devremoval:~ # zpool attach devrem7 vtbd2 vtbd3
root@devremoval:~ # dd if=/dev/random of=/devrem7/file bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 45.626743 secs (45963219 bytes/sec)
root@devremoval:~ # zpool status devrem7
  pool: devrem7
 state: ONLINE
  scan: resilvered 188K in 0 days 00:00:00 with 0 errors on Sat Oct 27 00:02:53 2018
config:

        NAME        STATE     READ WRITE CKSUM
        devrem7     ONLINE       0     0     0
          vtbd1     ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            vtbd2   ONLINE       0     0     0
            vtbd3   ONLINE       0     0     0

errors: No known data errors
root@devremoval:~ # zpool detach devrem7 vtbd3
root@devremoval:~ # zpool status devrem7
  pool: devrem7
 state: ONLINE
  scan: resilvered 188K in 0 days 00:00:00 with 0 errors on Sat Oct 27 00:02:53 2018
config:

        NAME        STATE     READ WRITE CKSUM
        devrem7     ONLINE       0     0     0
          vtbd1     ONLINE       0     0     0
          vtbd2     ONLINE       0     0     0

errors: No known data errors
root@devremoval:~ # zpool remove devrem7 vtbd2
panic: solaris assert: cvd->vdev_ashift == spa->spa_max_ashift (0xc == 0x9), file: /usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/vdev_removal.c, line: 1919
cpuid = 0
time = 1540613111
KDB: stack backtrace:
db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame 0xfffffe004a528570
vpanic() at vpanic+0x1a3/frame 0xfffffe004a5285d0
panic() at panic+0x43/frame 0xfffffe004a528630
assfail3() at assfail3+0x2c/frame 0xfffffe004a528650
spa_vdev_remove_top_check() at spa_vdev_remove_top_check+0x1e1/frame 0xfffffe004a528690
spa_vdev_remove() at spa_vdev_remove+0x2b5/frame 0xfffffe004a528720
zfs_ioc_vdev_remove() at zfs_ioc_vdev_remove+0x47/frame 0xfffffe004a528750
zfsdev_ioctl() at zfsdev_ioctl+0x78b/frame 0xfffffe004a5287f0
devfs_ioctl() at devfs_ioctl+0xb2/frame 0xfffffe004a528840
VOP_IOCTL_APV() at VOP_IOCTL_APV+0x73/frame 0xfffffe004a528860
vn_ioctl() at vn_ioctl+0x124/frame 0xfffffe004a528970
devfs_ioctl_f() at devfs_ioctl_f+0x1f/frame 0xfffffe004a528990
kern_ioctl() at kern_ioctl+0x2ba/frame 0xfffffe004a5289f0
sys_ioctl() at sys_ioctl+0x15e/frame 0xfffffe004a528ac0
amd64_syscall() at amd64_syscall+0x278/frame 0xfffffe004a528bf0
fast_syscall_common() at fast_syscall_common+0x101/frame 0xfffffe004a528bf0
--- syscall (54, FreeBSD ELF64, sys_ioctl), rip = 0x8004aba1a, rsp = 0x7fffffffc0f8, rbp = 0x7fffffffc170 ---
KDB: enter: panic
[ thread pid 716 tid 100389 ]
Stopped at      kdb_enter+0x3b: movq    $0,kdb_why
db>
Comment 1 Allan Jude freebsd_committer 2018-11-24 20:04:15 UTC
In my testing, I only saw this when trying to remove a device that was recently added. If you export and import the pool before attempting the removal, the issue does not seem to occur.
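A sketch of that workaround, using the pool and device names from the reproduction steps above (untested here; export/import forces the pool configuration, including per-vdev ashift state, to be reloaded before the removal):

```shell
# Export and re-import the pool, then retry the top-level vdev removal.
zpool export devrem7
zpool import devrem7
zpool remove devrem7 vtbd2
```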
Comment 2 Mark Linimon freebsd_committer freebsd_triage 2021-06-22 01:16:01 UTC
Triage: reassign.

To submitter: is this still a known problem?  Many commits have been made to ZFS since this was submitted.
Comment 3 Jeremy Faulkner 2021-06-22 11:46:37 UTC
No longer able to reproduce on:

FreeBSD freebsd-test 14.0-CURRENT FreeBSD 14.0-CURRENT #0 main-n247405-8fa5c577de3: Thu Jun 17 08:12:04 UTC 2021     root@releng1.nyi.freebsd.org:/usr/obj/usr/src/amd64.amd64/sys/GENERIC  amd64