Bug 203906 - ZFS lockup, spa_namespace_lock
Summary: ZFS lockup, spa_namespace_lock
Status: Closed Overcome By Events
Alias: None
Product: Base System
Classification: Unclassified
Component: kern
Version: 10.1-RELEASE
Hardware: amd64 Any
Importance: --- Affects Some People
Assignee: freebsd-fs (Nobody)
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2015-10-20 23:15 UTC by Adrian Palmer
Modified: 2023-04-22 09:06 UTC
CC List: 6 users

See Also:


Attachments
procstat -kk -a output (61.43 KB, application/gzip), 2018-04-06 08:36 UTC, emz
procstat output (4.57 KB, application/x-xz), 2018-05-20 17:54 UTC, Alexandre Kovalenko
Lock owner (970.38 KB, image/jpeg), 2018-05-20 18:13 UTC, Alexandre Kovalenko
Backtrace (part 1) (810.89 KB, image/jpeg), 2018-05-20 18:15 UTC, Alexandre Kovalenko
Backtrace (part 2) (794.22 KB, image/jpeg), 2018-05-20 18:15 UTC, Alexandre Kovalenko

Description Adrian Palmer 2015-10-20 23:15:55 UTC
zpool/zfs utilities hang when referencing a now non-existent volume that has not cleaned itself up in the kernel. This makes zfs administration useless; the failed volume cannot be recovered until reboot.

$ uname -a
FreeBSD Vortex 10.1-RELEASE FreeBSD 10.1-RELEASE #0 r274401: Tue Nov 11 21:02:49 UTC 2014     root@releng1.nyi.freebsd.org:/usr/obj/usr/src/sys/GENERIC  amd64

$ bzcat messages.*.bz2 | grep -rn ATA  
... //trimmed
1065:Sep 25 21:12:09 Vortex kernel: ada4: <ST31000333AS CC3H> ATA-8 SATA 2.x device
1067:Sep 25 21:12:09 Vortex kernel: ada4: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
1073:Sep 25 21:12:30 Vortex kernel: (aprobe0:siisch0:0:0:0): ATA_IDENTIFY. ACB: ec 00 00 00 00 40 00 00 00 00 00 00
1086:Sep 25 21:13:00 Vortex kernel: (aprobe0:siisch0:0:0:0): ATA_IDENTIFY. ACB: ec 00 00 00 00 40 00 00 00 00 00 00
1091:Sep 25 21:13:30 Vortex kernel: (aprobe0:siisch0:0:0:0): ATA_IDENTIFY. ACB: ec 00 00 00 00 40 00 00 00 00 00 00
1095:Sep 25 21:13:45 Vortex kernel: (aprobe0:siisch0:0:0:0): CAM status: ATA Status Error
1096:Sep 25 21:13:45 Vortex kernel: (aprobe0:siisch0:0:0:0): ATA status: 00 ()
1100:Sep 25 21:14:01 Vortex kernel: (aprobe0:siisch0:0:0:0): CAM status: ATA Status Error
1101:Sep 25 21:14:01 Vortex kernel: (aprobe0:siisch0:0:0:0): ATA status: 00 ()
...


$ ps -ax|grep D+
38986  8- D+        0:00.00 zpool iostat 1 10
42143  0- D+        0:00.01 zpool list
42210  1- D+        0:00.01 zpool iostat 1 10
42242  2- D+        0:00.01 zfs list
51117  4- D+        0:00.01 ls
65547  5- D+        0:00.01 zfs unmount -f zStorage1
45501 11  S+        0:00.01 grep D+
65624  9- D+        0:00.01 zfs list
65496  6- D+        0:00.01 ls zStorage1


$ procstat -kk -a 
  PID    TID COMM             TDNAME           KSTACK                       
 1252 100078 csh              -                mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _sleep+0x27d kern_sigsuspend+0xf4 sys_sigsuspend+0x31 amd64_syscall+0x351 Xfast_syscall+0xfb 
38986 100168 zpool            -                mi_switch+0xe1 sleepq_wait+0x3a _sx_xlock_hard+0x48a _sx_xlock+0x5d spa_all_configs+0x6e zfs_ioc_pool_configs+0x19 zfsdev_ioctl+0x6f0 devfs_ioctl_f+0x114 kern_ioctl+0x255 sys_ioctl+0x13c amd64_syscall+0x351 Xfast_syscall+0xfb 
41360 101772 csh              -                mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _sleep+0x27d kern_sigsuspend+0xf4 sys_sigsuspend+0x31 amd64_syscall+0x351 Xfast_syscall+0xfb 
41661 100159 gdb              -                mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _sleep+0x27d kern_wait6+0x5f4 sys_wait4+0x72 amd64_syscall+0x351 Xfast_syscall+0xfb 
41662 101773 zpool            -                mi_switch+0xe1 sleepq_wait+0x3a _sx_xlock_hard+0x48a _sx_xlock+0x5d spa_all_configs+0x6e zfs_ioc_pool_configs+0x19 zfsdev_ioctl+0x6f0 devfs_ioctl_f+0x114 kern_ioctl+0x255 sys_ioctl+0x13c amd64_syscall+0x351 Xfast_syscall+0xfb 
42143 100174 zpool            -                mi_switch+0xe1 sleepq_wait+0x3a _sx_xlock_hard+0x48a _sx_xlock+0x5d spa_all_configs+0x6e zfs_ioc_pool_configs+0x19 zfsdev_ioctl+0x6f0 devfs_ioctl_f+0x114 kern_ioctl+0x255 sys_ioctl+0x13c amd64_syscall+0x351 Xfast_syscall+0xfb 
42242 101757 zfs              -                mi_switch+0xe1 sleepq_wait+0x3a _sx_xlock_hard+0x48a _sx_xlock+0x5d spa_all_configs+0x6e zfs_ioc_pool_configs+0x19 zfsdev_ioctl+0x6f0 devfs_ioctl_f+0x114 kern_ioctl+0x255 sys_ioctl+0x13c amd64_syscall+0x351 Xfast_syscall+0xfb 
45485 101761 csh              -                mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _sleep+0x27d kern_sigsuspend+0xf4 sys_sigsuspend+0x31 amd64_syscall+0x351 Xfast_syscall+0xfb 
45553 100079 procstat         -                <running>                    
51117 101760 ls               -                mi_switch+0xe1 sleepq_wait+0x3a _cv_wait+0x16d dbuf_read+0x45b dmu_buf_hold+0x3d zap_lockdir+0x46 zap_cursor_retrieve+0x131 zfs_freebsd_readdir+0x3e1 VOP_READDIR_APV+0xa7 kern_getdirentries+0x21c sys_getdirentries+0x28 amd64_syscall+0x351 Xfast_syscall+0xfb 
65149 101762 csh              -                mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a tty_wait+0x1c ttydisc_read+0x2d4 ttydev_read+0x86 devfs_read_f+0xeb dofileread+0x95 kern_readv+0x68 sys_read+0x63 amd64_syscall+0x351 Xfast_syscall+0xfb 
65496 101767 ls               -                mi_switch+0xe1 sleepq_wait+0x3a _cv_wait+0x16d dbuf_read+0x45b dmu_buf_hold+0x3d zap_lockdir+0x46 zap_cursor_retrieve+0x131 zfs_freebsd_readdir+0x3e1 VOP_READDIR_APV+0xa7 kern_getdirentries+0x21c sys_getdirentries+0x28 amd64_syscall+0x351 Xfast_syscall+0xfb 
65624 101755 zfs              -                mi_switch+0xe1 sleepq_wait+0x3a _sx_xlock_hard+0x48a _sx_xlock+0x5d spa_all_configs+0x6e zfs_ioc_pool_configs+0x19 zfsdev_ioctl+0x6f0 devfs_ioctl_f+0x114 kern_ioctl+0x255 sys_ioctl+0x13c amd64_syscall+0x351 Xfast_syscall+0xfb 


$ gdb zpool
GNU gdb 6.1.1 [FreeBSD]
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "amd64-marcel-freebsd"...(no debugging symbols found)...
(gdb) r list
Starting program: /sbin/zpool list
(no debugging symbols found)...(no debugging symbols found)...(no debugging symbols found)...(no debugging symbols found)...(no debugging symbols found)...(no debugging symbols found)...(no debugging symbols found)...(no debugging symbols found)...(no debugging symbols found)...(no debugging symbols found)...(no debugging symbols found)...(no debugging symbols found)...(no debugging symbols found)...[New LWP 101773]
(no debugging symbols found)...(no debugging symbols found)...
load: 0.14  cmd: zpool 41662 [spa_namespace_lock] 9.17r 0.05u 0.03s 0% 3220k
load: 0.11  cmd: zpool 41662 [spa_namespace_lock] 18011.15r 0.05u 0.03s 0% 3220k
load: 0.64  cmd: zpool 41662 [spa_namespace_lock] 82405.76r 0.05u 0.03s 0% 3220k

Any command that references the zpool volume zStorage1 (on ada4, mounted on /vol/zStorage1) enters the D+ state, and ctrl-T (before I exit the session) shows the process stuck on spa_namespace_lock.


The unmount eventually [brutally] succeeded (via a normal umount plus a camcontrol reset and a hot-plug of the SATA drive), but ls still hangs on that directory. The drive responds normally via camcontrol.


Other zpools/filesystems are operational -- I've resisted rebooting because I only have one affected volume (which I would like back).  This is recurring across reboots: my system lasts about 2 weeks before I start seeing ZFS-related errors.

I read https://wiki.freebsd.org/AvgZfsDeadlockDebug, but it was of little help.

Point me in a direction, and I can certainly help research the codebase.
Comment 1 Fabian Keil 2015-10-28 13:46:54 UTC
Can you please attach the complete "procstat -kk -a" output?

I suspect that this is the same issue as:
https://lists.freebsd.org/pipermail/freebsd-current/2014-September/052018.html
Analysis continues at:
https://lists.freebsd.org/pipermail/freebsd-current/2015-October/057982.html

If that's the case and if your applications can handle it,
changing the pool's failmode from "wait" to "continue" may be
worth a try.
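
For example, a minimal sketch using the pool name from this report (adjust to your own pools):

# zpool get failmode zStorage1
# zpool set failmode=continue zStorage1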
Comment 2 Adrian Palmer 2015-10-28 17:34:03 UTC
$ procstat -kk -a 
  PID    TID COMM             TDNAME           KSTACK                       
 1252 100078 csh              -                mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _sleep+0x27d kern_sigsuspend+0xf4 sys_sigsuspend+0x31 amd64_syscall+0x351 Xfast_syscall+0xfb 
38986 100168 zpool            -                mi_switch+0xe1 sleepq_wait+0x3a _sx_xlock_hard+0x48a _sx_xlock+0x5d spa_all_configs+0x6e zfs_ioc_pool_configs+0x19 zfsdev_ioctl+0x6f0 devfs_ioctl_f+0x114 kern_ioctl+0x255 sys_ioctl+0x13c amd64_syscall+0x351 Xfast_syscall+0xfb 
41360 101772 csh              -                mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _sleep+0x27d kern_sigsuspend+0xf4 sys_sigsuspend+0x31 amd64_syscall+0x351 Xfast_syscall+0xfb 
41661 100159 gdb              -                mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _sleep+0x27d kern_wait6+0x5f4 sys_wait4+0x72 amd64_syscall+0x351 Xfast_syscall+0xfb 
41662 101773 zpool            -                mi_switch+0xe1 sleepq_wait+0x3a _sx_xlock_hard+0x48a _sx_xlock+0x5d spa_all_configs+0x6e zfs_ioc_pool_configs+0x19 zfsdev_ioctl+0x6f0 devfs_ioctl_f+0x114 kern_ioctl+0x255 sys_ioctl+0x13c amd64_syscall+0x351 Xfast_syscall+0xfb 
42143 100174 zpool            -                mi_switch+0xe1 sleepq_wait+0x3a _sx_xlock_hard+0x48a _sx_xlock+0x5d spa_all_configs+0x6e zfs_ioc_pool_configs+0x19 zfsdev_ioctl+0x6f0 devfs_ioctl_f+0x114 kern_ioctl+0x255 sys_ioctl+0x13c amd64_syscall+0x351 Xfast_syscall+0xfb 
42242 101757 zfs              -                mi_switch+0xe1 sleepq_wait+0x3a _sx_xlock_hard+0x48a _sx_xlock+0x5d spa_all_configs+0x6e zfs_ioc_pool_configs+0x19 zfsdev_ioctl+0x6f0 devfs_ioctl_f+0x114 kern_ioctl+0x255 sys_ioctl+0x13c amd64_syscall+0x351 Xfast_syscall+0xfb 
45485 101761 csh              -                mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _sleep+0x27d kern_sigsuspend+0xf4 sys_sigsuspend+0x31 amd64_syscall+0x351 Xfast_syscall+0xfb 
45553 100079 procstat         -                <running>                    
51117 101760 ls               -                mi_switch+0xe1 sleepq_wait+0x3a _cv_wait+0x16d dbuf_read+0x45b dmu_buf_hold+0x3d zap_lockdir+0x46 zap_cursor_retrieve+0x131 zfs_freebsd_readdir+0x3e1 VOP_READDIR_APV+0xa7 kern_getdirentries+0x21c sys_getdirentries+0x28 amd64_syscall+0x351 Xfast_syscall+0xfb 
65149 101762 csh              -                mi_switch+0xe1 sleepq_catch_signals+0xab sleepq_wait_sig+0xf _cv_wait_sig+0x16a tty_wait+0x1c ttydisc_read+0x2d4 ttydev_read+0x86 devfs_read_f+0xeb dofileread+0x95 kern_readv+0x68 sys_read+0x63 amd64_syscall+0x351 Xfast_syscall+0xfb 
65496 101767 ls               -                mi_switch+0xe1 sleepq_wait+0x3a _cv_wait+0x16d dbuf_read+0x45b dmu_buf_hold+0x3d zap_lockdir+0x46 zap_cursor_retrieve+0x131 zfs_freebsd_readdir+0x3e1 VOP_READDIR_APV+0xa7 kern_getdirentries+0x21c sys_getdirentries+0x28 amd64_syscall+0x351 Xfast_syscall+0xfb 
65624 101755 zfs              -                mi_switch+0xe1 sleepq_wait+0x3a _sx_xlock_hard+0x48a _sx_xlock+0x5d spa_all_configs+0x6e zfs_ioc_pool_configs+0x19 zfsdev_ioctl+0x6f0 devfs_ioctl_f+0x114 kern_ioctl+0x255 sys_ioctl+0x13c amd64_syscall+0x351 Xfast_syscall+0xfb 


There isn't much running on the machine at this point.  I'm waiting for ZFS to be a bit more stable.

I'll reconfigure the zpool failmode when I reboot.  It may take a while for the problem to recur.
Comment 3 emz 2018-04-06 08:35:26 UTC
I'm getting this on a recent r332062, after upgrading from 11.1-RELEASE-p6.
zfs [whatever] commands freeze completely:

# zfs list -t all
load: 9.48  cmd: zfs 12668 [spa_namespace_lock] 126.35r 0.00u 0.09s 0% 3912k
load: 7.44  cmd: zfs 12668 [spa_namespace_lock] 438.34r 0.00u 0.10s 0% 3236k

gstat also freezes (after a minute it starts showing data, but it continues to freeze intermittently while doing so).

Since the procstat output is enormous, I'm attaching it as a file.

All the zfs pools are healthy, or at least they were before the upgrade; right now I cannot get any output from zpool status.

This behavior is reproducible: I once observed it on a less recent -STABLE, but due to lack of time I was forced to downgrade to 11.1-RELEASE.

ps:

[root@san1:~]# ps ax | grep D
  PID TT  STAT       TIME COMMAND
    0  -  DLs   462:29,65 [kernel]
    2  -  DL      0:00,00 [crypto]
    3  -  DL      0:00,00 [crypto returns]
    4  -  DL     30:55,60 [cam]
    5  -  DL      0:00,00 [soaiod1]
    6  -  DL      0:00,00 [soaiod2]
    7  -  DL      0:00,00 [soaiod3]
    8  -  DL      0:00,00 [soaiod4]
    9  -  DL     38:57,83 [zfskern]
   10  -  DL      0:00,00 [audit]
   13  -  DL      2:42,11 [geom]
   14  -  DL      0:02,46 [usb]
   15  -  DL      0:00,00 [sctp_iterator]
   16  -  DL      0:01,62 [pf purge]
   17  -  DL      0:05,83 [rand_harvestq]
   18  -  DL      0:00,01 [enc_daemon0]
   19  -  DL      0:00,01 [enc_daemon1]
   20  -  DL      0:00,11 [enc_daemon2]
   21  -  DL      0:00,60 [g_mirror swap]
   22  -  DL      1:04,21 [pagedaemon]
   23  -  DL      0:00,20 [vmdaemon]
   24  -  DNL     0:00,00 [pagezero]
   25  -  DL      0:00,04 [bufdaemon]
   26  -  DL      0:00,04 [bufspacedaemon]
   27  -  DL      0:00,13 [syncer]
   28  -  DL      0:00,07 [vnlru]
  740  -  DL      2:19,16 [ctl]
  898  -  DL      0:00,00 [ng_queue]
12174  -  D       0:00,75 /sbin/zfs list -t all
12236  -  D       0:00,78 /sbin/zfs list -t all
12240  -  D       0:00,77 /sbin/zfs list -t all
12338  -  D       0:00,75 /sbin/zfs list -t all
12340  -  D       0:00,75 /sbin/zfs list -t all
12364  -  DL      0:00,00 [ftcleanup]
12368  -  D       0:00,76 /sbin/zfs list -t all
12404  -  D       0:00,75 /sbin/zfs list -t all
12407  -  D       0:00,75 /sbin/zfs list -t all
12440  -  D       0:00,77 /sbin/zfs list -t all
12518  -  D       0:00,02 /usr/sbin/ctladm remove -b block -l 827
12520  -  D       0:00,09 /sbin/zfs list -t all
12554  -  D       0:00,09 /sbin/zfs list -t all
12639  -  D       0:00,08 /sbin/zfs list -t all
12651  -  D       0:00,09 /sbin/zfs list -t all
12716  -  D       0:00,09 zfs list -t snapshot
12721  -  D       0:00,09 /sbin/zfs list -t all
12741  -  D       0:00,00 /sbin/zfs destroy data/reference@ver13_2137
12743  -  D       0:00,00 /usr/sbin/ctladm remove -b block -l 809
12791  -  D       0:00,00 /sbin/zfs list -t all
12558  0  D+      0:00,31 gstat -do
12764  5  D+      0:00,00 gstat -d
12668  2  D+      0:00,09 zfs list -t all
 4898  3  D+      0:28,82 gstat
12814  6  S+      0:00,00 grep D
12737  4  D+      0:00,03 zfs destroy data/kvm/hv34/worker390
12747  8  D+      0:00,00 zfs get origin data/kvm/desktop/desktop-master@desktop23
Comment 4 emz 2018-04-06 08:36:50 UTC
Created attachment 192275 [details]
procstat -kk -a output

procstat -kk -a output
Comment 5 emz 2018-04-06 08:37:29 UTC
Finally I got my zpool status output, showing that the pools are healthy:

[root@san1:~]# zpool status
  pool: data
 state: ONLINE
  scan: scrub repaired 0 in 28h24m with 0 errors on Thu Feb 15 13:26:36 2018
config:

        NAME        STATE     READ WRITE CKSUM
        data        ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            da2     ONLINE       0     0     0
            da3     ONLINE       0     0     0
            da4     ONLINE       0     0     0
            da5     ONLINE       0     0     0
            da6     ONLINE       0     0     0
          raidz1-1  ONLINE       0     0     0
            da7     ONLINE       0     0     0
            da8     ONLINE       0     0     0
            da9     ONLINE       0     0     0
            da10    ONLINE       0     0     0
            da11    ONLINE       0     0     0
          raidz1-2  ONLINE       0     0     0
            da12    ONLINE       0     0     0
            da13    ONLINE       0     0     0
            da14    ONLINE       0     0     0
            da15    ONLINE       0     0     0
            da16    ONLINE       0     0     0

errors: No known data errors

  pool: userdata
 state: ONLINE
  scan: scrub repaired 0 in 1h7m with 0 errors on Wed Feb 14 10:06:19 2018
config:

        NAME               STATE     READ WRITE CKSUM
        userdata           ONLINE       0     0     0
          mirror-0         ONLINE       0     0     0
            gpt/userdata0  ONLINE       0     0     0
            gpt/userdata1  ONLINE       0     0     0

errors: No known data errors

  pool: zroot
 state: ONLINE
  scan: scrub repaired 0 in 0h1m with 0 errors on Mon Aug  7 18:26:15 2017
config:

        NAME            STATE     READ WRITE CKSUM
        zroot           ONLINE       0     0     0
          mirror-0      ONLINE       0     0     0
            gpt/zroot0  ONLINE       0     0     0
            gpt/zroot1  ONLINE       0     0     0

errors: No known data errors
Comment 6 emz 2018-04-06 08:39:33 UTC
Follow-up: the freeze is now gone, and I don't know what triggered it. During the freeze the system was fully responsive and smooth apart from the gstat/zfs/zpool commands; after the freeze ended, those commands became fully responsive again. I guess this makes the procstat output vital.
Comment 7 Andriy Gapon 2018-04-06 11:10:26 UTC
(In reply to emz from comment #6)
A rather wild guess, but I suspect that it may have to do with the hardware.
I see that ZFS was just blocked waiting for I/O.  I do not see intrinsic problems with it.
Maybe a disk took a nap or a command like TRIM blocked its operation for a long time or something of that nature.
A suspicious thread:
    0 101671 kernel              zio_free_issue_3_0  _mtx_lock_spin_cookie+0xc1 callout_lock+0xcb callout_reset_sbt_on+0x79 mprsas_action+0xf2d xpt_run_devq+0x48a xpt_action_default+0x8fc dastart+0x2f3 xpt_run_allocq+0x173 dastrategy+0x8d g_disk_start+0x34f g_io_request+0x2a7 zio_vdev_io_start+0x2ae zio_execute+0xac zio_nowait+0xcb vdev_raidz_io_start+0x6cc zio_vdev_io_start+0x2ae zio_execute+0xac zio_nowait+0xcb

Try examining your system logs, and maybe the RAID event log too.
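For example, a quick first pass (a sketch; the mprutil call assumes an mpr(4) HBA, which the mprsas_action frame above suggests):

# grep -iE 'cam status|timeout|reset' /var/log/messages
# mprutil show adapter

The grep looks for CAM-level errors and bus resets in the system log; "mprutil show adapter" reports the controller and firmware details for the HBA.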
Comment 8 emz 2018-04-06 11:50:16 UTC
I've now taken a thorough look at the logs, and I really don't see anything suspicious.

Furthermore, this happened right after I upgraded to -STABLE (to answer the obvious "Why did you upgrade anyway?": because I needed the trasz@ iSCSI patch that I didn't have, and I also wanted to check whether the overall situation would improve). Prior to the upgrade the system didn't have this issue. I also saw this probable regression (I know developers highly dislike this word, but still) a couple of months ago, but back then I didn't have time to investigate, so I just rolled back to RELEASE and the problem was gone. I'm sure it would go away if I downgraded this time too (not that I plan to).

Also, I checked with the iSCSI clients, and my colleagues told me the iSCSI subsystem was serving data just fine during this lockup. I should also note that, in my opinion, I would be getting read/write errors on the zpool disks if they were timing out; at least that is what I see when the hardware is really faulty (which is not the case on this server).

In case I get more similar lockups (very probable), what else can I do to diagnose the problem? Either to rule out the hardware, or, if I'm mistaken, to prove that it really is the hardware?
Comment 9 Andriy Gapon 2018-04-06 12:23:36 UTC
(In reply to emz from comment #8)
Run kgdb and check which thread owns spa_namespace_lock.
Then switch to that thread and get its backtrace.
That's in addition to capturing procstat -kk -a.

If you can afford some down time, it may also be useful to trigger a kernel panic and save a crash dump.  The crash dump is most useful when you have debugging symbols for the kernel and modules.
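
A rough sketch of those steps on a live system (the flag-bit mask on sx_lock and the tid command below are illustrative and vary between versions; "info threads" plus "thread <n>" is an alternative way to switch threads):

# kgdb /boot/kernel/kernel /dev/mem
(kgdb) print/x spa_namespace_lock.sx_lock
(kgdb) print *(struct thread *)(spa_namespace_lock.sx_lock & ~0xf)
(kgdb) tid <td_tid of the thread printed above>
(kgdb) bt

For an exclusively held sx lock, sx_lock is the owning thread pointer with a few flag bits in the low bits, hence the mask. To get the panic and crash dump instead (with dumpdev configured), "sysctl debug.kdb.panic=1" will panic the box; after reboot, savecore places the dump under /var/crash, and kgdb can be pointed at the kernel plus that vmcore.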
Comment 10 Alexandre Kovalenko 2018-05-20 17:53:04 UTC
I am seeing this (or similar) behavior on r333641 with sys/dev/vt rolled back to r329305 (I could not get the console to work without the latter). I dropped into DDB, and spa_namespace_lock is held by the zfsd thread. I will be posting the procstat output and pictures of the backtrace shortly. Unfortunately, I do not have a serial console, so pictures are all I have. I induced a panic from DDB, so I have a dump as well; I can upload it somewhere or make it available for download if it is of any value.
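
For reference, the kind of DDB commands that get at this information (a sketch; availability of some of them depends on kernel debugging options):

db> show allchains
db> trace <tid of the zfsd thread>
db> dump
db> reset

show allchains prints which lock each blocked thread is waiting on and who holds it, trace prints the backtrace of the given thread, and dump/reset write a crash dump to the configured dump device and reboot.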
Comment 11 Alexandre Kovalenko 2018-05-20 17:54:42 UTC
Created attachment 193564 [details]
procstat output

procstat output
Comment 12 Alexandre Kovalenko 2018-05-20 18:13:42 UTC
Created attachment 193568 [details]
Lock owner
Comment 13 Alexandre Kovalenko 2018-05-20 18:15:17 UTC
Created attachment 193569 [details]
Backtrace (part 1)
Comment 14 Alexandre Kovalenko 2018-05-20 18:15:45 UTC
Created attachment 193570 [details]
Backtrace (part 2)
Comment 15 Andriy Gapon 2018-05-20 21:23:51 UTC
(In reply to Alexandre Kovalenko from comment #11)
So, it's:
   13 100015 geom                g_event             mi_switch+0xe6 sleepq_wait+0x2c _sx_xlock_hard+0x314 zvol_geom_access+0x148 g_access+0x1fd g_eli_read_metadata+0x249 g_eli_config+0x3ed g_run_events+0x13e fork_exit+0x83 fork_trampoline+0xe 
  930 100354 zfsd                -                   mi_switch+0xe6 sleepq_wait+0x2c _sleep+0x23e g_access+0xf7 vdev_geom_attach+0x61c vdev_attach_ok+0x29 vdev_geom_open+0x394 vdev_open+0x115 vdev_open_children+0x30 vdev_root_open+0x3a vdev_open+0x115 spa_ld_open_vdevs+0x5e spa_ld_mos_init+0x1be spa_ld_mos_with_trusted_config+0x19 spa_load+0x5c spa_load_best+0x6b spa_open_common+0x11d spa_get_stats+0x4f 

I think I know what's going on in your case (not sure if it's the same as the previous reports in this bug).  It's probably a consequence of base r330977, a fix for bug 225960.
I didn't fully realize it at the time, but that commit introduced a "g_access lock" in disguise.
So we went from a LOR between the GEOM topology lock and spa_namespace_lock, to a race caused by dropping the topology lock, and now to a LOR between spa_namespace_lock and the "g_access lock".

The toughest part now is to decide how to solve the LOR without re-introducing the race.  Or alternatively, how to solve the race without introducing a deadlock.
Comment 16 Andriy Gapon 2018-05-20 21:34:34 UTC
(In reply to Alexandre Kovalenko from comment #10)
Alexandre, could you please open a new bug report for your issue?
It's completely different from the original bug report (comment #0).
Comment 17 Alexandre Kovalenko 2018-05-20 22:55:45 UTC
Created Bug 228384 and moved relevant bits there.
Comment 18 Andriy Gapon 2018-12-06 14:00:03 UTC
Is this (comment #0 or comment #3) still an issue?
Comment 19 emz 2018-12-06 15:27:34 UTC
As for me: no, this hasn't been reproducible since the summer revisions.
Comment 20 Andriy Gapon 2018-12-06 18:02:07 UTC
(In reply to emz from comment #19)
Thank you for the feedback. I am closing this bug. Please re-open if that issue comes back.
Comment 21 Eugene M. Zheganin 2020-09-01 14:31:48 UTC
Got this again on 12.1-RELEASE, with exactly the same symptoms (on different hardware, however). Is anyone besides me interested in digging into this?

root@cg-mr-prod-stg01:/usr/src# ps ax | grep zfs\ list
24931  -  S       0:00.01 /usr/local/bin/sudo /sbin/zfs list -t all
24932  -  D       0:00.35 /sbin/zfs list -t all
24933  -  S       0:00.01 /usr/local/bin/sudo /sbin/zfs list -t all
24934  -  D       0:00.33 /sbin/zfs list -t all
24935  -  S       0:00.01 /usr/local/bin/sudo /sbin/zfs list -t all
24936  -  D       0:00.24 /sbin/zfs list -t all
24937  -  S       0:00.01 /usr/local/bin/sudo /sbin/zfs list -t all
24938  -  D       0:00.23 /sbin/zfs list -t all
24939  -  S       0:00.01 /usr/local/bin/sudo /sbin/zfs list -t all
24940  -  D       0:00.05 /sbin/zfs list -t all
21746  7  T       0:00.00 zfs list -t all
24955  7  R+      0:00.00 grep zfs list
24950  3  D+      0:00.00 zfs list -t all
root@cg-mr-prod-stg01:/usr/src# procstat -kk 21746
  PID    TID COMM                TDNAME              KSTACK                       
21746 106154 zfs                 -                   mi_switch+0xe2 thread_suspend_check+0x297 exit1+0x87 sigexit+0xb28 postsig+0x304 ast+0x327 doreti_ast+0x1f