Bug 255048 - arm64/ROCKPRO64 freeze during heavy IO on external USB 3 disk
Summary: arm64/ROCKPRO64 freeze during heavy IO on external USB 3 disk
Status: New
Alias: None
Product: Base System
Classification: Unclassified
Component: kern
Version: 13.0-STABLE
Hardware: arm64 Any
Importance: --- Affects Only Me
Assignee: freebsd-arm (Nobody)
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2021-04-14 11:17 UTC by Henri Hennebert
Modified: 2021-04-29 10:22 UTC
CC List: 1 user

See Also:


Description Henri Hennebert 2021-04-14 11:17:26 UTC

    
Comment 1 Henri Hennebert 2021-04-14 11:41:29 UTC
FreeBSD keystone.lab.bel 13.0-STABLE FreeBSD 13.0-STABLE #0 stable/13-n245189-236d1f8c1773

The disk is under ZFS.

During a `git pull` in /usr/ports, the system shows the da0 disk 100% busy, and after ~2 minutes the system freezes.

The system still responds to ping.

I connect on the serial console (CR ~ ^B) to enter ddb:

db> show msgbuf
...
swap_pager: cannot allocate bio
swap_pager: cannot allocate bio
swap_pager: cannot allocate bio
swap_pager: cannot allocate bio
swap_pager: cannot allocate bio
swap_pager: cannot allocate bio

db> show allchains
chain 1:
 thread 115444 (pid 87479, git) is sleeping on 0xffff0000ef3dfc08 "aw.aew_cv"
chain 2:
 thread 115446 (pid 87479, git) is sleeping on 0xffffa00058d653e8 "zio->io_cv"
chain 3:
 thread 115447 (pid 87479, git) is sleeping on 0xffffa000b26cf3e8 "zio->io_cv"
chain 4:
 thread 115450 (pid 87479, git) is sleeping on 0xffff0000ef3cbc08 "aw.aew_cv"
chain 5:
 thread 115452 (pid 87479, git) is sleeping on 0xffffa000091518b8 "zio->io_cv"
chain 6:
 thread 115453 (pid 87479, git) is sleeping on 0xffff0000ef3e9c08 "aw.aew_cv"
chain 7:
 thread 115455 (pid 87479, git) is sleeping on 0xffffa00058d028b8 "zio->io_cv"
chain 8:
 thread 115458 (pid 87479, git) is blocked on sx "zfsvfs->z_hold_mtx[i]" XLOCK
 thread 115455 (pid 87479, git) is sleeping on 0xffffa00058d028b8 "zio->io_cv"
chain 9:
 thread 115459 (pid 87479, git) is sleeping on 0xffffa000971718b8 "zio->io_cv"
chain 10:
 thread 115460 (pid 87479, git) is sleeping on 0xffff00010420dc08 "aw.aew_cv"
chain 11:
 thread 115462 (pid 87479, git) is sleeping on 0xffff000104217c08 "aw.aew_cv"
chain 12:
 thread 115463 (pid 87479, git) is sleeping on 0xffff00010421cc08 "aw.aew_cv"
chain 13:
 thread 101506 (pid 1214, god.plugin) is sleeping on 0xffff0000ecdb70a8 "aw.aew_cv"
chain 14:
 thread 101508 (pid 1214, god.plugin) is sleeping on 0xffffa000e430da80 "vmpfw"
chain 15:
 thread 101510 (pid 1214, god.plugin) is sleeping on 0xffff0000ecc5e0a8 "aw.aew_cv"
chain 16:
 thread 101513 (pid 1214, god.plugin) is sleeping on 0xffff0000ecd990a8 "aw.aew_cv"
chain 17:
 thread 101352 (pid 1211, apps.plugin) is sleeping on 0xffffa00058d658b8 "zio->io_cv"
chain 18:
 thread 101448 (pid 1195, httpd) is sleeping on 0xffff0000ecc9f0a8 "aw.aew_cv"
chain 19:
 thread 101366 (pid 1179, httpd) is sleeping on 0xffffa0008c253d88 "zio->io_cv"
chain 20:
 thread 101346 (pid 1141, netdata) is sleeping on 0xffff0000e9424cb8 "aw.aew_cv"
chain 21:
 thread 101355 (pid 1121, sshd) is sleeping on 0xffff000000b24d80 "pfault"
chain 22:
 thread 101350 (pid 1083, sendmail) is sleeping on 0xffff000000b24d80 "vmwait"
chain 23:
 thread 101349 (pid 1079, inetd) is sleeping on 0xffffa00027577b40 "select"
chain 24:
 thread 101347 (pid 1050, ntpd) is sleeping on 0xffff0000e93fd0a8 "aw.aew_cv"
chain 25:
 thread 101314 (pid 970, syslogd) is sleeping on 0xffff000000b24d80 "pfault"
chain 26:
 thread 100152 (pid 31, syncer) is sleeping on 0xffff000000d5f480 "syncer"
chain 27:
 thread 100151 (pid 30, vnlru) is blocked on lockmgr  DEXCL
 thread 100151 (pid 30, vnlru) is blocked on lockmgr  DEXCL
 thread 100151 (pid 30, vnlru) is blocked on lockmgr  DEXCL
 thread 100151 (pid 30, vnlru) is blocked on lockmgr  DEXCL
 thread 100151 (pid 30, vnlru) is blocked on lockmgr  DEXCL
 thread 100151 (pid 30, vnlru) is blocked on lockmgr  DEXCL
 thread 100151 (pid 30, vnlru) is blocked on lockmgr  DEXCL
 thread 100151 (pid 30, vnlru) is blocked on lockmgr  DEXCL
 thread 100151 (pid 30, vnlru) is blocked on lockmgr  DEXCL
 thread 100151 (pid 30, vnlru) is blocked on lockmgr  DEXCL
 thread 100151 (pid 30, vnlru) is blocked on lockmgr  DEXCL
 thread 100151 (pid 30, vnlru) is blocked on lockmgr  DEXCL
 thread 100151 (pid 30, vnlru) is blocked on lockmgr  DEXCL
 thread 100151 (pid 30, vnlru) is blocked on lockmgr  DEXCL
 thread 100151 (pid 30, vnlru) is blocked on lockmgr  DEXCL
 thread 100151 (pid 30, vnlru) is blocked on lockmgr  DEXCL
 thread 100151 (pid 30, vnlru) is blocked on lockmgr  DEXCL
 thread 100151 (pid 30, vnlru) is blocked on lockmgr  DEXCL
 thread 100151 (pid 30, vnlru) is blocked on lockmgr  DEXCL
 thread 100151 (pid 30, vnlru) is blocked on lockmgr  DEXCL
 thread 100151 (pid 30, vnlru) is blocked on lockmgr  DEXCL
... 
...continues for at least 1400 lines.
...
db> show proc 30
Process 30 (vnlru) at 0xffffa00006449000:
 state: NORMAL
 uid: 0  gids: 0
 parent: pid 0 at 0xffff000000cf1828
 ABI: null
 flag: 0x10000204  flag2: 0
 reaper: 0xffff000000cf1828 reapsubtree: 30
 sigparent: 20
 vmspace: 0xffff000000cf22c8
   (map 0xffff000000cf22c8)
   (map.pmap 0xffff000000cf2388)
   (pmap 0xffff000000cf23e8)
 threads: 1
100151                   D       vlruwt  0xffffa00006449000  [vnlru]
db> tr 100151
Tracing pid 30 tid 100151 td 0xffffa0000660f580
sched_switch() at mi_switch+0xf0
         pc = 0xffff0000003dc7c0  lr = 0xffff0000003b2544
         sp = 0xffff0000da1fc760  fp = 0xffff0000da1fc7b0
 
mi_switch() at sleepq_timedwait+0x28
         pc = 0xffff0000003b2544  lr = 0xffff00000040ce64
         sp = 0xffff0000da1fc7c0  fp = 0xffff0000da1fc7f0
 
sleepq_timedwait() at _sleep+0x194
         pc = 0xffff00000040ce64  lr = 0xffff0000003b1960
         sp = 0xffff0000da1fc800  fp = 0xffff0000da1fc830
 
_sleep() at vnlru_proc+0xc54
         pc = 0xffff0000003b1960  lr = 0xffff0000004a4c18
         sp = 0xffff0000da1fc840  fp = 0xffff0000da1fc8f0
 
vnlru_proc() at fork_exit+0x88
         pc = 0xffff0000004a4c18  lr = 0xffff0000003547d0
         sp = 0xffff0000da1fc900  fp = 0xffff0000da1fc950
--More--
fork_exit() at fork_trampoline+0x10
         pc = 0xffff0000003547d0  lr = 0xffff0000007cfef8
         sp = 0xffff0000da1fc960  fp = 0x0000000000000000

db> show mount
0xffff0000e663e100 keystone/ROOT/default on / (zfs)
0xffff0000bbb25100 devfs on /dev (devfs)
0xffff0000e663d600 procfs on /proc (procfs)
0xffff0000e663cb00 fdescfs on /dev/fd (fdescfs)
0xffff0000e9a4e100 keystone/var/spool on /var/spool (zfs)
0xffff0000e9a4d600 keystone/var/log on /var/log (zfs)
0xffff0000bbb24600 keystone/tmp on /tmp (zfs)
0xffff0000e9a48100 keystone/usr/local on /usr/local (zfs)
0xffff0000e663c000 keystone/var/db on /var/db (zfs)
0xffff0000e9a47600 keystone/home on /home (zfs)
0xffff0000bbb23b00 keystone/usr/ports on /usr/ports (zfs)
0xffff0000e9a4c000 keystone/usr/src on /usr/src (zfs)
0xffff0000e9a59100 keystone/var/tmp on /var/tmp (zfs)
0xffff0000e9a68100 keystone/usr/obj on /usr/obj (zfs)
0xffff0000e9a4cb00 keystone/var/imap on /var/imap (zfs)
0xffff0000e9a4b100 keystone/var/audit on /var/audit (zfs)
0xffff0000bbb23000 keystone/var/spool/openldap on /var/spool/openldap (zfs)
0xffff0000e9a58600 keystone/var/spool/httpd on /var/spool/httpd (zfs)
0xffff0000e9a46b00 keystone/var/spool/imap on /var/spool/imap (zfs)
0xffff0000ea23a100 keystone/var/spool/postgres on /var/spool/postgres (zfs)
0xffff0000ea2ef100 keystone/poudriere on /usr/local/poudriere (zfs)
0xffff0000ea239600 keystone/poudriere/data on /usr/local/poudriere/data (zfs)
0xffff0000e9a57b00 keystone/poudriere/ports on /usr/local/poudriere/ports (zfs)
0xffff0000e9a4a600 keystone/poudriere/jails on /usr/local/poudriere/jails (zfs)
0xffff0000ea238b00 keystone/poudriere/ports/default on /usr/local/poudriere/ports/default (zfs)
0xffff0000ea238000 keystone/poudriere/data/packages on /usr/local/poudriere/data/packages (zfs)
0xffff0000ea2ee600 keystone/poudriere/data/cache on /usr/local/poudriere/data/cache (zfs)
0xffff0000ea2edb00 keystone/poudriere/data/logs on /usr/local/poudriere/data/logs (zfs)
0xffff0000e9a57000 keystone/poudriere/data/images on /usr/local/poudriere/data/images (zfs)
0xffff0000e9a66b00 keystone/poudriere/data/.m on /usr/local/poudriere/data/.m (zfs)
0xffff0000e9a49b00 keystone/poudriere/jails/13aarch64 on /usr/local/poudriere/jails/13aarch64 (zfs)
0xffff0000e9a66000 keystone/poudriere/data/wrkdirs on /usr/local/poudriere/data/wrkdirs (zfs)
0xffff0000e9a49000 keystone/usr/ports/distfiles on /usr/ports/distfiles (zfs)

More info: show mount <addr>

db> show page
vm_cnt.v_free_count: 3095
vm_cnt.v_inactive_count: 80
vm_cnt.v_active_count: 287
vm_cnt.v_laundry_count: 57
vm_cnt.v_wire_count: 979462
vm_cnt.v_free_reserved: 1347
vm_cnt.v_free_min: 6266
vm_cnt.v_free_target: 21023
vm_cnt.v_inactive_target: 31534
db> show pageq
pq_free 3095
dom 0 page_cnt 983828 free 3095 pq_act 287 pq_inact 80 pq_laund 57 pq_unsw 0

I encounter the same problem on another ROCKPRO64 running 13.0-RELEASE.
Comment 2 Henri Hennebert 2021-04-14 11:52:24 UTC
In case it is relevant to the problem:

[root@keystone ~]# sysctl dev.cpu.{0,1,2,3,4,5}.freq
dev.cpu.0.freq: 1416
dev.cpu.1.freq: 1416
dev.cpu.2.freq: 1416
dev.cpu.3.freq: 1416
dev.cpu.4.freq: 1800
dev.cpu.5.freq: 1800
Comment 3 Henri Hennebert 2021-04-14 17:40:41 UTC
I reproduce the freeze, with the same output but _without_

"swap_pager: cannot allocate bio"

in the msgbuf.

db> sysctl vfs.numvnodes
vfs.numvnodes: 151830
db> sysctl kern.maxvnodes
kern.maxvnodes: 800000
Comment 4 Henri Hennebert 2021-04-15 10:09:58 UTC
After adding the debugging paraphernalia:

options         DEADLKRES
options         INVARIANTS
options         INVARIANT_SUPPORT
options         WITNESS
options         WITNESS_SKIPSPIN
options         DEBUG_LOCKS
options         DEBUG_VFS_LOCKS
options         DIAGNOSTIC

I run the `git pull`, which completes successfully.

During this I run:

while :; do  sysctl vfs.numvnodes; sleep 5; done

vfs.numvnodes: 2109
vfs.numvnodes: 2111
vfs.numvnodes: 2111
vfs.numvnodes: 2149
vfs.numvnodes: 5803
vfs.numvnodes: 10623
vfs.numvnodes: 13623
vfs.numvnodes: 18365
vfs.numvnodes: 20625
vfs.numvnodes: 24242
vfs.numvnodes: 28920
vfs.numvnodes: 31991
vfs.numvnodes: 36363
vfs.numvnodes: 38773
vfs.numvnodes: 42396
vfs.numvnodes: 46801
vfs.numvnodes: 47240
vfs.numvnodes: 48841
vfs.numvnodes: 53090
vfs.numvnodes: 54835
vfs.numvnodes: 56418
vfs.numvnodes: 60929
vfs.numvnodes: 63236
vfs.numvnodes: 65361
vfs.numvnodes: 69478
vfs.numvnodes: 71030
vfs.numvnodes: 74277
vfs.numvnodes: 78259
vfs.numvnodes: 78271
vfs.numvnodes: 80496
vfs.numvnodes: 84368
vfs.numvnodes: 86060  /* disk I/O stall */
vfs.numvnodes: 86060
vfs.numvnodes: 86060
vfs.numvnodes: 86060
vfs.numvnodes: 88370
vfs.numvnodes: 91792
vfs.numvnodes: 92484
vfs.numvnodes: 93071
vfs.numvnodes: 97131
vfs.numvnodes: 100652
vfs.numvnodes: 101711
vfs.numvnodes: 105396
vfs.numvnodes: 107957 /* disk I/O stall */
vfs.numvnodes: 107957
vfs.numvnodes: 107957
vfs.numvnodes: 107957
vfs.numvnodes: 107957
vfs.numvnodes: 108053
vfs.numvnodes: 111744
vfs.numvnodes: 115265
vfs.numvnodes: 115265
vfs.numvnodes: 115265
vfs.numvnodes: 115265
vfs.numvnodes: 115265
vfs.numvnodes: 117571
vfs.numvnodes: 121308
vfs.numvnodes: 122212 /* disk I/O stall */
vfs.numvnodes: 122212
vfs.numvnodes: 122212
vfs.numvnodes: 122212
vfs.numvnodes: 122212
vfs.numvnodes: 122212
vfs.numvnodes: 123161
vfs.numvnodes: 126698
vfs.numvnodes: 129217
vfs.numvnodes: 129219 /* disk I/O stall */
vfs.numvnodes: 129219
vfs.numvnodes: 129219
vfs.numvnodes: 129219
vfs.numvnodes: 129219
vfs.numvnodes: 129219
vfs.numvnodes: 131464
vfs.numvnodes: 134805
vfs.numvnodes: 136529
vfs.numvnodes: 136529
vfs.numvnodes: 136529
vfs.numvnodes: 137909
vfs.numvnodes: 141831
vfs.numvnodes: 143280
vfs.numvnodes: 143280
vfs.numvnodes: 143280
vfs.numvnodes: 143355
vfs.numvnodes: 146508
vfs.numvnodes: 149563
vfs.numvnodes: 149567
vfs.numvnodes: 149567
vfs.numvnodes: 149567
vfs.numvnodes: 152239
vfs.numvnodes: 106275
vfs.numvnodes: 666
vfs.numvnodes: 670
vfs.numvnodes: 3239
vfs.numvnodes: 3272
vfs.numvnodes: 3273
vfs.numvnodes: 3273
vfs.numvnodes: 3277
vfs.numvnodes: 3277
vfs.numvnodes: 3277
vfs.numvnodes: 3277
vfs.numvnodes: 3277
vfs.numvnodes: 3277
vfs.numvnodes: 3277
vfs.numvnodes: 3277

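The stall plateaus marked "disk I/O stall" above (runs of identical vfs.numvnodes readings) can be picked out mechanically rather than by eye. A minimal sketch, not part of the original report, that filters the output of the monitoring loop through POSIX awk:

```shell
# Read "vfs.numvnodes: N" lines on stdin and print a message the first
# time a value repeats for three consecutive samples (i.e. ~10 s of no
# vnode growth with the 5-second loop above). The name detect_stalls
# and the threshold of 3 samples are illustrative choices.
detect_stalls() {
  awk '{ n = $2 }
       n == prev { run++; if (run == 2) print "stall at numvnodes " n }
       n != prev { run = 0 }
       { prev = n }'
}
```

Used as `while :; do sysctl vfs.numvnodes; sleep 5; done | detect_stalls`, this would have flagged each of the plateaus at 86060, 107957, 122212, and 129219.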
And in dmesg I get:

Expensive timeout(9) function: 0xffff0000003e9654(0) 11.542417707 s

Going to ddb:

db> br 0xffff0000003e9654
db> show br
 Map      Count    Address
*ffff000000da1190     1    pffasttimo
db> del 0xffff0000003e9654
db> c
Comment 5 Henri Hennebert 2021-04-17 10:08:05 UTC
New freeze, new info after git pull:

dmesg:
pid 1736 (postgres), jid 0, uid 770, was killed: out of swap space
pid 1391 (squid), jid 0, uid 100, was killed: out of swap space
pid 1168 (named), jid 0, uid 53, was killed: out of swap space
pid 3059 (postgres), jid 0, uid 770, was killed: out of swap space
pid 24693 (postgres), jid 0, uid 770, was killed: out of swap space
pid 1661 (postgres), jid 0, uid 770, was killed: out of swap space
pid 1663 (postgres), jid 0, uid 770, was killed: out of swap space
pid 1658 (postgres), jid 0, uid 770, was killed: out of swap space
pid 1659 (postgres), jid 0, uid 770, was killed: out of swap space
pid 1660 (postgres), jid 0, uid 770, was killed: out of swap space
pid 1655 (postgres), jid 0, uid 770, was killed: out of swap space
KDB: enter: Break to debugger

db> show allchains
...
thread 101295 (pid 27517, bash) is blocked on lockmgr EXCL
 thread 101295 (pid 27517, bash) is blocked on lockmgr EXCL
 thread 101295 (pid 27517, bash) is blocked on lockmgr EXCL
 thread 101295 (pid 27517, bash) is blocked on lockmgr EXCL
 thread 101295 (pid 27517, bash) is blocked on lockmgr EXCL
 thread 101295 (pid 27517, bash) is blocked on lockmgr EXCL
 thread 101295 (pid 27517, bash) is blocked on lockmgr EXCL
 thread 101295 (pid 27517, bash) is blocked on lockmgr EXCL
 thread 101295 (pid 27517, bash) is blocked on lockmgr EXCL
 thread 101295 (pid 27517, bash) is blocked on lockmgr EXCL
 thread 101295 (pid 27517, bash) is blocked on lockmgr EXCL
 thread 101295 (pid 27517, bash) is blocked on lockmgr EXCL
 thread 101295 (pid 27517, bash) is blocked on lockmgr EXCL
 thread 101295 (pid 27517, bash) is blocked on lockmgr EXCL
 thread 101295 (pid 27517, bash) is blocked on lockmgr EXCL
 thread 101295 (pid 27517, bash) is blocked on lockmgr EXCL
 thread 101295 (pid 27517, bash) is blocked on lockmgr EXCL
 thread 101295 (pid 27517, bash) is blocked on lockmgr EXCL
 thread 101295 (pid 27517, bash) is blocked on lockmgr EXCL
 thread 101295 (pid 27517, bash) is blocked on lockmgr EXCL
 thread 101295 (pid 27517, bash) is blocked on lockmgr EXCL
 thread 101295 (pid 27517, bash) is blocked on lockmgr EXCL
 thread 101295 (pid 27517, bash) is blocked on lockmgr EXCL
 thread 101295 (pid 27517, bash) is blocked on lockmgr EXCL
 thread 101295 (pid 27517, bash) is blocked on lockmgr EXCL
...
more than 16000 lines...

db> show freepages
DOMAIN: 0
FREE LIST 0:

  ORDER (SIZE)  |  NUMBER
                |  POOL 0  |  POOL 1
--            -- --      -- --      --
  11 (008192K)  |  000000  |  000000
  10 (004096K)  |  000000  |  000000
  09 (002048K)  |  000000  |  000000
  08 (001024K)  |  000000  |  000000
  07 (000512K)  |  000000  |  000000
  06 (000256K)  |  000000  |  000000
  05 (000128K)  |  000000  |  000000
  04 (000064K)  |  000000  |  000001
  03 (000032K)  |  000000  |  000003
  02 (000016K)  |  000019  |  000020
  01 (000008K)  |  000094  |  000083
  00 (000004K)  |  001060  |  000605
db> panic
Comment 6 Henri Hennebert 2021-04-22 09:21:58 UTC
New freeze with different ddb output.

For a complete log of ddb see http://tignes.restart.be/Xfer/keystone-2021-04-22.log

Some extracts from this log:

db> show alllocks
Process 76413 (git) thread 0xffffa0009ae9e580 (102119)
shared lockmgr zfs (zfs) r = 0 (0xffffa0009d05d070) locked @ /usr/src/sys/kern/vfs_cache.c:4854
Process 76413 (git) thread 0xffffa0009ae9e000 (102120)
shared lockmgr zfs (zfs) r = 0 (0xffffa00090e88440) locked @ /usr/src/sys/kern/vfs_cache.c:4854
Process 76413 (git) thread 0xffffa000c45bf580 (102121)
shared lockmgr zfs (zfs) r = 0 (0xffffa00023c7b628) locked @ /usr/src/sys/kern/vfs_cache.c:4854
Process 76413 (git) thread 0xffffa000c45bf000 (102122)
shared lockmgr zfs (zfs) r = 0 (0xffffa00023c7b628) locked @ /usr/src/sys/kern/vfs_cache.c:4854
Process 76413 (git) thread 0xffffa00008f1f000 (102124)
shared lockmgr zfs (zfs) r = 0 (0xffffa00098c2d440) locked @ /usr/src/sys/kern/vfs_cache.c:4854
Process 76413 (git) thread 0xffffa000ebf14580 (102125)
shared lockmgr zfs (zfs) r = 0 (0xffffa0005725a810) locked @ /usr/src/sys/kern/vfs_cache.c:4854
Process 76413 (git) thread 0xffffa00023e8f580 (102126)
shared lockmgr zfs (zfs) r = 0 (0xffffa0001472f9f8) locked @ /usr/src/sys/kern/vfs_cache.c:4854
Process 76413 (git) thread 0xffffa000b2a80580 (102127)
shared lockmgr zfs (zfs) r = 0 (0xffffa000b51ae628) locked @ /usr/src/sys/kern/vfs_cache.c:4854
Process 76413 (git) thread 0xffffa000b2a80000 (102128)
shared lockmgr zfs (zfs) r = 0 (0xffffa000b51ae440) locked @ /usr/src/sys/kern/vfs_cache.c:4854
Process 76413 (git) thread 0xffffa000ce44f580 (102129)
shared lockmgr zfs (zfs) r = 0 (0xffffa000a5491258) locked @ /usr/src/sys/kern/vfs_cache.c:4854
Process 76413 (git) thread 0xffffa000ce44f000 (102130)
shared lockmgr zfs (zfs) r = 0 (0xffffa000b51ae9f8) locked @ /usr/src/sys/kern/vfs_cache.c:4854
Process 76413 (git) thread 0xffffa000bee77580 (102131)
shared lockmgr zfs (zfs) r = 0 (0xffffa000948b9070) locked @ /usr/src/sys/kern/vfs_cache.c:4854
Process 76413 (git) thread 0xffffa000bee77000 (102132)
shared lockmgr zfs (zfs) r = 0 (0xffffa00099eed440) locked @ /usr/src/sys/kern/vfs_cache.c:4854
Process 76413 (git) thread 0xffffa000665df580 (102133)
shared lockmgr zfs (zfs) r = 0 (0xffffa000b51ae810) locked @ /usr/src/sys/kern/vfs_cache.c:4854
Process 76413 (git) thread 0xffffa000665df000 (102134)
shared lockmgr zfs (zfs) r = 0 (0xffffa0009566cbe0) locked @ /usr/src/sys/kern/vfs_cache.c:4854
Process 76413 (git) thread 0xffffa00040981580 (102135)
shared lockmgr zfs (zfs) r = 0 (0xffffa0004c0ef810) locked @ /usr/src/sys/kern/vfs_cache.c:4854
Process 76413 (git) thread 0xffffa00040981000 (102136)
shared lockmgr zfs (zfs) r = 0 (0xffffa0002f3f8258) locked @ /usr/src/sys/kern/vfs_cache.c:4854
Process 76413 (git) thread 0xffffa00048c3b000 (102138)
shared lockmgr zfs (zfs) r = 0 (0xffffa000f0ee9628) locked @ /usr/src/sys/kern/vfs_cache.c:4854
Process 1213 (god.plugin) thread 0xffffa000a9c86580 (101479)
shared lockmgr zfs (zfs) r = 0 (0xffffa0009ae859f8) locked @ /usr/src/sys/kern/vfs_subr.c:2974
Process 1213 (god.plugin) thread 0xffffa000a9d37000 (101483)
shared lockmgr zfs (zfs) r = 0 (0xffffa0009ae859f8) locked @ /usr/src/sys/kern/vfs_subr.c:2974
Process 1213 (god.plugin) thread 0xffffa000a9c86000 (101484)
shared lockmgr zfs (zfs) r = 0 (0xffffa0009ae859f8) locked @ /usr/src/sys/kern/vfs_subr.c:2974
Process 1213 (god.plugin) thread 0xffffa000eb352580 (101486)
shared lockmgr zfs (zfs) r = 0 (0xffffa0009ae859f8) locked @ /usr/src/sys/kern/vfs_subr.c:2974
Process 1213 (god.plugin) thread 0xffffa00074fae000 (101537)
shared lockmgr zfs (zfs) r = 0 (0xffffa0009ae859f8) locked @ /usr/src/sys/kern/vfs_subr.c:2974
Process 1202 (sshd) thread 0xffffa0001c2c5580 (101328)
shared lockmgr zfs (zfs) r = 0 (0xffffa0000a7fc070) locked @ /usr/src/sys/kern/vfs_subr.c:2974
Process 1201 (getty) thread 0xffffa0000a605580 (100543)
shared lockmgr zfs (zfs) r = 0 (0xffffa00070c2f9f8) locked @ /usr/src/sys/kern/vfs_subr.c:2974
Process 1137 (netdata) thread 0xffffa0009ae88580 (101439)
shared lockmgr zfs (zfs) r = 0 (0xffffa00023fe3070) locked @ /usr/src/sys/kern/vfs_subr.c:2974
Process 1137 (netdata) thread 0xffffa00074e35580 (101440)
shared lockmgr zfs (zfs) r = 0 (0xffffa00023fe3070) locked @ /usr/src/sys/kern/vfs_subr.c:2974
Process 1137 (netdata) thread 0xffffa0001c2c5000 (101442)
shared lockmgr zfs (zfs) r = 0 (0xffffa0002384bbe0) locked @ /usr/src/sys/kern/vfs_subr.c:2974
Process 1137 (netdata) thread 0xffffa000975fe580 (101446)
shared lockmgr zfs (zfs) r = 0 (0xffffa0000a661628) locked @ /usr/src/sys/kern/vfs_subr.c:2974
Process 1082 (sendmail) thread 0xffffa0001c375580 (101334)
shared lockmgr zfs (zfs) r = 0 (0xffffa00023da2628) locked @ /usr/src/sys/kern/vfs_subr.c:2974
Process 966 (syslogd) thread 0xffffa0000c93c580 (100548)
shared lockmgr zfs (zfs) r = 0 (0xffffa0001c0e6810) locked @ /usr/src/sys/kern/vfs_subr.c:2974
Process 784 (devd) thread 0xffffa0000c82d580 (100546)
shared lockmgr zfs (zfs) r = 0 (0xffffa0002380b070) locked @ /usr/src/sys/kern/vfs_subr.c:2974
Process 26 (pagedaemon) thread 0xffffa000056ba580 (100152)
shared lockmgr zfs (zfs) r = 0 (0xffffa00037740dc8) locked @ /usr/src/sys/kern/vfs_subr.c:2974
Process 0 (kernel) thread 0xffffa0000537e000 (100104)
exclusive lockmgr zfs (zfs) r = 0 (0xffffa00036915070) locked @ /usr/src/sys/kern/vfs_subr.c:1589
exclusive sx arc vnlru lock (arc vnlru lock) r = 0 (0xffff00000146bc38) locked @ /usr/src/sys/contrib/openzfs/module/os/freebsd/zfs/arc_os.c:163
db> show lockedvnods
Locked vnodes
vnode 0xffffa0000a6615b8: type VREG
    usecount 2, writecount 0, refcount 2 seqc users 0
    hold count flags ()
    flags ()
    v_object 0xffffa0000a52a840 ref 138 pages 6 cleanbuf 0 dirtybuf 0
    lock type zfs: SHARED (count 1)
vnode 0xffffa0000a7fc000: type VREG
    usecount 2, writecount 0, refcount 2 seqc users 0
    hold count flags ()
    flags ()
    v_object 0xffffa0000c9fa948 ref 91 pages 5 cleanbuf 0 dirtybuf 0
    lock type zfs: SHARED (count 1)
vnode 0xffffa0002380b000: type VREG
    usecount 3, writecount -4, refcount 2 seqc users 0
    hold count flags ()
    flags ()
    v_object 0xffffa000239d3318 ref 5 pages 1 cleanbuf 0 dirtybuf 0
    lock type zfs: SHARED (count 1)
vnode 0xffffa0001c0e67a0: type VREG
    usecount 3, writecount -3, refcount 2 seqc users 0
    hold count flags ()
    flags ()
    v_object 0xffffa0001c1b5210 ref 4 pages 1 cleanbuf 0 dirtybuf 0
    lock type zfs: SHARED (count 1)
vnode 0xffffa00023da25b8: type VREG
    usecount 4, writecount -6, refcount 2 seqc users 0
    hold count flags ()
    flags ()
    v_object 0xffffa0001c1b0318 ref 7 pages 1 cleanbuf 0 dirtybuf 0
    lock type zfs: SHARED (count 1)
vnode 0xffffa00037740d58: type VREG
    usecount 3, writecount 2, refcount 2 seqc users 0
    hold count flags ()
    flags ()
    v_object 0xffffa0003777fe70 ref 2 pages 8 cleanbuf 0 dirtybuf 0
    lock type zfs: SHARED (count 1)
vnode 0xffffa00070c2f988: type VREG
    usecount 3, writecount -4, refcount 2 seqc users 0
    hold count flags ()
    flags ()
    v_object 0xffffa0003712cc60 ref 5 pages 1 cleanbuf 0 dirtybuf 0
    lock type zfs: SHARED (count 1)
vnode 0xffffa00023fe3000: type VREG
    usecount 5, writecount -10, refcount 2 seqc users 0
    hold count flags ()
    flags ()
    v_object 0xffffa000372b1528 ref 11 pages 42 cleanbuf 0 dirtybuf 0
    lock type zfs: SHARED (count 2)
vnode 0xffffa0002384bb70: type VREG
    usecount 2, writecount 0, refcount 2 seqc users 0
    hold count flags ()
    flags ()
    v_object 0xffffa0002387b840 ref 20 pages 1 cleanbuf 0 dirtybuf 0
    lock type zfs: SHARED (count 1)
vnode 0xffffa0009ae85988: type VREG
    usecount 7, writecount -3, refcount 2 seqc users 0
    hold count flags ()
    flags ()
    v_object 0xffffa00070eb2108 ref 9 pages 38 cleanbuf 0 dirtybuf 0
    lock type zfs: SHARED (count 5)
vnode 0xffffa0001472f988: type VDIR
    usecount 1, writecount 0, refcount 2 seqc users 0 mountedhere 0
    hold count flags ()
    flags ()
    lock type zfs: SHARED (count 1)
vnode 0xffffa00023c7b5b8: type VDIR
    usecount 2, writecount 0, refcount 2 seqc users 0 mountedhere 0
    hold count flags ()
    flags ()
    lock type zfs: SHARED (count 2)
vnode 0xffffa0004c0ef7a0: type VDIR
    usecount 1, writecount 0, refcount 2 seqc users 0 mountedhere 0
    hold count flags ()
    flags ()
    lock type zfs: SHARED (count 1)
vnode 0xffffa0002f3f81e8: type VDIR
    usecount 1, writecount 0, refcount 2 seqc users 0 mountedhere 0
    hold count flags ()
    flags ()
    lock type zfs: SHARED (count 1)
vnode 0xffffa0005725a7a0: type VDIR
    usecount 1, writecount 0, refcount 2 seqc users 0 mountedhere 0
    hold count flags ()
    flags ()
    lock type zfs: SHARED (count 1)
vnode 0xffffa00036915000: type VREG
    usecount 0, writecount 0, refcount 1 seqc users 1
    hold count flags ()
    flags (VIRF_DOOMED)
    lock type zfs: EXCL by thread 0xffffa0000537e000 (pid 0, kernel, tid 100104)
vnode 0xffffa00090e883d0: type VDIR
    usecount 1, writecount 0, refcount 2 seqc users 0 mountedhere 0
    hold count flags ()
    flags ()
    lock type zfs: SHARED (count 1)
vnode 0xffffa000948b9000: type VDIR
    usecount 1, writecount 0, refcount 2 seqc users 0 mountedhere 0
    hold count flags ()
    flags ()
    lock type zfs: SHARED (count 1)
vnode 0xffffa00098c2d3d0: type VDIR
    usecount 1, writecount 0, refcount 2 seqc users 0 mountedhere 0
    hold count flags ()
    flags ()
    lock type zfs: SHARED (count 1)
vnode 0xffffa0009d05d000: type VDIR
    usecount 1, writecount 0, refcount 1 seqc users 0 mountedhere 0
    hold count flags ()
    flags ()
    lock type zfs: SHARED (count 1)
vnode 0xffffa00099eed3d0: type VDIR
    usecount 1, writecount 0, refcount 1 seqc users 0 mountedhere 0
    hold count flags ()
    flags ()
    lock type zfs: SHARED (count 1)
vnode 0xffffa000a54911e8: type VDIR
    usecount 1, writecount 0, refcount 2 seqc users 0 mountedhere 0
    hold count flags ()
    flags ()
    lock type zfs: SHARED (count 1)
vnode 0xffffa000f0ee95b8: type VDIR
    usecount 1, writecount 0, refcount 1 seqc users 0 mountedhere 0
    hold count flags ()
    flags ()
    lock type zfs: SHARED (count 1)
vnode 0xffffa0009566cb70: type VDIR
    usecount 1, writecount 0, refcount 2 seqc users 0 mountedhere 0
    hold count flags ()
    flags ()
    lock type zfs: SHARED (count 1)
vnode 0xffffa000b51ae3d0: type VDIR
    usecount 1, writecount 0, refcount 1 seqc users 0 mountedhere 0
    hold count flags ()
    flags ()
    lock type zfs: SHARED (count 1)
vnode 0xffffa000b51ae5b8: type VDIR
    usecount 1, writecount 0, refcount 1 seqc users 0 mountedhere 0
    hold count flags ()
    flags ()
    lock type zfs: SHARED (count 1)
vnode 0xffffa000b51ae7a0: type VDIR
    usecount 1, writecount 0, refcount 1 seqc users 0 mountedhere 0
    hold count flags ()
    flags ()
    lock type zfs: SHARED (count 1)
vnode 0xffffa000b51ae988: type VDIR
    usecount 1, writecount 0, refcount 2 seqc users 0 mountedhere 0
    hold count flags ()
    flags ()
    lock type zfs: SHARED (count 1)
db> show allchains
...
chain 2:
 thread 102119 (pid 76413, git) is blocked on sx "zfsvfs->z_hold_mtx[i]" XLOCK
 thread 102131 (pid 76413, git) is sleeping on 0xffffa0009b04bd88 "zio->io_cv"
chain 3:
 thread 102120 (pid 76413, git) is blocked on sx "zfsvfs->z_hold_mtx[i]" XLOCK
 thread 102131 (pid 76413, git) is sleeping on 0xffffa0009b04bd88 "zio->io_cv"
...
chain 11:
 thread 102129 (pid 76413, git) is blocked on sx "zfsvfs->z_hold_mtx[i]" XLOCK
 thread 102131 (pid 76413, git) is sleeping on 0xffffa0009b04bd88 "zio->io_cv"
...
chain 17:
 thread 102135 (pid 76413, git) is blocked on sx "zfsvfs->z_hold_mtx[i]" XLOCK
 thread 102131 (pid 76413, git) is sleeping on 0xffffa0009b04bd88 "zio->io_cv"
...
chain 388:
 thread 100102 (pid 0, arc_prune_0) is blocked on sx "arc vnlru lock" XLOCK
 thread 100104 (pid 0, arc_prune_2) is blocked on sx "zfsvfs->z_hold_mtx[i]" XLOCK
 thread 102124 (pid 76413, git) is sleeping on 0xffffa0009b6493e8 "zio->io_cv"
chain 389:
 thread 100103 (pid 0, arc_prune_1) is blocked on sx "arc vnlru lock" XLOCK
 thread 100104 (pid 0, arc_prune_2) is blocked on sx "zfsvfs->z_hold_mtx[i]" XLOCK
 thread 102124 (pid 76413, git) is sleeping on 0xffffa0009b6493e8 "zio->io_cv"
chain 390:
 thread 100104 (pid 0, arc_prune_2) is blocked on sx "zfsvfs->z_hold_mtx[i]" XLOCK
 thread 102124 (pid 76413, git) is sleeping on 0xffffa0009b6493e8 "zio->io_cv"
chain 391:
 thread 100105 (pid 0, arc_prune_3) is blocked on sx "arc vnlru lock" XLOCK
 thread 100104 (pid 0, arc_prune_2) is blocked on sx "zfsvfs->z_hold_mtx[i]" XLOCK
 thread 102124 (pid 76413, git) is sleeping on 0xffffa0009b6493e8 "zio->io_cv"
chain 392:
 thread 100106 (pid 0, arc_prune_4) is blocked on sx "arc vnlru lock" XLOCK
 thread 100104 (pid 0, arc_prune_2) is blocked on sx "zfsvfs->z_hold_mtx[i]" XLOCK
 thread 102124 (pid 76413, git) is sleeping on 0xffffa0009b6493e8 "zio->io_cv"
chain 393:
 thread 100107 (pid 0, arc_prune_5) is blocked on sx "arc vnlru lock" XLOCK
 thread 100104 (pid 0, arc_prune_2) is blocked on sx "zfsvfs->z_hold_mtx[i]" XLOCK
 thread 102124 (pid 76413, git) is sleeping on 0xffffa0009b6493e8 "zio->io_cv"
...
db> show msgbuf
...
KDB: enter: Break to debugger
lock order reversal:
 1st 0xffffa00063d67070 zfs (zfs, lockmgr) @ /usr/src/sys/kern/vfs_mount.c:1071
 2nd 0xffffa000eb5ed9f8 devfs (devfs, lockmgr) @ /usr/src/sys/kern/vfs_mount.c:1083
lock order devfs -> zfs established at:
#0 0xffff0000003968fc at witness_checkorder+0x40c
#1 0xffff00000030b5cc at lockmgr_xlock+0x40
#2 0xffff0000004104a4 at _vn_lock+0x54
#3 0xffff0000003f2c5c at vfs_domount+0xfc4
#4 0xffff0000003f13f8 at vfs_donmount+0x794
#5 0xffff0000003f523c at kernel_mount+0x48
#6 0xffff0000003f7790 at parse_mount+0x494
#7 0xffff0000003f5e4c at vfs_mountroot+0x3d8
#8 0xffff0000002ce638 at start_init+0x24
#9 0xffff0000002f6ec0 at fork_exit+0x64
#10 0xffff0000006aff28 at fork_trampoline+0x10
lock order zfs -> devfs attempted at:
#0 0xffff0000003970e8 at witness_checkorder+0xbf8
#1 0xffff00000030b5cc at lockmgr_xlock+0x40
#2 0xffff0000004104a4 at _vn_lock+0x54
#3 0xffff0000003f2c5c at vfs_domount+0xfc4
#4 0xffff0000003f13f8 at vfs_donmount+0x794
#5 0xffff0000003f0c24 at sys_nmount+0x60
#6 0xffff0000006b0b34 at do_el0_sync+0x7f8
#7 0xffff000000696a1c at handle_el0_sync+0x90
db> dump
Dumping 3481 out of 3941 MB:..1%..11%..21%..31%..41%..51%..61%..71%..81%..91%
Dump complete