Bug 214923

Summary: kqueue hangs with busy loop
Product:    Base System
Reporter:   yjh0502
Component:  kern
Assignee:   Konstantin Belousov <kib>
Status:     Closed FIXED
Severity:   Affects Only Me
CC:         cem, emilio, kib, op
Priority:   ---
Version:    11.0-RELEASE
Hardware:   amd64
OS:         Any

Attachments:
  Do not clear KN_INFLUX when note is in flux during scan. (flags: none)

Description yjh0502 2016-11-29 08:17:19 UTC
I use watchman[1] to monitor file changes, and the watchman process occasionally hangs in an infinite loop. Here's a dtrace trace of the hanging process.

---
root@hostname:~ # dtrace -n 'profile-997 /pid == 85473/ { @[stack()] = count() }'
dtrace: description 'profile-997 ' matched 1 probe
^C


              kernel`kqueue_kevent+0x369
              kernel`0xffffffff80f8442b
                4

              kernel`kqueue_kevent+0x37b
              kernel`0xffffffff80f8442b
                8

              kernel`kqueue_kevent+0x3b5
              kernel`0xffffffff80f8442b
               14

              kernel`kqueue_kevent+0xda1
              kernel`0xffffffff80f8442b
               25

              kernel`kqueue_kevent+0xdc9
              kernel`0xffffffff80f8442b
               29

              kernel`kqueue_kevent+0x44b
              kernel`0xffffffff80f8442b
               31

              kernel`kqueue_kevent+0x3c7
              kernel`0xffffffff80f8442b
               45

              kernel`kqueue_kevent+0x43c
              kernel`0xffffffff80f8442b
               49

              kernel`kqueue_kevent+0x38c
              kernel`0xffffffff80f8442b
               52

              kernel`kqueue_kevent+0x453
              kernel`0xffffffff80f8442b
               54

              kernel`kqueue_kevent+0x2f0
              kernel`0xffffffff80f8442b
               57

              kernel`kqueue_kevent+0xda3
              kernel`0xffffffff80f8442b
               58

              kernel`kqueue_kevent+0xdaa
              kernel`0xffffffff80f8442b
               58

              kernel`kqueue_kevent+0x3b8
              kernel`0xffffffff80f8442b
               58

              kernel`kqueue_kevent+0x3bd
              kernel`0xffffffff80f8442b
               59

              kernel`kqueue_kevent+0x444
              kernel`0xffffffff80f8442b
               65

              kernel`kqueue_kevent+0xdd7
              kernel`0xffffffff80f8442b
               67

              kernel`kqueue_kevent+0x3cb
              kernel`0xffffffff80f8442b
               83

              kernel`kqueue_kevent+0x34f
              kernel`0xffffffff80f8442b
               91

              kernel`kqueue_kevent+0x440
              kernel`0xffffffff80f8442b
               94

              kernel`kqueue_kevent+0x348
              kernel`0xffffffff80f8442b
               97

              kernel`kqueue_kevent+0xd93
              kernel`0xffffffff80f8442b
              109

              kernel`kqueue_kevent+0x2f5
              kernel`0xffffffff80f8442b
              113

              kernel`kqueue_kevent+0x393
              kernel`0xffffffff80f8442b
              122

              kernel`kqueue_kevent+0x37f
              kernel`0xffffffff80f8442b
              126

              kernel`kqueue_kevent+0x3cf
              kernel`0xffffffff80f8442b
              126

              kernel`kqueue_kevent+0x356
              kernel`0xffffffff80f8442b
              168

              kernel`kqueue_kevent+0xdd0
              kernel`0xffffffff80f8442b
              266

              kernel`kqueue_kevent+0x382
              kernel`0xffffffff80f8442b
              399

              kernel`kqueue_kevent+0x36d
              kernel`0xffffffff80f8442b
              437

              kernel`kqueue_kevent+0x39a
              kernel`0xffffffff80f8442b
              465

              kernel`kqueue_kevent+0x365
              kernel`0xffffffff80f8442b
              969

              kernel`kqueue_kevent+0x3a4
              kernel`0xffffffff80f8442b
             1243

              kernel`kqueue_kevent+0x3a0
              kernel`0xffffffff80f8442b
             3513

              kernel`kqueue_kevent+0x385
              kernel`0xffffffff80f8442b
             3713

              kernel`kqueue_kevent+0xdde
              kernel`0xffffffff80f8442b
             3751

              kernel`kqueue_kevent+0x371
              kernel`0xffffffff80f8442b
             3752

              kernel`kqueue_kevent+0x3c2
              kernel`0xffffffff80f8442b
             3866

              kernel`kqueue_kevent+0x438
              kernel`0xffffffff80f8442b
             3890

              kernel`kqueue_kevent+0x35d
              kernel`0xffffffff80f8442b
             3891

              kernel`kqueue_kevent+0x3ab
              kernel`0xffffffff80f8442b
             3934

              kernel`kqueue_kevent+0xd9a
              kernel`0xffffffff80f8442b
             4046

              kernel`kqueue_kevent+0x3ba
              kernel`0xffffffff80f8442b
             4067

              kernel`kqueue_kevent+0x3ae
              kernel`0xffffffff80f8442b
             4109

              kernel`kqueue_kevent+0xdc2
              kernel`0xffffffff80f8442b
             4115

              kernel`kqueue_kevent+0x341
              kernel`0xffffffff80f8442b
             4125

              kernel`kqueue_kevent+0x447
              kernel`0xffffffff80f8442b
             9275

              kernel`kqueue_kevent+0x374
              kernel`0xffffffff80f8442b
            11043

              kernel`kqueue_kevent+0x3a8
              kernel`0xffffffff80f8442b
            17816
---

Here's ps output
---
root      85473 100.0  0.2   47424   3760  -  L    19Oct16     13:10.30 /usr/local/bin/watchman -p -j --server-encoding=json --output-encoding=bser
---

`top` shows that the watchman process spins in the kqueue call
---
  PID USERNAME     THR PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
85473 root           4  20    0 47424K  3760K *kqueu  2  13:32  97.82% watchman
---

The host uses a ZFS filesystem on root. In this case the problem arose on 11.0-RELEASE-p3, but I have seen it since the 10.3 releases. The host runs on top of a KVM hypervisor. `dmesg` does not show any kernel logs. Please let me know if I'm missing any information.



[1] https://facebook.github.io/watchman/
Comment 1 Konstantin Belousov freebsd_committer freebsd_triage 2016-11-29 09:28:18 UTC
(In reply to yjh0502 from comment #0)
I do not see a clear indicator that the process 'spins on the kqueue syscall'.  The wait channel shows that the process waits for some event subsystem mutex (the star before the name), but due to the truncated output, I am not sure which mutex it is.  It could be the kqueue lock or the kqueue global lock, for instance.

That said, WCPU 100% indicates that there is some other thread in the process which consumes cycles, which is consistent with the report of four threads in the process.

From that data, I am not even convinced that this is a kernel bug rather than an application issue.  To begin with, look at the kernel backtraces of all threads in your process with procstat -kk.  Find and backtrace the owner of the kqueue lock.  All of that assumes that you believe the issue is a kernel problem.  Otherwise, and this is where I would start if I were debugging it myself, look at the app and see what it is doing (wrong).
Comment 2 yjh0502 2016-11-29 12:02:14 UTC
I thought something was going wrong in the kernel because the process does not respond to signals: it is in an uninterruptible state, so pkill -9 watchman cannot kill it. I couldn't even attach gdb to the process. There were other symptoms of hangs in an uninterruptible state when accessing the filesystem; for example, editing a file with vi on the same filesystem hangs when I try to save changes. I had to reboot the system to recover from the failure.

In Linux, if there is a problem with a block device, there are logs and traces in dmesg. I couldn't find any logs in dmesg, so I thought it was not the block device's fault.
Comment 3 Konstantin Belousov freebsd_committer freebsd_triage 2016-11-29 12:20:04 UTC
Start by backtracing all threads in your process, as I noted in the previous reply.  Even better, backtrace all threads in the deadlocked system.
Comment 4 yjh0502 2016-11-29 12:30:55 UTC
Sorry, some additional information: the problem happened on a production server, which runs web servers like haproxy. The problem seems to block some listening threads, so some clients cannot communicate with the web server. I rebooted the machine to recover from the failure. I'll post procstat -kk output when the problem happens again. Please let me know if there is any other output that would help identify the problem.

Thanks for your help.
Comment 5 yjh0502 2016-12-15 08:29:42 UTC
Here's output of procstat -kk

root@foo:/var/log # procstat -kk `pgrep watchman`
  PID    TID COMM             TDNAME           KSTACK
85871 100707 watchman         -                mi_switch+0xd2 sleepq_catch_signals+0xb7 sleepq_wait_sig+0xf _sleep+0x294 sbwait+0x4f soreceive_generic+0x1ae dofileread+0x98 kern_readv+0x68 sys_read+0x84 amd64_syscall+0x4ce Xfast_syscall+0xfb
56477 100556 watchman         -                mi_switch+0xd2 sleepq_catch_signals+0xb7 sleepq_timedwait_sig+0x10 _cv_timedwait_sig_sbt+0x1c4 seltdwait+0xc7 kern_poll+0x296 sys_poll+0x61 amd64_syscall+0x4ce Xfast_syscall+0xfb
56477 101126 watchman         -                mi_switch+0xd2 sleepq_catch_signals+0xb7 sleepq_wait_sig+0xf _sleep+0x294 kern_sigsuspend+0x134 sys_sigsuspend+0x31 amd64_syscall+0x4ce Xfast_syscall+0xfb
56477 101462 watchman         -                mi_switch+0xd2 sleepq_wait+0x3a _sx_slock_hard+0x325 kern_poll+0x51e sys_poll+0x61 amd64_syscall+0x4ce Xfast_syscall+0xfb
56477 101506 watchman         -                kern_kevent+0xb5 sys_kevent+0x11c amd64_syscall+0x4ce Xfast_syscall+0xfb
56477 101507 watchman         -                mi_switch+0xd2 turnstile_wait+0x408 __mtx_lock_sleep+0x13d knote+0x132 VOP_OPEN_APV+0xb5 vn_open_vnode+0x247 vn_open_cred+0x33e kern_openat+0x25c amd64_syscall+0x4ce Xfast_syscall+0xfb
56477 101567 watchman         -                mi_switch+0xd2 turnstile_wait+0x408 __mtx_lock_sleep+0x13d knote_fdclose+0xb2 closefp+0x40 amd64_syscall+0x4ce Xfast_syscall+0xfb
56477 101568 watchman         -                mi_switch+0xd2 sleepq_wait+0x3a _sx_slock_hard+0x325 kern_poll+0x51e sys_poll+0x61 amd64_syscall+0x4ce Xfast_syscall+0xfb
 1271 100681 watchman         -                mi_switch+0xd2 sleepq_catch_signals+0xb7 sleepq_timedwait_sig+0x10 _sleep+0x26f kern_nanosleep+0x10e sys_nanosleep+0x53 amd64_syscall+0x4ce Xfast_syscall+0xfb
 1271 100683 watchman         -                mi_switch+0xd2 sleepq_catch_signals+0xb7 sleepq_wait_sig+0xf _sleep+0x294 umtxq_sleep+0x122 do_lock_umutex+0x174f __umtx_op_wait_umutex+0x72 amd64_syscall+0x4ce Xfast_syscall+0xfb
 1271 100684 watchman         -                mi_switch+0xd2 sleepq_catch_signals+0xb7 sleepq_wait_sig+0xf _sleep+0x294 fifo_open+0x244 VOP_OPEN_APV+0x89 vn_open_vnode+0x247 vn_open_cred+0x33e kern_openat+0x25c amd64_syscall+0x4ce Xfast_syscall+0xfb
 1271 100685 watchman         -                mi_switch+0xd2 sleepq_catch_signals+0xb7 sleepq_wait_sig+0xf _sleep+0x294 umtxq_sleep+0x122 do_lock_umutex+0x174f __umtx_op_wait_umutex+0x72 amd64_syscall+0x4ce Xfast_syscall+0xfb
 1271 100724 watchman         -                mi_switch+0xd2 sleepq_catch_signals+0xb7 sleepq_wait_sig+0xf _sleep+0x294 umtxq_sleep+0x122 do_lock_umutex+0x174f __umtx_op_wait_umutex+0x72 amd64_syscall+0x4ce Xfast_syscall+0xfb
 1271 100763 watchman         -                mi_switch+0xd2 sleepq_catch_signals+0xb7 sleepq_wait_sig+0xf _sleep+0x294 umtxq_sleep+0x122 do_lock_umutex+0x174f __umtx_op_wait_umutex+0x72 amd64_syscall+0x4ce Xfast_syscall+0xfb
 1271 100801 watchman         -                mi_switch+0xd2 sleepq_catch_signals+0xb7 sleepq_wait_sig+0xf _sleep+0x294 umtxq_sleep+0x122 do_lock_umutex+0x174f __umtx_op_wait_umutex+0x72 amd64_syscall+0x4ce Xfast_syscall+0xfb
 1271 100884 watchman         -                mi_switch+0xd2 sleepq_catch_signals+0xb7 sleepq_wait_sig+0xf _sleep+0x294 umtxq_sleep+0x122 do_lock_umutex+0x174f __umtx_op_wait_umutex+0x72 amd64_syscall+0x4ce Xfast_syscall+0xfb
 1271 100919 watchman         -                mi_switch+0xd2 sleepq_catch_signals+0xb7 sleepq_wait_sig+0xf _sleep+0x294 umtxq_sleep+0x122 do_lock_umutex+0x174f __umtx_op_wait_umutex+0x72 amd64_syscall+0x4ce Xfast_syscall+0xfb
 1271 100945 watchman         -                mi_switch+0xd2 sleepq_catch_signals+0xb7 sleepq_wait_sig+0xf _sleep+0x294 umtxq_sleep+0x122 do_lock_umutex+0x174f __umtx_op_wait_umutex+0x72 amd64_syscall+0x4ce Xfast_syscall+0xfb
Comment 6 yjh0502 2016-12-15 08:34:59 UTC
I dumped `procstat -kk` output for all processes:

https://gist.github.com/yjh0502/9aeb1001616ca424bd86df458ed52d19
Comment 7 Konstantin Belousov freebsd_committer freebsd_triage 2016-12-15 19:00:28 UTC
(In reply to yjh0502 from comment #6)
Yes, there is definitely a kernel problem.

Please enable debugging in the kernel, following https://www.freebsd.org/doc/en_US.ISO8859-1/books/developers-handbook/kerneldebug-deadlocks.html.
After reproducing the problem and obtaining the information listed there, please dump core.  I will ask for additional information after that.
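For reference, the options that handbook page asks for amount roughly to the following kernel config additions (a sketch only; the list in the handbook is authoritative):

makeoptions DEBUG=-g            # build the kernel with debug symbols
options     KDB                 # kernel debugger framework
options     DDB                 # in-kernel interactive debugger
options     INVARIANTS          # extra runtime consistency checks
options     INVARIANT_SUPPORT
options     WITNESS             # lock order verification
options     WITNESS_SKIPSPIN
options     DEBUG_LOCKS
options     DEBUG_VFS_LOCKS
options     DIAGNOSTIC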
Comment 8 yjh0502 2016-12-16 18:01:07 UTC
I got a coredump and backtrace:

root@foo:/usr/obj/usr/src/sys/DEBUG # kgdb kernel.debug /var/crash/vmcore.0
GNU gdb 6.1.1 [FreeBSD]
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "amd64-marcel-freebsd"...

Unread portion of the kernel message buffer:
panic: Bad link elm 0xfffff800158d0780 next->prev != elm
cpuid = 1
KDB: stack backtrace:
#0 0xffffffff80a80947 at kdb_backtrace+0x67
#1 0xffffffff80a3fb52 at vpanic+0x182
#2 0xffffffff80a3fbd3 at panic+0x43
#3 0xffffffff809f9d1a at knote_drop+0x28a
#4 0xffffffff809f9df3 at knote_fdclose+0xd3
#5 0xffffffff809eda25 at closefp+0x65
#6 0xffffffff80ea4426 at amd64_syscall+0x2f6
#7 0xffffffff80e8469b at Xfast_syscall+0xfb
Uptime: 4m58s
Dumping 227 out of 988 MB: (CTRL-C to abort) ..8%..15%..22%..36%..43%..57%..64%..71%..85%..92%

Reading symbols from /boot/kernel/zfs.ko...Reading symbols from /usr/lib/debug//boot/kernel/zfs.ko.debug...done.
done.
Loaded symbols for /boot/kernel/zfs.ko
Reading symbols from /boot/kernel/opensolaris.ko...Reading symbols from /usr/lib/debug//boot/kernel/opensolaris.ko.debug...done.
done.
Loaded symbols for /boot/kernel/opensolaris.ko
Reading symbols from /boot/kernel/uhid.ko...Reading symbols from /usr/lib/debug//boot/kernel/uhid.ko.debug...done.
done.
Loaded symbols for /boot/kernel/uhid.ko
#0  doadump (textdump=<value optimized out>) at pcpu.h:221
221             __asm("movq %%gs:%1,%0" : "=r" (td)
(kgdb) bt
#0  doadump (textdump=<value optimized out>) at pcpu.h:221
#1  0xffffffff80a3f5d0 in kern_reboot (howto=<value optimized out>) at /usr/src/sys/kern/kern_shutdown.c:366
#2  0xffffffff80a3fb8b in vpanic (fmt=<value optimized out>, ap=<value optimized out>) at /usr/src/sys/kern/kern_shutdown.c:759
#3  0xffffffff80a3fbd3 in panic (fmt=0x0) at /usr/src/sys/kern/kern_shutdown.c:690
#4  0xffffffff809f9d1a in knote_drop (kn=<value optimized out>, td=<value optimized out>) at /usr/src/sys/kern/kern_event.c:2461
#5  0xffffffff809f9df3 in knote_fdclose (td=0xfffff80026a38a00, fd=232) at /usr/src/sys/kern/kern_event.c:2370
#6  0xffffffff809eda25 in closefp (fdp=0xfffff800079a8450, fd=232, fp=0xfffff80003fe1460, td=0xfffff80026a38a00, holdleaders=1) at /usr/src/sys/kern/kern_descrip.c:1180
#7  0xffffffff80ea4426 in amd64_syscall (td=0xfffff80026a38a00, traced=<value optimized out>) at subr_syscall.c:135
#8  0xffffffff80e8469b in Xfast_syscall () at /usr/src/sys/amd64/amd64/exception.S:396
#9  0x000000080104a7ca in ?? ()
Previous frame inner to this frame (corrupt stack?)
Current language:  auto; currently minimal
(kgdb)

I compiled the kernel from the git repo, releng/11.0 branch (b416700da72fd8eab0a75d60f0634e254cbb9347).
Comment 9 Konstantin Belousov freebsd_committer freebsd_triage 2016-12-16 18:15:36 UTC
(In reply to yjh0502 from comment #8)
There is a lot more information needed, which is only easily produced with ddb; it is too laborious to extract it from the core.  Please re-read the instructions about which ddb command output is required.
Comment 10 yjh0502 2016-12-16 19:05:59 UTC
(In reply to Konstantin Belousov from comment #9)

I got a DDB shell, here's a backtrace.

db> bt
Tracing pid 764 tid 100622 td 0xfffff80007ccda00
kdb_enter() at kdb_enter+0x3b/frame 0xfffffe004f1757e0
vpanic() at vpanic+0x19f/frame 0xfffffe004f175860
panic() at panic+0x43/frame 0xfffffe004f1758c0
knote_drop() at knote_drop+0x28a/frame 0xfffffe004f175900
knote_fdclose() at knote_fdclose+0xd3/frame 0xfffffe004f175950
closefp() at closefp+0x65/frame 0xfffffe004f1759a0
amd64_syscall() at amd64_syscall+0x2f6/frame 0xfffffe004f175ab0
Xfast_syscall() at Xfast_syscall+0xfb/frame 0xfffffe004f175ab0
--- syscall (6, FreeBSD ELF64, sys_close), rip = 0x80104a7ca, rsp = 0x7fffdf9f9b58, rbp = 0x7fffdf9f9b70 ---
db>
Comment 11 yjh0502 2016-12-16 19:13:57 UTC
Here's output from ddb commands.

https://gist.github.com/anonymous/e9612b3238005b3fc89416971d4f9efc
Comment 12 yjh0502 2016-12-16 19:16:34 UTC
Traces for the contended threads, 1118 (rm) and 764 (watchman):

https://gist.github.com/anonymous/f7552caf4d11695b0549cbba635c4b32
Comment 13 Konstantin Belousov freebsd_committer freebsd_triage 2016-12-16 19:50:13 UTC
(In reply to yjh0502 from comment #10)
I do not understand this.  Your backtrace indicates that there was a panic?  Then where are the panic message and the whole console log?

For comment 12 with the backtraces: process 764 must have more than one thread, and I need to see the backtraces of all of them.
Comment 14 yjh0502 2016-12-17 02:32:59 UTC
(In reply to Konstantin Belousov from comment #13)

Sorry for the confusion. Here are the stacks for the two problematic threads.

db>
Tracing pid 764 tid 100622 td 0xfffff80007ccda00
kdb_enter() at kdb_enter+0x3b/frame 0xfffffe004f1757e0
vpanic() at vpanic+0x19f/frame 0xfffffe004f175860
panic() at panic+0x43/frame 0xfffffe004f1758c0
knote_drop() at knote_drop+0x28a/frame 0xfffffe004f175900
knote_fdclose() at knote_fdclose+0xd3/frame 0xfffffe004f175950
closefp() at closefp+0x65/frame 0xfffffe004f1759a0
amd64_syscall() at amd64_syscall+0x2f6/frame 0xfffffe004f175ab0
Xfast_syscall() at Xfast_syscall+0xfb/frame 0xfffffe004f175ab0
--- syscall (6, FreeBSD ELF64, sys_close), rip = 0x80104a7ca, rsp = 0x7fffdf9f9b58, rbp = 0x7fffdf9f9b70 ---

db> trace 1118
Tracing pid 1118 tid 100634 td 0xfffff8000c23b500
cpustop_handler() at cpustop_handler+0x28/frame 0xffffffff81d78d30
ipi_nmi_handler() at ipi_nmi_handler+0x4a/frame 0xffffffff81d78d50
trap() at trap+0x3a/frame 0xffffffff81d78f60
nmi_calltrap() at nmi_calltrap+0x8/frame 0xffffffff81d78f60
--- trap 0x13, rip = 0xffffffff80a90104, rsp = 0xfffffe004f1ac6f0, rbp = 0xfffffe004f1ac710 ---
lock_delay() at lock_delay+0x54/frame 0xfffffe004f1ac710
__mtx_lock_sleep() at __mtx_lock_sleep+0x1b8/frame 0xfffffe004f1ac790
__mtx_lock_flags() at __mtx_lock_flags+0x10d/frame 0xfffffe004f1ac7e0
knote() at knote+0x85/frame 0xfffffe004f1ac840
VOP_READDIR_APV() at VOP_READDIR_APV+0x150/frame 0xfffffe004f1ac870
kern_getdirentries() at kern_getdirentries+0x224/frame 0xfffffe004f1ac970
sys_getdirentries() at sys_getdirentries+0x28/frame 0xfffffe004f1ac9a0
amd64_syscall() at amd64_syscall+0x2f6/frame 0xfffffe004f1acab0
Xfast_syscall() at Xfast_syscall+0xfb/frame 0xfffffe004f1acab0
--- syscall (196, FreeBSD ELF64, sys_getdirentries), rip = 0x80096c69a, rsp = 0x7fffffffe778, rbp = 0x7fffffffe7b0 ---
Comment 15 Konstantin Belousov freebsd_committer freebsd_triage 2016-12-17 17:25:05 UTC
(In reply to yjh0502 from comment #14)

I already asked you:
(In reply to yjh0502 from comment #10)
I do not understand this.  Your backtrace indicates that there was a panic?  Then where are the panic message and the whole console log?

If there was a panic, provide that data.
Comment 16 yjh0502 2016-12-18 02:58:46 UTC
(In reply to Konstantin Belousov from comment #15)

Does 'panic data' mean the ddb messages on panic? The panic message is:

panic: Bad link elm 0xfffff8003be59b00 next->prev != elm

I reproduced the problem with the same commands (start watchman and repeatedly clone/delete git repos), but this time I got slightly different traces. The message above is from the new trace, and the ddb output corresponding to it is at the link below.

https://gist.github.com/anonymous/bae4d503e20c5ab00ff1e513c523c9f5

Please let me know if I'm misunderstanding something.
Comment 17 Konstantin Belousov freebsd_committer freebsd_triage 2016-12-19 12:01:18 UTC
(In reply to yjh0502 from comment #16)
You did everything right, and there is useful data there.

Is it true that with the INVARIANTS/DIAGNOSTIC options added, your hangs turned into this panic?

Do you have the core from this panic?  I want to see the printout of *kn from the knote_drop() frame in kgdb.

Also, could you provide the binary and a recipe to reproduce the panic?  It might be simpler to proceed if I can re-create this locally.
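(A minimal kgdb sketch of what is being asked for; the frame number is hypothetical and depends on the actual backtrace:)

# kgdb kernel.debug /var/crash/vmcore.0
(kgdb) bt                # locate the knote_drop() frame in the backtrace
(kgdb) frame 4           # select it; the number varies with the trace
(kgdb) print *kn         # dump the struct knote being dropped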
Comment 18 yjh0502 2016-12-19 14:54:11 UTC
(In reply to Konstantin Belousov from comment #17)

Yes, I believe that both the hangs and the panic are caused by the same problem, as both symptoms can be reproduced by the same procedure.

I made a simple script that reproduces the problem on both the GENERIC and debug kernels, on 10.3 and 11.0. I reproduced and observed the problem by:
 - running 'top' on the console and starting the script below via ssh, or
 - running 'top' and the script inside 'tmux'.

I can reproduce the problem almost instantly after running the script. If you cannot reproduce it, try re-running the script several times after cleaning up processes with 'pkill watchman xargs'.

The 'watchman' command can be installed with 'pkg'.

https://gist.github.com/yjh0502/1355939dc89a055029d1161e5501a50a

#!/usr/bin/env bash

# Create two directories and populate one with small C files; watchman
# will be pointed at the current directory.
mkdir -p foo bar
seq -w 0 100 | xargs -n1 -I{} touch foo/{}.c

# Subscribe to changes of non-empty *.c files through a persistent watchman session.
echo '["subscribe","./","mysub",{"fields":["name"],"expression":["allof",["type","f"],["not","empty"],["suffix","c"]]}]' | watchman -p -j --server-encoding=json > /dev/null &
# Shuffle the files back and forth between the two directories in parallel,
# generating a constant stream of vnode events for the watch.
while true; do find bar/ -type f | xargs -n1 -P5 -I'{}' mv '{}' foo; done &
while true; do find foo/ -type f | xargs -n1 -P5 -I'{}' mv '{}' bar; done &

wait
Comment 19 Konstantin Belousov freebsd_committer freebsd_triage 2016-12-19 20:56:00 UTC
Created attachment 178123 [details]
Do not clear KN_INFLUX when note is in flux during scan.

The patch fixed the problem in the testing environment.  Please check that your original problem is gone as well.
Comment 20 commit-hook freebsd_committer freebsd_triage 2016-12-19 22:18:59 UTC
A commit references this bug:

Author: kib
Date: Mon Dec 19 22:18:36 UTC 2016
New revision: 310302
URL: https://svnweb.freebsd.org/changeset/base/310302

Log:
  Do not clear KN_INFLUX when not owning influx state.

  For notes in KN_INFLUX|KN_SCAN state, the influx bit is set by a
  parallel scan.  When knote() reports event for the vnode filters,
  which require kqueue unlocked, it unconditionally sets and then clears
  influx to keep note around kqueue unlock.  There, do not clear influx
  flag if a scan set it, since we do not own it, instead we prevent scan
  from executing by holding knlist lock.

  The knote_fork() function has somewhat similar problem, it might set
  KN_INFLUX for scanned note, drop kqueue and list locks, and then clear
  the flag after relock.  A solution there would be different enough, as
  well as the test program, so close the reported issue first.

  Reported and test case provided by:	yjh0502@gmail.com
  PR:	214923
  Tested by:	pho
  Sponsored by:	The FreeBSD Foundation
  MFC after:	1 week

Changes:
  head/sys/kern/kern_event.c
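To illustrate the mechanism the log describes, here is a simplified, user-space sketch of the flag handling; all identifiers below are illustrative and are not the actual kern_event.c code or locking:

/*
 * Simplified model of the race fixed by r310302.
 */
#define KN_INFLUX 0x01  /* note is being processed; others must not touch it */
#define KN_SCAN   0x02  /* the INFLUX bit was set by a parallel kqueue scan */

struct note {
	int status;
};

/*
 * Buggy pattern: the vnode filter path sets and then clears KN_INFLUX
 * unconditionally around the kqueue unlock.  If a parallel scan already
 * owns the influx state (KN_INFLUX|KN_SCAN), the final clear strips the
 * scan's flag and corrupts its view of the note.
 */
static void
post_event_buggy(struct note *kn)
{
	kn->status |= KN_INFLUX;
	/* ... unlock kqueue, deliver the event, relock ... */
	kn->status &= ~KN_INFLUX;
}

/*
 * Fixed pattern: clear KN_INFLUX only if we set it ourselves.  When a
 * scan owns it, leave the flag alone; the scan is kept from running by
 * holding the knlist lock across the delivery instead.
 */
static void
post_event_fixed(struct note *kn)
{
	int we_set_influx = 0;

	if ((kn->status & (KN_INFLUX | KN_SCAN)) != (KN_INFLUX | KN_SCAN)) {
		kn->status |= KN_INFLUX;
		we_set_influx = 1;
	}
	/* ... deliver the event; the knlist lock stays held ... */
	if (we_set_influx)
		kn->status &= ~KN_INFLUX;
}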
Comment 21 yjh0502 2016-12-20 05:00:33 UTC
(In reply to Konstantin Belousov from comment #19)

The problem seems to be solved, thanks!
Comment 22 Konstantin Belousov freebsd_committer freebsd_triage 2016-12-20 09:45:31 UTC
(In reply to yjh0502 from comment #21)
More specifically, I mean: did you test your setup with the patched non-debugging kernel?
Comment 23 yjh0502 2016-12-21 09:40:33 UTC
(In reply to Konstantin Belousov from comment #22)

Sorry for my late reply. I am stress-testing the GENERIC kernel (without debugging options) with your patch, using my workload (which is quite similar to the test case), and so far I cannot reproduce the problem.

Thanks.
Comment 24 commit-hook freebsd_committer freebsd_triage 2016-12-21 10:00:32 UTC
A commit references this bug:

Author: pho
Date: Wed Dec 21 09:59:55 UTC 2016
New revision: 310362
URL: https://svnweb.freebsd.org/changeset/base/310362

Log:
  Added a regression test.

  PR:		214923
  Submitted by:	Jihyun Yu <yjh0502@gmail.com>
  Sponsored by:	Dell EMC Isilon

Changes:
  user/pho/stress2/misc/watchman.sh
Comment 25 commit-hook freebsd_committer freebsd_triage 2016-12-26 09:53:06 UTC
A commit references this bug:

Author: kib
Date: Mon Dec 26 09:52:19 UTC 2016
New revision: 310578
URL: https://svnweb.freebsd.org/changeset/base/310578

Log:
  MFC r310302:
  Do not clear KN_INFLUX when not owning influx state.

  PR:	214923

Changes:
_U  stable/11/
  stable/11/sys/kern/kern_event.c
Comment 26 commit-hook freebsd_committer freebsd_triage 2016-12-26 10:14:26 UTC
A commit references this bug:

Author: kib
Date: Mon Dec 26 10:13:53 UTC 2016
New revision: 310584
URL: https://svnweb.freebsd.org/changeset/base/310584

Log:
  MFC r310302:
  Do not clear KN_INFLUX when not owning influx state.

  PR:	214923

Changes:
_U  stable/10/
  stable/10/sys/kern/kern_event.c
Comment 27 Emilio Cobos Álvarez 2017-02-13 16:29:36 UTC
Thanks for fixing this. I just ran into it today on 11, and it works great after an update :)