Using the latest (non-buggy) Ryzens, compilation of certain ports leads to a hang. I have observed this on 11.1 as well as CURRENT as of r328611. An easy way to trigger the deadlock / hang is to compile net/samba47. Others see this bug too: https://lists.freebsd.org/pipermail/freebsd-stable/2018-January/088296.html

The hang will happen at random places in the compile:

ges/vfs_aio_linux.8.xml true --nonet -o default/docs-xml/manpages/vfs_aio_linux.8 /usr/ports/net/samba47/work/samba-4.7.4/docs-xml/xslt/man.xsl default/docs-xml/manpages/vfs_aio_linux.8.xml
[3749/3804] Generating manpages/vfs_audit.8
[3750/3804] Generating manpages/vfs_btrfs.8
[3751/3804] Generating manpages/vfs_cacheprime.8
runner XML_CATALOG_FILES="file:///etc/xml/catalog file:///usr/local/share/xml/catalog file:///usr/ports/net/samba47/work/samba-4.7.4/bin/default/docs-xml/build/catalog.xml" export XML_CATALOG_FILES
true --xinclude --stringparam noreference 0 -o default/docs-xml/manpages/vfs_audit.8.xml --nonet /usr/ports/net/samba47/work/samba-4.7.4/docs-xml/xslt/expand-sambadoc.xsl /usr/ports/net/samba47/work/samba-4.7.4/docs-xml/manpages/vfs_audit.8.xml
true --nonet -o default/docs-xml/manpages/vfs_audit.8 /usr/ports/net/samba47/work/samba-4.7.4/docs-xml/xslt/man.xsl default/docs-xml/manpages/vfs_audit.8.xml
load: 0.04  cmd: python2.7 64178 [usem] 2644.74r 31.01u 70.09s 0% 142756k
make: Working in: /usr/ports/net/samba47
make[1]: Working in: /usr/ports/net/samba47

Occasionally, by loading up the CPU with other tasks, the build can sometimes complete. In the hang, procstat shows:

root@amdtestr12:/home/mdtancsa # procstat -t 64178
  PID    TID COMM                TDNAME    CPU  PRI STATE   WCHAN
64178 100793 python2.7           -          -1  152 sleep   usem
64178 101077 python2.7           -          -1  121 sleep   usem
64178 101078 python2.7           -          -1  121 sleep   usem
64178 101079 python2.7           -          -1  121 sleep   usem
64178 101080 python2.7           -          -1  122 sleep   usem
64178 101081 python2.7           -          -1  121 sleep   usem
64178 101082 python2.7           -          -1  121 sleep   usem
64178 101083 python2.7           -          -1  121 sleep   umtxn
64178 101084 python2.7           -          -1  121 sleep   usem
64178 101085 python2.7           -          -1  122 sleep   usem
64178 101086 python2.7           -          -1  121 sleep   usem
64178 101087 python2.7           -          -1  121 sleep   usem
64178 101088 python2.7           -          -1  121 sleep   usem

root@amdtestr12:/home/mdtancsa # procstat -k 64178
  PID    TID COMM                TDNAME    KSTACK
64178 100793 python2.7           -         mi_switch sleepq_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_sem2_wait __umtx_op_sem2_wait amd64_syscall fast_syscall_common
64178 101077 python2.7           -         mi_switch sleepq_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_sem2_wait __umtx_op_sem2_wait amd64_syscall fast_syscall_common
64178 101078 python2.7           -         mi_switch sleepq_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_sem2_wait __umtx_op_sem2_wait amd64_syscall fast_syscall_common
64178 101079 python2.7           -         mi_switch sleepq_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_sem2_wait __umtx_op_sem2_wait amd64_syscall fast_syscall_common
64178 101080 python2.7           -         mi_switch sleepq_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_sem2_wait __umtx_op_sem2_wait amd64_syscall fast_syscall_common
64178 101081 python2.7           -         mi_switch sleepq_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_sem2_wait __umtx_op_sem2_wait amd64_syscall fast_syscall_common
64178 101082 python2.7           -         mi_switch sleepq_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_sem2_wait __umtx_op_sem2_wait amd64_syscall fast_syscall_common
64178 101083 python2.7           -         mi_switch sleepq_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_lock_umutex __umtx_op_wait_umutex amd64_syscall fast_syscall_common
64178 101084 python2.7           -         mi_switch sleepq_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_sem2_wait __umtx_op_sem2_wait amd64_syscall fast_syscall_common
64178 101085 python2.7           -         mi_switch sleepq_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_sem2_wait __umtx_op_sem2_wait amd64_syscall fast_syscall_common
64178 101086 python2.7           -         mi_switch sleepq_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_sem2_wait __umtx_op_sem2_wait amd64_syscall fast_syscall_common
64178 101087 python2.7           -         mi_switch sleepq_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_sem2_wait __umtx_op_sem2_wait amd64_syscall fast_syscall_common
64178 101088 python2.7           -         mi_switch sleepq_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_sem2_wait __umtx_op_sem2_wait amd64_syscall fast_syscall_common

root@amdtestr12:/home/mdtancsa # procstat -kk 64178
  PID    TID COMM                TDNAME    KSTACK
64178 100793 python2.7           -         mi_switch+0x18c sleepq_switch+0x10d sleepq_catch_signals+0x300 sleepq_wait_sig+0xf _sleep+0x359 umtxq_sleep+0x163 do_sem2_wait+0x426 __umtx_op_sem2_wait+0x4b amd64_syscall+0x79b fast_syscall_common+0xfc
64178 101077 python2.7           -         mi_switch+0x18c sleepq_switch+0x10d sleepq_catch_signals+0x300 sleepq_wait_sig+0xf _sleep+0x359 umtxq_sleep+0x163 do_sem2_wait+0x426 __umtx_op_sem2_wait+0x4b amd64_syscall+0x79b fast_syscall_common+0xfc
64178 101078 python2.7           -         mi_switch+0x18c sleepq_switch+0x10d sleepq_catch_signals+0x300 sleepq_wait_sig+0xf _sleep+0x359 umtxq_sleep+0x163 do_sem2_wait+0x426 __umtx_op_sem2_wait+0x4b amd64_syscall+0x79b fast_syscall_common+0xfc
64178 101079 python2.7           -         mi_switch+0x18c sleepq_switch+0x10d sleepq_catch_signals+0x300 sleepq_wait_sig+0xf _sleep+0x359 umtxq_sleep+0x163 do_sem2_wait+0x426 __umtx_op_sem2_wait+0x4b amd64_syscall+0x79b fast_syscall_common+0xfc
64178 101080 python2.7           -         mi_switch+0x18c sleepq_switch+0x10d sleepq_catch_signals+0x300 sleepq_wait_sig+0xf _sleep+0x359 umtxq_sleep+0x163 do_sem2_wait+0x426 __umtx_op_sem2_wait+0x4b amd64_syscall+0x79b fast_syscall_common+0xfc
64178 101081 python2.7           -         mi_switch+0x18c sleepq_switch+0x10d sleepq_catch_signals+0x300 sleepq_wait_sig+0xf _sleep+0x359 umtxq_sleep+0x163 do_sem2_wait+0x426 __umtx_op_sem2_wait+0x4b amd64_syscall+0x79b fast_syscall_common+0xfc
64178 101082 python2.7           -         mi_switch+0x18c sleepq_switch+0x10d sleepq_catch_signals+0x300 sleepq_wait_sig+0xf _sleep+0x359 umtxq_sleep+0x163 do_sem2_wait+0x426 __umtx_op_sem2_wait+0x4b amd64_syscall+0x79b fast_syscall_common+0xfc
64178 101083 python2.7           -         mi_switch+0x18c sleepq_switch+0x10d sleepq_catch_signals+0x300 sleepq_wait_sig+0xf _sleep+0x359 umtxq_sleep+0x163 do_lock_umutex+0x6e3 __umtx_op_wait_umutex+0x48 amd64_syscall+0x79b fast_syscall_common+0xfc
64178 101084 python2.7           -         mi_switch+0x18c sleepq_switch+0x10d sleepq_catch_signals+0x300 sleepq_wait_sig+0xf _sleep+0x359 umtxq_sleep+0x163 do_sem2_wait+0x426 __umtx_op_sem2_wait+0x4b amd64_syscall+0x79b fast_syscall_common+0xfc
64178 101085 python2.7           -         mi_switch+0x18c sleepq_switch+0x10d sleepq_catch_signals+0x300 sleepq_wait_sig+0xf _sleep+0x359 umtxq_sleep+0x163 do_sem2_wait+0x426 __umtx_op_sem2_wait+0x4b amd64_syscall+0x79b fast_syscall_common+0xfc
64178 101086 python2.7           -         mi_switch+0x18c sleepq_switch+0x10d sleepq_catch_signals+0x300 sleepq_wait_sig+0xf _sleep+0x359 umtxq_sleep+0x163 do_sem2_wait+0x426 __umtx_op_sem2_wait+0x4b amd64_syscall+0x79b fast_syscall_common+0xfc
64178 101087 python2.7           -         mi_switch+0x18c sleepq_switch+0x10d sleepq_catch_signals+0x300 sleepq_wait_sig+0xf _sleep+0x359 umtxq_sleep+0x163 do_sem2_wait+0x426 __umtx_op_sem2_wait+0x4b amd64_syscall+0x79b fast_syscall_common+0xfc
64178 101088 python2.7           -         mi_switch+0x18c sleepq_switch+0x10d sleepq_catch_signals+0x300 sleepq_wait_sig+0xf _sleep+0x359 umtxq_sleep+0x163 do_sem2_wait+0x426 __umtx_op_sem2_wait+0x4b amd64_syscall+0x79b fast_syscall_common+0xfc

FreeBSD amdtestr12.sentex.ca 12.0-CURRENT FreeBSD 12.0-CURRENT #0 r328611

CPU: AMD Ryzen 5 1600X Six-Core Processor            (3593.36-MHz K8-class CPU)
  Origin="AuthenticAMD"  Id=0x800f11  Family=0x17  Model=0x1  Stepping=1
  Features=0x178bfbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CLFLUSH,MMX,FXSR,SSE,SSE2,HTT>
  Features2=0x7ed8320b<SSE3,PCLMULQDQ,MON,SSSE3,FMA,CX16,SSE4.1,SSE4.2,MOVBE,POPCNT,AESNI,XSAVE,OSXSAVE,AVX,F16C,RDRAND>
  AMD Features=0x2e500800<SYSCALL,NX,MMX+,FFXSR,Page1GB,RDTSCP,LM>
  AMD Features2=0x35c233ff<LAHF,CMP,SVM,ExtAPIC,CR8,ABM,SSE4A,MAS,Prefetch,OSVW,SKINIT,WDT,TCE,Topology,PCXC,PNXC,DBE,PL2I,MWAITX>
  Structured Extended Features=0x209c01a9<FSGSBASE,BMI1,AVX2,SMEP,BMI2,RDSEED,ADX,SMAP,CLFLUSHOPT,SHA>
  XSAVE Features=0xf<XSAVEOPT,XSAVEC,XINUSE,XSAVES>
  AMD Extended Feature Extensions ID EBX=0x7<CLZERO,IRPerf,XSaveErPtr>
  SVM: (disabled in BIOS) NP,NRIP,VClean,AFlush,DAssist,NAsids=32768
TSC: P-state invariant, performance statistics
Also tried with kern.eventtimer.periodic=1, but it made no difference that I could tell; the process hung as usual.
Tried with the 4BSD scheduler as well:

# sysctl -a kern.sched.name
kern.sched.name: 4BSD

but no obvious difference.
I have reproduced this on my Threadripper, both with SMT enabled and with SMT disabled in the BIOS.
I tried pretty much every combination on my Ryzen board as well: SMT on/off, no HT, cores only, etc., with no change.
(In reply to Ed Maste from comment #3) There is a report of hangs during builds of lang/ruby24 on RPi3s under head -r327788; see: https://lists.freebsd.org/pipermail/freebsd-arm/2018-February/017445.html
I have no evidence whether the root cause(s) are the same or independent.
(In reply to Ed Maste from comment #3) What's nice about this problem is that it is debuggable. The build hangs, but the system doesn't. So it should be possible to examine the system state, and maybe to trace some things before the problem strikes. I mean 'nice' in contrast to other (mostly earlier) problems on Ryzen, where the hardware would just shut off and leave you with no clues.
Could you please re-do your test using a kernel with the following options added? Be ready for a panic, though, and be prepared to collect a crash dump:

options KDB
options KDB_UNATTENDED
options KDB_TRACE
options DDB
options GDB
options INVARIANTS
options INVARIANT_SUPPORT
options WITNESS
options WITNESS_SKIPSPIN
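(For anyone following along, a sketch of one way to wire those options in. The config name DEBUG is a placeholder, though it matches the KERNCONF used later in this thread; per the follow-up below, on head most of these options are already in GENERIC, so only the extras need adding, while on a stable branch you would list the full set explicitly:)

# /usr/src/sys/amd64/conf/DEBUG (hypothetical name)
include GENERIC
ident   DEBUG
options KDB_UNATTENDED

# cd /usr/src
# make buildkernel KERNCONF=DEBUG && make installkernel KERNCONF=DEBUG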
I got a different example: Threadripper 1950X context, head -r327485, doing an amd64 -> aarch64 cross build of ports: hung up in uwait.

The context is also: running under Hyper-V (from a Windows 10 Pro context). The context has a non-debug kernel (my usual). poudriere-devel's jail was built with "-a arm64.aarch64 -x" in use.

Part of the reason for reporting: it indicates that the problem is not all that recent relative to FreeBSD head versions. Also, the under-Hyper-V context is likely unusual and so might give someone knowledgeable some context, based on its distinctions not preventing the issue.

This is the first such hangup that I've seen, and I've been running -r327485 for some time. I've done several updates to port builds for native (amd64), armv7, and aarch64 under this version of head.

Top details for processes associated with the poudriere-devel run:

  PID USERNAME    THR PRI NICE   SIZE    RES   SWAP STATE   C   TIME    CPU COMMAND
73211 root          3  52    0   270M 39648K     0K uwait  23   0:00  0.00% [qemu-aarch64-static]
73210 root          2  52    0   243M 31024K     0K wait   16   0:00  0.00% /usr/local/bin/qemu-aarch64-static clang++ -o ipc_sysv main.o -L/usr/local/lib
73088 root          1  20    0  9984K  1280K     0K wait    5   0:00  0.00% make
64988 root          2  20    0   257M 38720K     0K piperd  1   0:07  0.00% [qemu-aarch64-static]
64487 root          1  52    0 10544K  1848K     0K wait    2   0:00  0.00% [sh]
64481 root          1  52    0 10192K  1760K     0K wait   27   0:00  0.00% [sh]
64432 root          1  20    0 10380K  1804K     0K wait   19   0:00  0.00% /usr/bin/make -C /usr/ports/x11-toolkits/qt5-gui configure
64431 root          1  44    0 12824K  3760K     0K wait   18   0:00  0.00% sh: poudriere[FBSDFSSDjailCortexA53-default][02]: build_pkg (qt5-gui-5.9.4_1) (sh)
50049 root          1  20    0 12824K  3768K     0K select 21   0:00  0.00% sh: poudriere[FBSDFSSDjailCortexA53-default][02]: build_pkg (qt5-gui-5.9.4_1) (sh)
12174 root          1  42    0 12824K  3756K     0K piperd 21   0:01  0.00% sh -e /usr/local/share/poudriere/bulk.sh -jFBSDFSSDjailCortexA53 -w -f /root/cortexA53-origins.txt
94665 root          1  52    0 12824K  3712K     0K nanslp 20   0:27  0.77% sh -e /usr/local/share/poudriere/bulk.sh -jFBSDFSSDjailCortexA53 -w -f /root/cortexA53-origins.txt
94448 root          1  52    0 12824K  3752K     0K select 12   0:03  0.00% sh -e /usr/local/share/poudriere/bulk.sh -jFBSDFSSDjailCortexA53 -w -f /root/cortexA53-origins.txt
(In reply to Eugene Grosbein from comment #7) Hi, apart from KDB_UNATTENDED, these options are all part of a standard GENERIC kernel on HEAD, no? I added that as well as BREAK_TO_DEBUGGER. The kernel does not panic (nor has it ever): I start the compile, and the compile gets stuck during the build. Forcing it to go to the debugger:

telnet> send break
KDB: enter: Break to debugger
[ thread pid 11 tid 100003 ]
Stopped at      kdb_break+0x4d: movq    $0,kdb_why
db> ps
  pid  ppid  pgrp   uid  state wmesg      wchan               cmd
 5814  5813  5814     0  S+    ttyin      0xfffff8000510dcb0  csh
 5813  5807  5813  1001  S+    wait       0xfffff8018bbb5538  su
 5807  5806  5807  1001  Ss+   pause      0xfffff800165350a8  csh
 5806  5804  5804  1001  S     select     0xfffff80006746d40  sshd
 5804   985  5804     0  Ss    select     0xfffff800063222c0  sshd
 5772  5771  1076     0  S+    (threaded)                     python2.7
       100821           S     usem       0xfffff80016833b80  python2.7
       100852           S     umtxn      0xfffff80006864800  python2.7
       100853           S     umtxn      0xfffff80003debc80  python2.7
       100854           S     umtxn      0xfffff80006864600  python2.7
       100855           S     umtxn      0xfffff80006865280  python2.7
       100856           S     umtxn      0xfffff80006864400  python2.7
       100857           S     umtxn      0xfffff80006864300  python2.7
       100858           S     umtxn      0xfffff80006865380  python2.7
       100859           S     umtxn      0xfffff80003debd80  python2.7
       100860           S     umtxn      0xfffff80006864200  python2.7
       100861           S     umtxn      0xfffff80006864500  python2.7
       100862           S     umtxn      0xfffff80006865180  python2.7
       100863           S     umtxn      0xfffff80006864700  python2.7
 5771  1101  1076     0  S+    wait       0xfffff80016535a70  sh
 1101  1076  1076     0  S+    wait       0xfffff800064a0000  make
 1076  1070  1076     0  S+    wait       0xfffff80016e3b000  make
 1070  1069  1070     0  S+    pause      0xfffff800039025e0  csh
 1069  1066  1069  1001  S+    wait       0xfffff80016536a70  su
 1066  1065  1066  1001  Ss+   pause      0xfffff80016537b18  csh
 1065  1063  1063  1001  S     select     0xfffff800066ea7c0  sshd
 1063   985  1063     0  Ss    select     0xfffff800039d6f40  sshd
 1062     1  1062     0  Ss+   ttyin      0xfffff8000510e0b0  getty
 1061     1  1061     0  Ss+   ttyin      0xfffff800066c0cb0  getty
 1060     1  1060     0  Ss+   ttyin      0xfffff800066c10b0  getty
 1059     1  1059     0  Ss+   ttyin      0xfffff800066c14b0  getty
 1058     1  1058     0  Ss+   ttyin      0xfffff800066c18b0  getty
 1057     1  1057     0  Ss+   ttyin      0xfffff800066c1cb0  getty
 1056     1  1056     0  Ss+   ttyin      0xfffff800066c20b0  getty
 1055     1  1055     0  Ss+   ttyin      0xfffff800066c24b0  getty
 1054     1  1054     0  Ss+   ttyin      0xfffff800066c28b0  getty
  995     1   995     0  Ss    nanslp     0xffffffff81ceb8c0  cron
  991     1   991    25  Ss    pause      0xfffff80016b36b18  sendmail
  988     1   988     0  Ss    select     0xfffff800033b0bc0  sendmail
  985     1   985     0  Ss    select     0xfffff80003da9840  sshd
  860     1   860     0  Ss    select     0xfffff800039d7040  syslogd
  625     1   625     0  Ss    select     0xfffff80003da98c0  devd
  624     1   624    65  Ss    select     0xfffff80006738a40  dhclient
  580     1   580     0  Ss    select     0xfffff80003da9940  dhclient
  577     1   577     0  Ss    select     0xfffff800066ea840  dhclient
  141     1   141     0  Ss    pause      0xfffff80016df95e0  adjkerntz
   33     0     0     0  DL    syncer     0xffffffff81e90e40  [syncer]
   32     0     0     0  DL    vlruwt     0xfffff800066f6538  [vnlru]
   31     0     0     0  DL    psleep     0xffffffff81e90168  [bufdaemon]
   30     0     0     0  DL    -          0xffffffff81e9038c  [bufspacedaemon]
   29     0     0     0  DL    psleep     0xffffffff81e9c8e8  [vmdaemon]
   28     0     0     0  DL    (threaded)                     [pagedaemon]
       100177           D     psleep     0xffffffff81e98cec  [pagedaemon]
       100179           D     launds     0xffffffff81e98cf8  [laundry: dom0]
       100181           D     umarcl     0xffffffff80db7e90  [uma]
   27     0     0     0  DL    -          0xffffffff81bbe8a8  [rand_harvestq]
   26     0     0     0  DL    waiting    0xffffffff81f0a240  [sctp_iterator]
   25     0     0     0  DL    -          0xffffffff81e8fb64  [soaiod4]
   24     0     0     0  DL    -          0xffffffff81e8fb64  [soaiod3]
db> tr 5772
Tracing pid 5772 tid 100821 td 0xfffff8015cad1000
sched_switch() at sched_switch+0x4f5/frame 0xfffffe00a3dc06b0
mi_switch() at mi_switch+0x18c/frame 0xfffffe00a3dc06e0
sleepq_switch() at sleepq_switch+0x10d/frame 0xfffffe00a3dc0720
sleepq_catch_signals() at sleepq_catch_signals+0x300/frame 0xfffffe00a3dc0780
sleepq_wait_sig() at sleepq_wait_sig+0xf/frame 0xfffffe00a3dc07b0
_sleep() at _sleep+0x359/frame 0xfffffe00a3dc0860
umtxq_sleep() at umtxq_sleep+0x163/frame 0xfffffe00a3dc08c0
do_sem2_wait() at do_sem2_wait+0x426/frame 0xfffffe00a3dc0930
__umtx_op_sem2_wait() at __umtx_op_sem2_wait+0x4b/frame 0xfffffe00a3dc0980
amd64_syscall() at amd64_syscall+0x79b/frame 0xfffffe00a3dc0ab0
fast_syscall_common() at fast_syscall_common+0x101/frame 0x7fffffffb570
db> show lockchain 5772
thread 100821 (pid 5772, python2.7) blocked on lockmgr (null)EXCL
db>
db> show lockchain 100821
thread 100821 (pid 5772, python2.7) blocked on lockmgr (null)EXCL
db> show lockchain 100852
thread 100852 (pid 5772, python2.7) blocked on lockmgr EXCL
db> show lockchain 100853
thread 100853 (pid 5772, python2.7) blocked on lockmgr QEXCL
db> show lockchain 100855
thread 100855 (pid 5772, python2.7) blocked on lockmgr EXCL
db> show lockchain 100856
thread 100856 (pid 5772, python2.7) blocked on lockmgr EXCL
db>
db> show lock 0xfffff80016833b80
 class: lockmgr
 name: (null)
 state:KDB: reentering
KDB: stack backtrace:
db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame 0xfffffe0077b5a170
kdb_reenter() at kdb_reenter+0x2f/frame 0xfffffe0077b5a180
trap() at trap+0x5c/frame 0xfffffe0077b5a290
calltrap() at calltrap+0x8/frame 0xfffffe0077b5a290
--- trap 0xc, rip = 0xffffffff80a746c6, rsp = 0xfffffe0077b5a360, rbp = 0xfffffe0077b5a370 ---
db_show_lockmgr() at db_show_lockmgr+0x56/frame 0xfffffe0077b5a370
db_command() at db_command+0x279/frame 0xfffffe0077b5a440
db_command_loop() at db_command_loop+0x64/frame 0xfffffe0077b5a450
db_trap() at db_trap+0xef/frame 0xfffffe0077b5a4e0
kdb_trap() at kdb_trap+0x1c3/frame 0xfffffe0077b5a550
trap() at trap+0x328/frame 0xfffffe0077b5a660
calltrap() at calltrap+0x8/frame 0xfffffe0077b5a660
--- trap 0x3, rip = 0xffffffff80ae443d, rsp = 0xfffffe0077b5a730, rbp = 0xfffffe0077b5a740 ---
kdb_break() at kdb_break+0x4d/frame 0xfffffe0077b5a740
uart_intr() at uart_intr+0x105/frame 0xfffffe0077b5a780
intr_event_handle() at intr_event_handle+0xbc/frame 0xfffffe0077b5a7d0
intr_execute_handlers() at intr_execute_handlers+0x58/frame 0xfffffe0077b5a800
lapic_handle_intr() at lapic_handle_intr+0x6d/frame 0xfffffe0077b5a840
apic_isr1_u() at apic_isr1_u+0xc1/frame 0xfffffe0077b5a910
acpi_cpu_idle() at acpi_cpu_idle+0x2ee/frame 0xfffffe0077b5a960
cpu_idle_acpi() at cpu_idle_acpi+0x3f/frame 0xfffffe0077b5a980
cpu_idle() at cpu_idle+0x8f/frame 0xfffffe0077b5a9a0
sched_idletd() at sched_idletd+0x40a/frame 0xfffffe0077b5aa70
fork_exit() at fork_exit+0x84/frame 0xfffffe0077b5aab0
fork_trampoline() at fork_trampoline+0xe/frame 0xfffffe0077b5aab0
--- trap 0, rip = 0, rsp = 0, rbp = 0 ---
db> show lock 0xfffff80006864800
 class: lockmgr
 name:
 state:KDB: reentering
KDB: stack backtrace:
db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame 0xfffffe0077b5a170
kdb_reenter() at kdb_reenter+0x2f/frame 0xfffffe0077b5a180
trap() at trap+0x5c/frame 0xfffffe0077b5a290
calltrap() at calltrap+0x8/frame 0xfffffe0077b5a290
--- trap 0xc, rip = 0xffffffff80a746c6, rsp = 0xfffffe0077b5a360, rbp = 0xfffffe0077b5a370 ---
db_show_lockmgr() at db_show_lockmgr+0x56/frame 0xfffffe0077b5a370
db_command() at db_command+0x279/frame 0xfffffe0077b5a440
db_command_loop() at db_command_loop+0x64/frame 0xfffffe0077b5a450
db_trap() at db_trap+0xef/frame 0xfffffe0077b5a4e0
kdb_trap() at kdb_trap+0x1c3/frame 0xfffffe0077b5a550
trap() at trap+0x328/frame 0xfffffe0077b5a660
calltrap() at calltrap+0x8/frame 0xfffffe0077b5a660
--- trap 0x3, rip = 0xffffffff80ae443d, rsp = 0xfffffe0077b5a730, rbp = 0xfffffe0077b5a740 ---
kdb_break() at kdb_break+0x4d/frame 0xfffffe0077b5a740
uart_intr() at uart_intr+0x105/frame 0xfffffe0077b5a780
intr_event_handle() at intr_event_handle+0xbc/frame 0xfffffe0077b5a7d0
intr_execute_handlers() at intr_execute_handlers+0x58/frame 0xfffffe0077b5a800
lapic_handle_intr() at lapic_handle_intr+0x6d/frame 0xfffffe0077b5a840
apic_isr1_u() at apic_isr1_u+0xc1/frame 0xfffffe0077b5a910
acpi_cpu_idle() at acpi_cpu_idle+0x2ee/frame 0xfffffe0077b5a960
cpu_idle_acpi() at cpu_idle_acpi+0x3f/frame 0xfffffe0077b5a980
cpu_idle() at cpu_idle+0x8f/frame 0xfffffe0077b5a9a0
sched_idletd() at sched_idletd+0x40a/frame 0xfffffe0077b5aa70
fork_exit() at fork_exit+0x84/frame 0xfffffe0077b5aab0
fork_trampoline() at fork_trampoline+0xe/frame 0xfffffe0077b5aab0
--- trap 0, rip = 0, rsp = 0, rbp = 0 ---
db>
db> continue

root@amdtestr12:/usr/src/sys/amd64/conf # procstat -k 5772
  PID    TID COMM                TDNAME    KSTACK
 5772 100821 python2.7           -         mi_switch sleepq_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_sem2_wait __umtx_op_sem2_wait amd64_syscall fast_syscall_common
 5772 100852 python2.7           -         mi_switch sleepq_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_lock_umutex __umtx_op_wait_umutex amd64_syscall fast_syscall_common
 5772 100853 python2.7           -         mi_switch sleepq_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_lock_umutex __umtx_op_wait_umutex amd64_syscall fast_syscall_common
 5772 100854 python2.7           -         mi_switch sleepq_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_lock_umutex __umtx_op_wait_umutex amd64_syscall fast_syscall_common
 5772 100855 python2.7           -         mi_switch sleepq_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_lock_umutex __umtx_op_wait_umutex amd64_syscall fast_syscall_common
 5772 100856 python2.7           -         mi_switch sleepq_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_lock_umutex __umtx_op_wait_umutex amd64_syscall fast_syscall_common
 5772 100857 python2.7           -         mi_switch sleepq_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_lock_umutex __umtx_op_wait_umutex amd64_syscall fast_syscall_common
 5772 100858 python2.7           -         mi_switch sleepq_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_lock_umutex __umtx_op_wait_umutex amd64_syscall fast_syscall_common
 5772 100859 python2.7           -         mi_switch sleepq_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_lock_umutex __umtx_op_wait_umutex amd64_syscall fast_syscall_common
 5772 100860 python2.7           -         mi_switch sleepq_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_lock_umutex __umtx_op_wait_umutex amd64_syscall fast_syscall_common
 5772 100861 python2.7           -         mi_switch sleepq_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_lock_umutex __umtx_op_wait_umutex amd64_syscall fast_syscall_common
 5772 100862 python2.7           -         mi_switch sleepq_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_lock_umutex __umtx_op_wait_umutex amd64_syscall fast_syscall_common
 5772 100863 python2.7           -         mi_switch sleepq_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_lock_umutex __umtx_op_wait_umutex amd64_syscall fast_syscall_common
root@amdtestr12:/usr/src/sys/amd64/conf #
(In reply to mike from comment #9)
> Hi, apart from KDB_UNATTENDED these options are all part of a standard
> GENERIC kernel on HEAD, no ?

They are, yes. I suspect we're going to add debugging (e.g. KTR) to the umtx kernel routines.
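(For reference, and as a sketch only, not the actual planned change: KTR tracing is normally compiled in with kernel options along the lines below, and the trace buffer is read back with ktrdump(8) or ddb's "show ktr". The event classes here are illustrative placeholders; tracing the umtx paths specifically would need its own additions.)

options KTR
options KTR_ENTRIES=262144              # must be a power of two
options KTR_COMPILE=(KTR_LOCK|KTR_PROC)
options KTR_MASK=(KTR_LOCK)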
(In reply to mike from comment #9) These threads locked in the "umtxn" state remind me of https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=186114, which revealed long-standing bugs in our libc being not cancel-safe for multi-threaded code (fixed now). The problem was a race and could be reproduced only under a very specific load pattern, leading to a mutex lock leak and a subsequent live-lock. I suspect we may be facing a similar problem here that is not hardware-specific, but is more likely to hit on specific hardware due to different timings.
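(For readers unfamiliar with that bug class, a minimal self-contained sketch — illustrative only, not the libc code in question — of how a thread cancelled at a cancellation point while holding a mutex leaks the lock, after which every later locker blocks forever:)

/* cc -o leak leak.c -lpthread */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static int fds[2];

static void *
worker(void *arg)
{
	char c;

	(void)arg;
	pthread_mutex_lock(&m);
	/*
	 * read(2) is a cancellation point.  If this thread is cancelled
	 * while blocked here and no cleanup handler was pushed, nothing
	 * ever unlocks m: the lock has "leaked".
	 */
	(void)read(fds[0], &c, 1);
	pthread_mutex_unlock(&m);	/* never reached if cancelled */
	return (NULL);
}

int
main(void)
{
	pthread_t t;

	(void)pipe(fds);		/* nothing is ever written */
	pthread_create(&t, NULL, worker, NULL);
	sleep(1);			/* let worker take the lock */
	pthread_cancel(t);
	pthread_join(t, NULL);
	/* EBUSY here demonstrates the leaked lock / future live-lock. */
	printf("trylock after cancel: %d\n", pthread_mutex_trylock(&m));
	return (0);
}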
Adding kib@FreeBSD.org, who was a hero of the similar-looking PR 186114. Konstantin had some debugging patches to deal with the "umtxn" problem that time; maybe he can help us this time too.
(In reply to Eugene Grosbein from comment #11) Re: timing issues, I did try it on a number of different Intel CPUs (e.g. Xeon(R) CPU E5-1650) of various core densities and speeds and was not able to reproduce the error. However, a number of Ryzen and Threadripper CPUs of different speeds and core counts were able to reproduce it. I also modified the BIOS settings on my unit to turn off HT and change core counts, and it did not impact the compile hang issue.
(In reply to Eugene Grosbein from comment #12) Compile everything with debugging symbols, i.e. rtld/libc/libthr and the application (the python interpreter). Then take the backtrace, and print out the state of the libthr locking objects which block the threads. This might well be a python issue and not a FreeBSD or FreeBSD+Zen problem. You need to investigate the app first.
(In reply to Konstantin Belousov from comment #14) Thanks. I personally have neither this problem nor AMD hardware to reproduce it, but I hope Mike will benefit from this hint.
(In reply to Eugene Grosbein from comment #15) I built world with the following in /etc/make.conf:

KERNCONF=DEBUG
DEBUG_FLAGS=-g
WITH_DEBUG=YES

and then rebuilt python. I am not sure what needs to be done in the debugger, however? Note, this box is also in the netperf cluster, zoo.freebsd.org, port 2010 off pm25 (.191).
Created attachment 190365 [details]
gdb output of stuck python process
(In reply to mike from comment #17) Rebuild libthr with the flags "-g -O0" and reinstall it. I need a backtrace of all threads in the process. A plain "bt" for each thread is enough; "bt all" gives too much data for now. Do not abandon the debugger session after that; I will need further data which I can identify only by looking at the backtraces above.
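(A sketch of that rebuild, assuming the stock /usr/src layout; DEBUG_FLAGS is the standard src build knob, and the trailing -O0 overrides the default optimization level:)

# cd /usr/src/lib/libthr
# make clean
# make DEBUG_FLAGS="-g -O0" all install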
(In reply to Konstantin Belousov from comment #18) Interesting. With -g -O0, I don't get the build hangs. So far, in a loop, the port has built 9 times; normally, it hangs 100% of the time. Could a particular optimization be breaking things?
(In reply to mike from comment #19) It took a few more runs, but after 10 builds, the build hung as usual. Attaching the output.
Created attachment 190376 [details]
bt and listing of threads

Note, this box is attached off zoo if you want to log in and look around: 192.168.5.191
(In reply to mike from comment #21) It looks like after:

(gdb) thread 11
[Switching to thread 11 (LWP 101275 of process 38415)]
#0  _umtx_op_err () at /usr/src/lib/libthr/arch/amd64/amd64/_umtx_op_err.S:37

you could have then typed bt as a command to get what you were trying for with "bt thread 11".
(In reply to Mark Millard from comment #22) Hi, I installed gdb from ports, as it doesn't seem to be installed by default in HEAD anymore? "bt thread 11" doesn't seem to work in 8.x:

(gdb) bt thread 11
A syntax error in expression, near `thread 11'.
(gdb)
(In reply to mike from comment #23) Try:

(gdb) thread 11
[Switching to thread 11 (LWP 101275 of process 38415)]
#0  _umtx_op_err () at /usr/src/lib/libthr/arch/amd64/amd64/_umtx_op_err.S:37
(gdb) bt
. . .

In other words: first switch to thread 11, then do bt without mentioning thread 11.
It seems that gdb can do 'thread apply all bt'. At least my copy of gdb 8.1 can.
Created attachment 190385 [details]
thread apply all bt, with info thread at the end
(In reply to mike from comment #26) Please switch to Thread 2 (LWP 101302) and do:
- on frame 2, "print *curthread" and "print *m"
- on frame 4, "print *mutex"
- on frame 5, "print *fp"
(In reply to Konstantin Belousov from comment #27)

(gdb) thread 2
[Switching to thread 2 (LWP 101302 of process 48133)]
#0  _umtx_op_err () at /usr/src/lib/libthr/arch/amd64/amd64/_umtx_op_err.S:37
37      RSYSCALL_ERR(_umtx_op)
(gdb) frame 2
#2  0x000000080065bf96 in mutex_lock_sleep (curthread=0x80a5cd300, m=0x8008f72c0, abstime=0x0)
    at /usr/src/lib/libthr/thread/thr_mutex.c:693
693             ret = __thr_umutex_lock(&m->m_lock, id);
(gdb) print *m
$1 = {m_lock = {m_owner = -2147382339, m_flags = 0, m_ceilings = {0, 0}, m_rb_lnk = 0, m_spare = {0, 0}},
  m_flags = 1, m_count = 0, m_spinloops = 0, m_yieldloops = 0, m_ps = 0, m_qe = {tqe_next = 0x0,
  tqe_prev = 0x80a5cf7a0}, m_pqe = {tqe_next = 0x0, tqe_prev = 0x80a5cf7b0}, m_rb_prev = 0x0}
(gdb) print *curthread
$2 = {tid = 101302, lock = {m_owner = 0, m_flags = 0, m_ceilings = {0, 0}, m_rb_lnk = 0, m_spare = {0, 0}},
  cycle = 0, locklevel = 0, critical_count = 0, sigblock = 0, tle = {tqe_next = 0x8008f9000,
  tqe_prev = 0x80a5cd838}, gcle = {tqe_next = 0x0, tqe_prev = 0x0}, hle = {le_next = 0x0,
  le_prev = 0x800876598}, wle = {tqe_next = 0x0, tqe_prev = 0x0}, refcount = 1,
  start_routine = 0x800ba88f0 <t_bootstrap>, arg = 0x80b9d9720, attr = {sched_policy = 2,
  sched_inherit = 4, prio = 0, suspend = 0, flags = 2, stackaddr_attr = 0x7fffdfbfd000,
  stacksize_attr = 4194304, guardsize_attr = 4096, cpuset = 0x0, cpusetsize = 0}, cancel_enable = 1,
  cancel_pending = 0, cancel_point = 0, no_cancel = 0, cancel_async = 0, cancelling = 0,
  sigmask = {__bits = {0, 0, 0, 0}}, unblock_sigcancel = 0, in_sigsuspend = 0,
  deferred_siginfo = {si_signo = 0, si_errno = 0, si_code = 0, si_pid = 0, si_uid = 0, si_status = 0,
  si_addr = 0x0, si_value = {sival_int = 0, sival_ptr = 0x0, sigval_int = 0, sigval_ptr = 0x0},
  _reason = {_fault = {_trapno = 0}, _timer = {_timerid = 0, _overrun = 0}, _mesgq = {_mqd = 0},
  _poll = {_band = 0}, __spare__ = {__spare1__ = 0, __spare2__ = {0, 0, 0, 0, 0, 0, 0}}}},
  deferred_sigmask = {__bits = {0, 0, 0, 0}}, deferred_sigact = {__sigaction_u = {__sa_handler = 0x0,
  __sa_sigaction = 0x0}, sa_flags = 0, sa_mask = {__bits = {0, 0, 0, 0}}}, deferred_run = 0,
  force_exit = 0, state = PS_RUNNING, error = 0, joiner = 0x0, flags = 8, tlflags = 2,
  mq = {{tqh_first = 0x0, tqh_last = 0x80a5cd4a0}, {tqh_first = 0x0, tqh_last = 0x80a5cd4b0},
  {tqh_first = 0x0, tqh_last = 0x80a5cd4c0}, {tqh_first = 0x0, tqh_last = 0x80a5cd4d0},
  {tqh_first = 0x0, tqh_last = 0x80a5cd4e0}, {tqh_first = 0x0, tqh_last = 0x80a5cd4f0}}, ret = 0x0,
  specific = 0x0, specific_data_count = 0, rdlock_count = 0, rtld_bits = 0, tcb = 0x8008de8b0,
  cleanup = 0x0, ex = {exception_class = 0, exception_cleanup = 0x0, private_1 = 0, private_2 = 0},
  unwind_stackend = 0x7fffdfffd000, unwind_disabled = 0, magic = 3499860245, report_events = 0,
  event_mask = 0, event_buf = {event = TD_EVENT_NONE, th_p = 0, data = 0}, wchan = 0x0,
  mutex_obj = 0x0, will_sleep = 0, nwaiter_defer = 0, robust_inited = 0, robust_list = 0,
  priv_robust_list = 0, inact_mtx = 0, defer_waiters = {0x0 <repeats 50 times>},
  wake_addr = 0x805fbb000, sleepqueue = 0x80676f740}
(gdb) frame 4
#4  0x000000080065965e in __pthread_mutex_lock (mutex=0x8014829c8) at /usr/src/lib/libthr/thread/thr_mutex.c:746
746             ret = mutex_lock_common(m, NULL, false, false);
(gdb) print *mutex
$3 = (pthread_mutex_t) 0x8008f72c0
(gdb) frame 5
#5  0x0000000801245667 in _flockfile (fp=0x801482930) at /usr/src/lib/libc/stdio/_flock_stub.c:75
75              _pthread_mutex_lock(&fp->_fl_mutex);
(gdb) print *fp
$4 = {_p = 0x0, _r = 0, _w = 0, _flags = 4, _file = 3, _bf = {_base = 0x0, _size = 0}, _lbfsize = 0,
  _cookie = 0x801482930, _close = 0x8012453c0 <__sclose>, _read = 0x801245370 <__sread>,
  _seek = 0x8012453b0 <__sseek>, _write = 0x801245390 <__swrite>, _ub = {_base = 0x0, _size = 0},
  _up = 0x0, _ur = 0, _ubuf = "\000\000", _nbuf = "", _lb = {_base = 0x0, _size = 0},
  _blksize = 59904, _offset = 2279, _fl_mutex = 0x8008f72c0, _fl_owner = 0x80a5cf600, _fl_count = 1,
  _orientation = 0, _mbstate = {__mbstate8 = '\000' <repeats 127 times>, _mbstateL = 0}, _flags2 = 0}
(gdb)
(In reply to mike from comment #28) Now please switch to thread 9, and from frame 6, do "print *f" and "print *(f->f_fp)".
(In reply to Konstantin Belousov from comment #29)

(gdb) thread 9
[Switching to thread 9 (LWP 101309 of process 48133)]
#0  _umtx_op () at _umtx_op.S:3
3       RSYSCALL(_umtx_op)
(gdb) frame 6
#6  0x0000000800a85aa5 in file_close (f=0x80fd91b70) at Objects/fileobject.c:669
669             PyObject *sts = close_the_file(f);
(gdb) print *f
$5 = {ob_refcnt = 2, ob_type = 0x800e1baf0 <PyFile_Type>, f_fp = 0x0,
  f_name = '/usr/ports/net/samba47/work/samba-4.7.4/bin/default/docs-xml/manpages/net.8',
  f_mode = 'rb', f_close = 0x801237880 <fclose>, f_softspace = 0, f_binary = 1, f_buf = 0x0,
  f_bufend = 0x0, f_bufptr = 0x0, f_setbuf = 0x0, f_univ_newline = 0, f_newlinetypes = 0,
  f_skipnextlf = 0, f_encoding = None, f_errors = None, weakreflist = 0x0, unlocked_count = 0,
  readable = 1, writable = 0}
(gdb) print *(f->f_fp)
Cannot access memory at address 0x0
(gdb) print *(f->f_fp)
Cannot access memory at address 0x0
(gdb)
(In reply to mike from comment #30) Not useful, indeed. Please, from the same thread 9, change to frame 7 and do "print local_fp", "print *local_fp".
(In reply to Konstantin Belousov from comment #31) It's not there?

(gdb) frame 7
#7  0x0000000800b55b5b in call_function (pp_stack=0x7fffde3f3148, oparg=0) at Python/ceval.c:4341
4341                    C_TRACE(x, (*meth)(self,NULL));
(gdb) list
4336                PCALL(PCALL_CFUNCTION);
4337                if (flags & (METH_NOARGS | METH_O)) {
4338                    PyCFunction meth = PyCFunction_GET_FUNCTION(func);
4339                    PyObject *self = PyCFunction_GET_SELF(func);
4340                    if (flags & METH_NOARGS && na == 0) {
4341                        C_TRACE(x, (*meth)(self,NULL));
4342                    }
4343                    else if (flags & METH_O && na == 1) {
4344                        PyObject *arg = EXT_POP(*pp_stack);
4345                        C_TRACE(x, (*meth)(self,arg));
(gdb) print local_fp
No symbol "local_fp" in current context.
(gdb) print *local_fp
No symbol "local_fp" in current context.
(gdb)
(In reply to mike from comment #32) Perhaps you did not switch to thread 9 before selecting the frame?
(In reply to Konstantin Belousov from comment #33)

gdb attach 48133
GNU gdb (GDB) 8.0.1 [GDB v8.0.1 for FreeBSD]
Copyright (C) 2017 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-portbld-freebsd12.0".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
attach: No such file or directory.
Attaching to process 48133
Reading symbols from /usr/local/bin/python2.7...done.
[New LWP 101302 of process 48133]
[New LWP 101303 of process 48133]
[New LWP 101304 of process 48133]
[New LWP 101305 of process 48133]
[New LWP 101306 of process 48133]
[New LWP 101307 of process 48133]
[New LWP 101308 of process 48133]
[New LWP 101309 of process 48133]
[New LWP 101310 of process 48133]
[New LWP 101311 of process 48133]
[New LWP 101312 of process 48133]
[New LWP 101313 of process 48133]
Reading symbols from /usr/obj/usr/src/amd64.amd64/lib/libthr/libthr.so.3.full...done.
Reading symbols from /usr/local/lib/libpython2.7.so.1...done.
Reading symbols from /usr/obj/usr/src/amd64.amd64/lib/libdl/libdl.so.1.full...done.
Reading symbols from /usr/local/lib/libintl.so.8...done.
Reading symbols from /usr/obj/usr/src/amd64.amd64/lib/libutil/libutil.so.9.full...done.
Reading symbols from /usr/obj/usr/src/amd64.amd64/lib/msun/libm.so.5.full...done.
Reading symbols from /usr/obj/usr/src/amd64.amd64/lib/libc/libc.so.7.full...done.
Reading symbols from /usr/local/lib/python2.7/lib-dynload/_locale.so...done.
Reading symbols from /usr/local/lib/python2.7/lib-dynload/_collections.so...done.
Reading symbols from /usr/local/lib/python2.7/lib-dynload/operator.so...done.
Reading symbols from /usr/local/lib/python2.7/lib-dynload/itertools.so...done.
Reading symbols from /usr/local/lib/python2.7/lib-dynload/_heapq.so...done.
Reading symbols from /usr/local/lib/python2.7/lib-dynload/zlib.so...done.
Reading symbols from /lib/libz.so.6...Reading symbols from /usr/lib/debug//lib/libz.so.6.debug...done.
done.
Reading symbols from /usr/local/lib/python2.7/lib-dynload/bz2.so...done.
Reading symbols from /usr/lib/libbz2.so.4...Reading symbols from /usr/lib/debug//usr/lib/libbz2.so.4.debug...done.
done.
Reading symbols from /usr/local/lib/python2.7/lib-dynload/grp.so...done.
Reading symbols from /usr/local/lib/python2.7/lib-dynload/datetime.so...done.
Reading symbols from /usr/local/lib/python2.7/lib-dynload/strop.so...done.
Reading symbols from /usr/local/lib/python2.7/lib-dynload/time.so...done.
Reading symbols from /usr/local/lib/python2.7/lib-dynload/select.so...done.
Reading symbols from /usr/local/lib/python2.7/lib-dynload/fcntl.so...done.
Reading symbols from /usr/local/lib/python2.7/lib-dynload/_struct.so...done.
Reading symbols from /usr/local/lib/python2.7/lib-dynload/binascii.so...done.
Reading symbols from /usr/local/lib/python2.7/lib-dynload/cStringIO.so...done.
Reading symbols from /usr/local/lib/python2.7/lib-dynload/_ctypes.so...done.
Reading symbols from /usr/local/lib/libffi.so.6...(no debugging symbols found)...done.
Reading symbols from /usr/local/lib/python2.7/lib-dynload/_hashlib.so...done.
Reading symbols from /usr/lib/libssl.so.8...Reading symbols from /usr/lib/debug//usr/lib/libssl.so.8.debug...done.
done.
Reading symbols from /lib/libcrypto.so.8...Reading symbols from /usr/lib/debug//lib/libcrypto.so.8.debug...done.
done.
Reading symbols from /usr/local/lib/python2.7/lib-dynload/termios.so...done.
Reading symbols from /usr/local/lib/python2.7/lib-dynload/cPickle.so...done.
Reading symbols from /usr/local/lib/python2.7/lib-dynload/_io.so...done.
Reading symbols from /usr/local/lib/python2.7/lib-dynload/math.so...done.
Reading symbols from /usr/local/lib/python2.7/lib-dynload/_random.so...done.
Reading symbols from /usr/local/lib/python2.7/lib-dynload/_functools.so...done.
Reading symbols from /usr/local/lib/python2.7/lib-dynload/_socket.so...done.
Reading symbols from /usr/local/lib/python2.7/lib-dynload/_ssl.so...done.
Reading symbols from /usr/local/lib/python2.7/lib-dynload/unicodedata.so...done.
Reading symbols from /usr/local/lib/python2.7/lib-dynload/array.so...done.
Reading symbols from /usr/local/lib/python2.7/lib-dynload/_bisect.so...done.
Reading symbols from /libexec/ld-elf.so.1...Reading symbols from /usr/lib/debug//libexec/ld-elf.so.1.debug...done.
done.
[Switching to LWP 100848 of process 48133]
_umtx_op () at _umtx_op.S:3
3       RSYSCALL(_umtx_op)
(gdb) thread 9
[Switching to thread 9 (LWP 101309 of process 48133)]
#0  _umtx_op () at _umtx_op.S:3
3       RSYSCALL(_umtx_op)
(gdb) frame 7
#7  0x0000000800b55b5b in call_function (pp_stack=0x7fffde3f3148, oparg=0) at Python/ceval.c:4341
4341                    C_TRACE(x, (*meth)(self,NULL));
(gdb) print local_fp
No symbol "local_fp" in current context.
(gdb)
(In reply to Konstantin Belousov from comment #33) Frame 5 has that var?

(gdb) thread 9
[Switching to thread 9 (LWP 101309 of process 48133)]
#5  0x0000000800a82fb7 in close_the_file (f=0x80fd91b70) at Objects/fileobject.c:457
457         Py_END_ALLOW_THREADS
(gdb) frame 5
#5  0x0000000800a82fb7 in close_the_file (f=0x80fd91b70) at Objects/fileobject.c:457
457         Py_END_ALLOW_THREADS
(gdb) p *local_fp
$3 = {_p = 0x0, _r = 0, _w = 0, _flags = 4, _file = 3, _bf = {_base = 0x0, _size = 0}, _lbfsize = 0,
  _cookie = 0x801482930, _close = 0x8012453c0 <__sclose>, _read = 0x801245370 <__sread>,
  _seek = 0x8012453b0 <__sseek>, _write = 0x801245390 <__swrite>, _ub = {_base = 0x0, _size = 0},
  _up = 0x0, _ur = 0, _ubuf = "\000\000", _nbuf = "", _lb = {_base = 0x0, _size = 0},
  _blksize = 59904, _offset = 2279, _fl_mutex = 0x8008f72c0, _fl_owner = 0x80a5cf600, _fl_count = 1,
  _orientation = 0, _mbstate = {__mbstate8 = '\000' <repeats 127 times>, _mbstateL = 0}, _flags2 = 0}
(gdb) p local_fp
$4 = (FILE *) 0x801482930
(gdb)
(In reply to mike from comment #35) Thank you. And please, from the same frame 5, do "print local_close".
(In reply to Konstantin Belousov from comment #36)

(gdb) print local_close
$5 = (int (*)(FILE *)) 0x801237880 <fclose>
(gdb) list
452                Otherwise this close() will crash when flushing the buffer. */
453             f->f_setbuf = NULL;
454             Py_BEGIN_ALLOW_THREADS
455             errno = 0;
456             sts = (*local_close)(local_fp);
457             Py_END_ALLOW_THREADS
458             f->f_setbuf = local_setbuf;
459             if (sts == EOF)
460                 return PyErr_SetFromErrno(PyExc_IOError);
461             if (sts != 0)
(gdb)
(In reply to mike from comment #37) Try to apply the following debugging patch:

diff --git a/lib/libc/stdio/_flock_stub.c b/lib/libc/stdio/_flock_stub.c
index 069ed64c4e9..64c2d769389 100644
--- a/lib/libc/stdio/_flock_stub.c
+++ b/lib/libc/stdio/_flock_stub.c
@@ -138,4 +138,11 @@ _funlockfile(FILE *fp)
 			_pthread_mutex_unlock(&fp->_fl_mutex);
 		}
 	}
+	else {
+		void *p[3];
+		p[0] = fp;
+		p[1] = curthread;
+		p[2] = fp->_fl_owner;
+		abort2("funlockfile", 3, p);
+	}
 }

I wonder if your build dies with a kernel message after it, instead of hanging.
(In reply to Konstantin Belousov from comment #38)

# patch -p1 < p
Hmm...  Looks like a unified diff to me...
The text leading up to this was:
--------------------------
|--- a/lib/libc/stdio/_flock_stub.c
|+++ b/lib/libc/stdio/_flock_stub.c
--------------------------
Patching file lib/libc/stdio/_flock_stub.c using Plan A...
Hunk #1 succeeded at 138.
done
# cd /usr/src/lib/libc
# make depend && make
#

root@amdtestr12:/usr/src/lib/libc # ldd /usr/local/bin/python2.7
/usr/local/bin/python2.7:
        libthr.so.3 => /usr/obj/usr/src/amd64.amd64/lib/libthr/libthr.so.3.full (0x800645000)
        libpython2.7.so.1 => /usr/local/lib/libpython2.7.so.1 (0x800a00000)
        libdl.so.1 => /usr/obj/usr/src/amd64.amd64/lib/libdl/libdl.so.1.full (0x800878000)
        libintl.so.8 => /usr/local/lib/libintl.so.8 (0x800e71000)
        libutil.so.9 => /usr/obj/usr/src/amd64.amd64/lib/libutil/libutil.so.9.full (0x80087c000)
        libm.so.5 => /usr/obj/usr/src/amd64.amd64/lib/msun/libm.so.5.full (0x800892000)
        libc.so.7 => /usr/obj/usr/src/amd64.amd64/lib/libc/libc.so.7.full (0x80107c000)
root@amdtestr12:/usr/src/lib/libc # ls -l /usr/obj/usr/src/amd64.amd64/lib/libc/libc.so.7.full
-rwxr-xr-x  1 root  wheel  7919920 Feb  7 14:02 /usr/obj/usr/src/amd64.amd64/lib/libc/libc.so.7.full
root@amdtestr12:/usr/src/lib/libc # date
Wed Feb  7 14:03:39 EST 2018
root@amdtestr12:/usr/src/lib/libc #

Starting the build cycle. With the debug version of libthr it seems to be less frequent. Thank you for all your help so far!!
(In reply to Konstantin Belousov from comment #38) python2.7(pid 71466 uid 0) aborted: funlockfile(0x80146caa0, 0x80a718600, 0)
(In reply to mike from comment #40) There should be a core file from the python process somewhere. Please load it into gdb and gather the same information (at least backtrace of all threads) as you collected for the deadlock case.
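(Assuming the default kern.corefile pattern of %N.core, the core should be python2.7.core in the directory the process was running in; something along these lines:)

# gdb /usr/local/bin/python2.7 python2.7.core
(gdb) thread apply all bt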
(In reply to Konstantin Belousov from comment #41) It doesn't seem to be dumping core for some reason?

root@amdtestr12:/ # cd /
root@amdtestr12:/ # find . -mtime -2h -type f -name "*.core"
kern.corefile: %N.core
kern.compress_user_cores_level: 6
kern.compress_user_cores: 0
kern.coredump_devctl: 0
kern.coredump: 1
kern.conftxt: options CONFIG_AUTOGENERATED
kern.compiler_version: FreeBSD clang version 6.0.0 (branches/release_60 323338) (based on LLVM 6.0.0)
kern.coredump_pack_vmmapinfo: 1
kern.coredump_pack_fileinfo: 1
kern.constty_wakeups_per_second: 5
kern.consmsgbuf_size: 8192
kern.consmute: 0
kern.console: ttyu0,ttyv0,gdb,/ttyu0,ttyv0,gdb,
(In reply to mike from comment #42) Check ulimit.
(In reply to Konstantin Belousov from comment #43) It's unlimited, the defaults. I changed the core pattern and tested it with another shell script:

sysctl kern.corefile=/tmp/core/%N.core

root@amdtestr12:/usr/ports/net/samba47 # ls -l /tmp/core/
total 81
-rw-------  1 root  wheel  9031680 Feb  7 15:41 sh.core
root@amdtestr12:/usr/ports/net/samba47 #

I did a kill -11 and it dumped the core file. I adjusted my script:

#!/bin/sh
while true
do
	rm -R work
	ulimit -a
	sleep 2
	make
	date >> /tmp/build.out
	find . -name "*.core" | awk '{print "cp "$1" /tmp/"}' | sh
done

root@amdtestr12:/usr/ports/net/samba47 # ./b
cpu time               (seconds, -t)  unlimited
file size           (512-blocks, -f)  unlimited
data seg size           (kbytes, -d)  33554432
stack size              (kbytes, -s)  524288
core file size      (512-blocks, -c)  unlimited
max memory size         (kbytes, -m)  unlimited
locked memory           (kbytes, -l)  unlimited
max user processes              (-u)  34246
open files                      (-n)  940698
virtual mem size        (kbytes, -v)  unlimited
swap limit              (kbytes, -w)  unlimited
socket buffer size       (bytes, -b)  unlimited
pseudo-terminals                (-p)  unlimited
kqueues                         (-k)  unlimited
umtx shared locks               (-o)  unlimited
(In reply to mike from comment #44) Use this instead.

diff --git a/lib/libc/stdio/_flock_stub.c b/lib/libc/stdio/_flock_stub.c
index 069ed64c4e9..0180f8f285f 100644
--- a/lib/libc/stdio/_flock_stub.c
+++ b/lib/libc/stdio/_flock_stub.c
@@ -45,6 +45,8 @@ __FBSDID("$FreeBSD$");
 #include <string.h>
 #include <pthread.h>
 #include "un-namespace.h"
+#include <signal.h>
+#include <unistd.h>
 
 #include "local.h"
 
@@ -138,4 +140,11 @@ _funlockfile(FILE *fp)
 			_pthread_mutex_unlock(&fp->_fl_mutex);
 		}
 	}
+	else {
+		char x[120];
+		snprintf(x, sizeof(x), "funlockfile %p %p %p\n", fp,
+		    curthread, fp->_fl_owner);
+		write(2, x, strlen(x));
+		raise(SIGBUS);
+	}
 }
Created attachment 190408 [details]
thread apply all bt on the core dump file post debug patch (sig 10)

Patch worked!
Do you have the message it printed before the SIGBUS as well? (If you run it in tmux I'll attach and take a look.)
(In reply to mike from comment #46) What was printed on crash? From thread 1, frame 3, please do "p *fp", "p *(fp->_fl_mutex)".
(In reply to Konstantin Belousov from comment #48)

(gdb) thread 1
[Switching to thread 1 (LWP 101411)]
#0  thr_kill () at thr_kill.S:3
3       RSYSCALL(thr_kill)
(gdb) list
1       #include "compat.h"
2       #include "SYS.h"
3       RSYSCALL(thr_kill)
4       .section .note.GNU-stack,"",%progbits
(gdb) frame 3
#3  0x00000008011d7e15 in fileno (fp=0x80146cd10) at /usr/src/lib/libc/stdio/fileno.c:56
56              FUNLOCKFILE(fp);
(gdb) list
51      {
52              int fd;
53
54              FLOCKFILE(fp);
55              fd = __sfileno(fp);
56              FUNLOCKFILE(fp);
57
58              return (fd);
59      }
60
(gdb) print *fp
$1 = {_p = 0x0, _r = 0, _w = 0, _flags = 4, _file = 10, _bf = {_base = 0x0, _size = 0}, _lbfsize = 0,
  _cookie = 0x80146cd10, _close = 0x8011a17d0 <__sclose>, _read = 0x8011a1780 <__sread>,
  _seek = 0x8011a17c0 <__sseek>, _write = 0x8011a17a0 <__swrite>, _ub = {_base = 0x0, _size = 0},
  _up = 0x0, _ur = 0, _ubuf = "\000\000", _nbuf = "", _lb = {_base = 0x0, _size = 0},
  _blksize = 4096, _offset = 0, _fl_mutex = 0x8061c40c0, _fl_owner = 0x80a718b00, _fl_count = 1,
  _orientation = 0, _mbstate = {__mbstate8 = '\000' <repeats 127 times>, _mbstateL = 0}, _flags2 = 0}
(gdb) print *(fp->_fl_mutex)
$2 = {m_lock = {m_owner = 101411, m_flags = 0, m_ceilings = {0, 0}, m_rb_lnk = 0, m_spare = {0, 0}},
  m_flags = 1, m_count = 0, m_spinloops = 0, m_yieldloops = 0, m_ps = 0, m_qe = {tqe_next = 0x0,
  tqe_prev = 0x80a718ca0}, m_pqe = {tqe_next = 0x0, tqe_prev = 0x80a718cb0}, m_rb_prev = 0x0}
(gdb)

python2.7(pid 71466 uid 0) aborted: funlockfile(0x80146caa0, 0x80a718600, 0)
python2.7(pid 28229 uid 0) aborted: funlockfile(0x80146caa0, 0x80b3ea900, 0)
python2.7(pid 45173 uid 0) aborted: funlockfile(0x80146caa0, 0x80c867f00, 0)
pid 98434 (sh), uid 0: exited on signal 11 (core dumped)
pid 2299 (sh), uid 0: exited on signal 6 (core dumped)
pid 99069 (python2.7), uid 0: exited on signal 10 (core dumped)
pid 91792 (python2.7), uid 0: exited on signal 10 (core dumped)
pid 77867 (python2.7), uid 0: exited on signal 10 (core dumped)
pid 99925 (python2.7), uid 0: exited on signal 10 (core dumped)
pid 42102 (python2.7), uid 0: exited on signal 10 (core dumped)
pid 61581 (python2.7), uid 0: exited on signal 10 (core dumped)
pid 31770 (python2.7), uid 0: exited on signal 10 (core dumped)
pid 17764 (python2.7), uid 0: exited on signal 10 (core dumped)
(In reply to Konstantin Belousov from comment #48) Forgot to paste:

funlockfile 0x80146caa0 0x80d75ff00 0x0
Bus error (core dumped)
(In reply to mike from comment #50) That is not the abort line that was printed for the core analyzed; I expect the first address printed to be 0x80146cd10.
Created attachment 190428 [details]
Disable RDFSBASE on zens.

This is a pure guess, based on the _impression_ I get from looking at the provided logs. Try the patch; it might change things, or it might just panic at boot, since documentation about Zens is non-existent.
(In reply to Konstantin Belousov from comment #52) No luck. It survived a lot of builds, but still dumped core:

funlockfile 0x80146ce48 0x80bcd1500 0x0
Bus error (core dumped)

BTW, DragonFly and Linux seem to have noticed Ryzen issues as well. Not sure if it's related to what I am seeing on FreeBSD?
https://utcc.utoronto.ca/~cks/space/blog/linux/KernelRcuNocbsMeaning
Created attachment 190430 [details]
latest backtrace post kernel patch
(In reply to mike from comment #53) The article you linked does not provide any useful information; nobody knows why this knob helped. That said, I am both disappointed and relieved that the patch did not help. So it is not a problem with the %fs/%gs handling; the issue is completely mysterious then. Can you provide me with your libc.so.7 with the debugging patch applied, as used to obtain the logs?
(In reply to Konstantin Belousov from comment #55) The DragonFly commit is mentioned here:
http://gitweb.dragonflybsd.org/dragonfly.git/commitdiff/b48dd28447fc8ef62fbc963accd301557fd9ac20
But FreeBSD already has this set? libc.tgz is in your account on zoo.freebsd.org. If there are other files you need, the box is 192.168.5.191.
(In reply to mike from comment #56) Yes, that was already handled. And the hang there is a whole-system hang. Please apply this debugging patch and try to reproduce the issue one more time.

diff --git a/lib/libc/stdio/_flock_stub.c b/lib/libc/stdio/_flock_stub.c
index 069ed64c4e9..99e1af6fbfb 100644
--- a/lib/libc/stdio/_flock_stub.c
+++ b/lib/libc/stdio/_flock_stub.c
@@ -45,6 +45,8 @@ __FBSDID("$FreeBSD$");
 #include <string.h>
 #include <pthread.h>
 #include "un-namespace.h"
+#include <signal.h>
+#include <unistd.h>
 
 #include "local.h"
 
@@ -116,6 +118,8 @@ _funlockfile(FILE *fp)
 	/*
 	 * Check if this file is owned by the current thread:
 	 */
+	while (*(volatile struct thread * volatile *)&fp->_fl_owner == NULL)
+		;
 	if (fp->_fl_owner == curthread) {
 		/*
 		 * Check if this thread has locked the FILE
@@ -138,4 +142,11 @@ _funlockfile(FILE *fp)
 			_pthread_mutex_unlock(&fp->_fl_mutex);
 		}
 	}
+	else {
+		char x[120];
+		snprintf(x, sizeof(x), "funlockfile %p %p %p\n", fp,
+		    curthread, fp->_fl_owner);
+		write(2, x, strlen(x));
+		raise(SIGBUS);
+	}
 }
Created attachment 190431 [details]
with while loop patch

funlockfile 0x80146caa0 0x80c83b800 0x0
Bus error (core dumped)
(In reply to mike from comment #58) Let's try a slight variation.

diff --git a/lib/libc/stdio/_flock_stub.c b/lib/libc/stdio/_flock_stub.c
index 069ed64c4e9..935bf18b529 100644
--- a/lib/libc/stdio/_flock_stub.c
+++ b/lib/libc/stdio/_flock_stub.c
@@ -39,12 +39,16 @@
 #include <sys/cdefs.h>
 __FBSDID("$FreeBSD$");
 
+#include <sys/types.h>
+#include <machine/atomic.h>
 #include "namespace.h"
 #include <stdio.h>
 #include <stdlib.h>
 #include <string.h>
 #include <pthread.h>
 #include "un-namespace.h"
+#include <signal.h>
+#include <unistd.h>
 
 #include "local.h"
 
@@ -112,11 +116,14 @@ void
 _funlockfile(FILE *fp)
 {
 	pthread_t curthread = _pthread_self();
+	pthread_t fl_owner;
 
 	/*
 	 * Check if this file is owned by the current thread:
 	 */
-	if (fp->_fl_owner == curthread) {
+	while ((fl_owner = (pthread_t)atomic_load_ptr(&fp->_fl_owner)) == NULL)
+		;
+	if (fl_owner == curthread) {
 		/*
 		 * Check if this thread has locked the FILE
 		 * more than once:
@@ -138,4 +145,11 @@ _funlockfile(FILE *fp)
 			_pthread_mutex_unlock(&fp->_fl_mutex);
 		}
 	}
+	else {
+		char x[120];
+		snprintf(x, sizeof(x), "funlockfile %p %p %p %p\n", fp,
+		    curthread, fl_owner, fp->_fl_owner);
+		write(2, x, strlen(x));
+		raise(SIGBUS);
+	}
 }
(In reply to Konstantin Belousov from comment #59) I have to confess, I might have borked the previous patch and left the while loop commented out :( I have made sure it's right this time. Five builds so far.

root@amdtestr12:/usr/src/lib/libc/stdio # uname -a
FreeBSD amdtestr12.sentex.ca 12.0-CURRENT FreeBSD 12.0-CURRENT #2 r328667M: Thu Feb 8 07:14:25 EST 2018 mdtancsa@amdtestr12.sentex.ca:/usr/obj/usr/src/amd64.amd64/sys/DEBUG amd64
root@amdtestr12:/usr/src/lib/libc/stdio # rm _flock_stub.c
root@amdtestr12:/usr/src/lib/libc/stdio # cd /usr/src
root@amdtestr12:/usr/src #
root@amdtestr12:/usr/src # svnlite update -r328667
Updating '.':
Restored 'lib/libc/stdio/_flock_stub.c'
At revision 328667.
root@amdtestr12:/usr/src #
root@amdtestr12:/usr/src # patch -p1 < p
Hmm...  Looks like a unified diff to me...
The text leading up to this was:
--------------------------
|--- a/lib/libc/stdio/_flock_stub.c
|+++ b/lib/libc/stdio/_flock_stub.c
--------------------------
Patching file lib/libc/stdio/_flock_stub.c using Plan A...
Hunk #1 succeeded at 39 with fuzz 2.
Hunk #2 succeeded at 116.
Hunk #3 succeeded at 145.
done
root@amdtestr12:/usr/src # cd lib/libc; make depend; make
root@amdtestr12:/usr/src/lib/libc # date
Thu Feb  8 13:23:36 EST 2018
root@amdtestr12:/usr/src/lib/libc # ldd /usr/local/bin/python2.7
/usr/local/bin/python2.7:
        libthr.so.3 => /usr/obj/usr/src/amd64.amd64/lib/libthr/libthr.so.3.full (0x800645000)
        libpython2.7.so.1 => /usr/local/lib/libpython2.7.so.1 (0x800a00000)
        libdl.so.1 => /usr/obj/usr/src/amd64.amd64/lib/libdl/libdl.so.1.full (0x800878000)
        libintl.so.8 => /usr/local/lib/libintl.so.8 (0x800e71000)
        libutil.so.9 => /usr/obj/usr/src/amd64.amd64/lib/libutil/libutil.so.9.full (0x80087c000)
        libm.so.5 => /usr/obj/usr/src/amd64.amd64/lib/msun/libm.so.5.full (0x800892000)
        libc.so.7 => /usr/obj/usr/src/amd64.amd64/lib/libc/libc.so.7.full (0x80107c000)
root@amdtestr12:/usr/src/lib/libc # ls -l /usr/obj/usr/src/amd64.amd64/lib/libc/libc.so.7.full
-rwxr-xr-x  1 root  wheel  7919824 Feb  8 13:23 /usr/obj/usr/src/amd64.amd64/lib/libc/libc.so.7.full
root@amdtestr12:/usr/src/lib/libc #
(In reply to Konstantin Belousov from comment #59) It's managed 22 builds now without issue; prior, I could only do 10 at most. I have a separate RELENG_11 box as well; I take it the same patch will work there too? It applies cleanly and builds, so I will install and test it there as well. I am curious to see if this fixes the other builds like openjdk: it didn't involve python, but it too would hang. I will let things run for 24 hrs, unless there is another crash. Thank you again for all your work on this! Next week we get an EPYC-based system to eval, and I will test against it too.
(In reply to mike from comment #61) Do not consider it a fix. I do not have any explanation of what is going on there.
(In reply to Konstantin Belousov from comment #62) Try this.

diff --git a/sys/amd64/amd64/cpu_switch.S b/sys/amd64/amd64/cpu_switch.S
index 25e3a592171..ef70eefdf37 100644
--- a/sys/amd64/amd64/cpu_switch.S
+++ b/sys/amd64/amd64/cpu_switch.S
@@ -155,6 +155,7 @@ ctx_switch_xsave:
 	movq	%rdx,%r15
 	movq	%rsi,%rdi
 	callq	pmap_activate_sw
+	mfence
 	SETLK	%r15,TD_LOCK(%r13)		/* Release the old thread */
 sw1:
 	movq	TD_PCB(%r12),%r8
(In reply to Konstantin Belousov from comment #63) Thanks! I am running with a new kernel. Should I run python linked to the unmodified libs?
(In reply to mike from comment #64) Yes, libc must be pristine, otherwise it already hides the bug.
(In reply to Konstantin Belousov from comment #65) OK, I am still rebuilding world. While it rebuilds, I tried applying this patch to a stock RELENG_11 box, and it didn't seem to help. Should I try on RELENG_11 with the other kernel patch ("Disable RDFSBASE on zens")?
(In reply to mike from comment #66) Let's make things clear. Are all the statements below true?

1. The while(atomic_load()) patch to libc survived the night run of the continuous samba rebuilds.
2. The cpu_switch.S/mfence patch already failed the samba build test on releng_11.

If yes, there is no need to do anything with any of the patches I posted. After the confirmation, there is one more thing I want to try before making a conclusion.
(In reply to Konstantin Belousov from comment #67) yes and yes
(In reply to mike from comment #68) Please try this; it should work, and then it would be the last thing I am asking you to test.

diff --git a/lib/libc/stdio/_flock_stub.c b/lib/libc/stdio/_flock_stub.c
index 069ed64c4e9..816e958b1f0 100644
--- a/lib/libc/stdio/_flock_stub.c
+++ b/lib/libc/stdio/_flock_stub.c
@@ -39,12 +39,16 @@
 #include <sys/cdefs.h>
 __FBSDID("$FreeBSD$");
 
+#include <sys/types.h>
+#include <machine/cpufunc.h>
 #include "namespace.h"
 #include <stdio.h>
 #include <stdlib.h>
 #include <string.h>
 #include <pthread.h>
 #include "un-namespace.h"
+#include <signal.h>
+#include <unistd.h>
 
 #include "local.h"
 
@@ -112,11 +116,17 @@ void
 _funlockfile(FILE *fp)
 {
 	pthread_t curthread = _pthread_self();
+	pthread_t owner;
 
 	/*
	 * Check if this file is owned by the current thread:
	 */
-	if (fp->_fl_owner == curthread) {
+	owner = fp->_fl_owner;
+	if (owner == NULL) {
+		mfence();
+		owner = fp->_fl_owner;
+	}
+	if (owner == curthread) {
 		/*
 		 * Check if this thread has locked the FILE
 		 * more than once:
@@ -138,4 +148,11 @@ _funlockfile(FILE *fp)
 			_pthread_mutex_unlock(&fp->_fl_mutex);
 		}
 	}
+	else {
+		char x[120];
+		snprintf(x, sizeof(x), "funlockfile %p %p %p %p\n", fp,
+		    curthread, owner, fp->_fl_owner);
+		write(2, x, strlen(x));
+		raise(SIGBUS);
+	}
 }
(In reply to Konstantin Belousov from comment #69)
OK, just this patch, right? Remove the kernel patches?
(In reply to mike from comment #70) Yes, just that libc patch.
(In reply to Konstantin Belousov from comment #71)
Unfortunately, it's back to the original behaviour, where it just hangs in the usem state.
(In reply to mike from comment #72)
To be safe, I am going to do a complete buildworld/kernel and repeat the tests.
(In reply to mike from comment #72)
I admit that it is strange that it hangs. It is quite possible that the mfence trick does not work, but then the 'else' part of the patch should cause the process to terminate, not hang.
(In reply to Konstantin Belousov from comment #69)
I'm testing net/samba47 builds on my threadripper with a libc including the patch from comment #69 applied to my work tree. It has built successfully many times over the last few hours, whereas it previously hung after fewer than 5 iterations. For reference, the kernel is https://github.com/emaste/freebsd/commit/a838ff863dd2, and libc was built from top of tree (https://github.com/emaste/freebsd/commit/2d476a3255eaadb419fb22d00857fdfe3f87e338), which includes the third iteration of the libc patch at https://github.com/emaste/freebsd/commit/84e5c12ef0216ab2b023a2d40c39ccb442d6cf04
(In reply to Ed Maste from comment #75)
Stalled out again. It managed 15 builds. My RELENG_11 box stalled as well, after 23 builds.

root@amdtestr12:/usr/ports/sysutils/dmidecode # procstat -k 34990
  PID    TID COMM      TDNAME  KSTACK
34990 100918 python2.7 -       mi_switch sleepq_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_sem2_wait __umtx_op_sem2_wait amd64_syscall fast_syscall_common
34990 101251 python2.7 -       mi_switch sleepq_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_lock_umutex __umtx_op_wait_umutex amd64_syscall fast_syscall_common
34990 101259 python2.7 -       mi_switch sleepq_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_lock_umutex __umtx_op_wait_umutex amd64_syscall fast_syscall_common
34990 101261 python2.7 -       mi_switch sleepq_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_lock_umutex __umtx_op_wait_umutex amd64_syscall fast_syscall_common
34990 101263 python2.7 -       mi_switch sleepq_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_lock_umutex __umtx_op_wait_umutex amd64_syscall fast_syscall_common
34990 101264 python2.7 -       mi_switch sleepq_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_lock_umutex __umtx_op_wait_umutex amd64_syscall fast_syscall_common
34990 101265 python2.7 -       mi_switch sleepq_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_lock_umutex __umtx_op_wait_umutex amd64_syscall fast_syscall_common
34990 101266 python2.7 -       mi_switch sleepq_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_lock_umutex __umtx_op_wait_umutex amd64_syscall fast_syscall_common
34990 101267 python2.7 -       mi_switch sleepq_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_lock_umutex __umtx_op_wait_umutex amd64_syscall fast_syscall_common
34990 101269 python2.7 -       mi_switch sleepq_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_lock_umutex __umtx_op_wait_umutex amd64_syscall fast_syscall_common
34990 101270 python2.7 -       mi_switch sleepq_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_lock_umutex __umtx_op_wait_umutex amd64_syscall fast_syscall_common
34990 101271 python2.7 -       mi_switch sleepq_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_lock_umutex __umtx_op_wait_umutex amd64_syscall fast_syscall_common
34990 101273 python2.7 -       mi_switch sleepq_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_lock_umutex __umtx_op_wait_umutex amd64_syscall fast_syscall_common
root@amdtestr12:/usr/ports/sysutils/dmidecode #
(In reply to Ed Maste from comment #75) I spoke too soon; I encountered a stuck build on my threadripper in the configuration described in comment 75.
(In reply to Ed Maste from comment #77)
Ok. It could be a consequence of the discovered issue occurring in another code fragment, or it could be something else. So I can wait for some comments from AMD, or you can independently provide me with the threads' backtraces.
BTW, if using the patch from comment 59, is the hang reproducible?
(In reply to Konstantin Belousov from comment #79)
re: patch 59, I will apply it by itself and try again.

patch -p1 < p.59
Hmm...  Looks like a unified diff to me...
The text leading up to this was:
--------------------------
|--- a/lib/libc/stdio/_flock_stub.c
|+++ b/lib/libc/stdio/_flock_stub.c
--------------------------
Patching file lib/libc/stdio/_flock_stub.c using Plan A...
Hunk #1 succeeded at 37 with fuzz 2 (offset -2 lines).
Hunk #2 succeeded at 114 (offset -2 lines).
Hunk #3 succeeded at 143 (offset -2 lines).
done
(In reply to Konstantin Belousov from comment #79)
> BTW, if using the patch from comment 59, is the hang reproducible?

I got it to hang on RELENG_11 just now. Perhaps it's some combination of the CPU feature patch and removing optimizations from libthr? When debugging, you had me compile it with -g and -O0.
(In reply to mike from comment #81) I can't say anything without debug info. Start with the backtraces, as usual.
Created attachment 190490 [details] process that is hung in usem state with libc patch and python linked to debug libs
(In reply to mike from comment #83)
It is yet another (different) lock leakage. Please switch to thread 4 (LWP 101503) and, from frame 1, do "print *mtx".
(In reply to Konstantin Belousov from comment #84)

(gdb) thread 4
[Switching to thread 4 (LWP 101503 of process 43500)]
#0  _umtx_op_err () at /usr/src/lib/libthr/arch/amd64/amd64/_umtx_op_err.S:37
37      RSYSCALL_ERR(_umtx_op)
(gdb) frame 1
#1  0x0000000800654024 in __thr_umutex_lock (mtx=0x8006640a8, id=101503)
    at /usr/src/lib/libthr/thread/thr_umtx.c:82
82              _umtx_op_err(mtx, UMTX_OP_MUTEX_WAIT, 0, 0, 0);
(gdb) list
77                      return (EOWNERDEAD);
78              if (owner == UMUTEX_RB_NOTRECOV)
79                      return (ENOTRECOVERABLE);
80
81              /* wait in kernel */
82              _umtx_op_err(mtx, UMTX_OP_MUTEX_WAIT, 0, 0, 0);
83          }
84      }
85
86      #define SPINLOOPS 1000
(gdb) print *mtx
$1 = {m_owner = -2147382146, m_flags = 0, m_ceilings = {0, 0}, m_rb_lnk = 0, m_spare = {0, 0}}
(gdb)
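A note on reading that m_owner value: in FreeBSD's struct umutex, the low 31 bits of m_owner hold the owning thread's TID and the top bit is UMUTEX_CONTESTED (sys/umtx.h). Interpreted as an unsigned 32-bit value, -2147382146 is 0x80018c7e: the contested bit plus owner TID 101502, while the thread blocked here is TID 101503. If TID 101502 exited or otherwise never released the mutex, that is the "lock leakage" referred to above. A quick self-contained decode:

#include <stdint.h>
#include <stdio.h>

#define UMUTEX_CONTESTED 0x80000000U	/* top bit of m_owner, from sys/umtx.h */

int
main(void)
{
	uint32_t m_owner = (uint32_t)-2147382146;	/* value printed by gdb */

	printf("contested: %s\n",
	    (m_owner & UMUTEX_CONTESTED) ? "yes" : "no");	/* yes */
	printf("owner tid: %u\n", m_owner & ~UMUTEX_CONTESTED);	/* 101502 */
	return (0);
}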
The patch in https://docs.freebsd.org/cgi/getmsg.cgi?fetch=417183+0+archive/2018/freebsd-hackers/20180211.freebsd-hackers seems to have fixed this issue for me in preliminary testing. On a RELENG_11 box where the problem was easy to reproduce, I was able to do 70 builds overnight in the hypervisor and a similar number in 2 VMs on the box, running HEAD and RELENG_11 with debugging symbols. I will also confirm on the other box that it too no longer has issues.
(In reply to mike from comment #86)
The freebsd-hackers patch is now also available in Phabricator at https://reviews.freebsd.org/D14347

With the D14347 patch applied to the -CURRENT kernel on my Threadripper, I was able to build net/samba47 132 times overnight. Previously it would hang somewhere between the 1st and 20th iteration or so (in some cases with debugging/diagnostic patches from this PR applied).
Can all cores be used to build packages, or does it require BIOS settings to not start the hyperthreading cores? Any significant performance impact?
(In reply to Kurt Jaeger from comment #88) I'm running with SMT disabled in BIOS still. I configured it like this while testing/debugging this issue. I will make sure that I have the latest BIOS installed on this machine, and then enable SMT and continue testing. Best, Ed
(In reply to Kurt Jaeger from comment #88)
The deadlock I was running into with net/samba47 builds would happen with and without SMT enabled. With the patch at https://reviews.freebsd.org/D14347 I can build net/samba47 reliably with all cores running full out, on a zero-loaded machine or on a machine that is maxed out with other processes and pressure on RAM. I think there were other unrelated things going on with the platform that now all seem to be resolved for me: https://lists.freebsd.org/pipermail/freebsd-stable/2018-February/088383.html

CPU: AMD Ryzen 5 1600 Six-Core Processor (3194.07-MHz K8-class CPU)
  Origin="AuthenticAMD"  Id=0x800f11  Family=0x17  Model=0x1  Stepping=1
  Features=0x178bfbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CLFLUSH,MMX,FXSR,SSE,SSE2,HTT>
  Features2=0x7ed8320b<SSE3,PCLMULQDQ,MON,SSSE3,FMA,CX16,SSE4.1,SSE4.2,MOVBE,POPCNT,AESNI,XSAVE,OSXSAVE,AVX,F16C,RDRAND>
  AMD Features=0x2e500800<SYSCALL,NX,MMX+,FFXSR,Page1GB,RDTSCP,LM>
  AMD Features2=0x35c233ff<LAHF,CMP,SVM,ExtAPIC,CR8,ABM,SSE4A,MAS,Prefetch,OSVW,SKINIT,WDT,TCE,Topology,PCXC,PNXC,DBE,PL2I,MWAITX>
  Structured Extended Features=0x209c01a9<FSGSBASE,BMI1,AVX2,SMEP,BMI2,RDSEED,ADX,SMAP,CLFLUSHOPT,SHA>
  XSAVE Features=0xf<XSAVEOPT,XSAVEC,XINUSE,XSAVES>
  AMD Extended Feature Extensions ID EBX=0x7<CLZERO,IRPerf,XSaveErPtr>
  SVM: NP,NRIP,VClean,AFlush,DAssist,NAsids=32768
  TSC: P-state invariant, performance statistics
real memory  = 34359738368 (32768 MB)
avail memory = 33246248960 (31706 MB)
Event timer "LAPIC" quality 100
ACPI APIC Table: <ALASKA A M I >
FreeBSD/SMP: Multiprocessor System Detected: 12 CPUs
FreeBSD/SMP: 1 package(s) x 6 core(s) x 2 hardware threads
random: unblocking device.
Firmware Warning (ACPI): Optional FADT field Pm2ControlBlock has valid Length but zero Address: 0x0000000000000000/0x1 (20171214/tbfadt-796)
ioapic0 <Version 2.1> irqs 0-23 on motherboard
ioapic1 <Version 2.1> irqs 24-55 on motherboard
SMP: AP CPU #7 Launched!
SMP: AP CPU #11 Launched!
SMP: AP CPU #10 Launched!
SMP: AP CPU #1 Launched!
SMP: AP CPU #5 Launched!
SMP: AP CPU #2 Launched!
SMP: AP CPU #8 Launched!
SMP: AP CPU #3 Launched!
SMP: AP CPU #9 Launched!
SMP: AP CPU #6 Launched!
SMP: AP CPU #4 Launched!
I just confirmed this bug happens on my Epyc system as well. The compile hung on the first try. A reboot with the patch from https://reviews.freebsd.org/D14347 applied fixed the issue.

SuperMicro H11SSL-i board with Epyc CPU:

CPU: AMD EPYC 7281 16-Core Processor (2100.06-MHz K8-class CPU)
  Origin="AuthenticAMD"  Id=0x800f12  Family=0x17  Model=0x1  Stepping=2
  Features=0x178bfbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CLFLUSH,MMX,FXSR,SSE,SSE2,HTT>
  Features2=0x7ed8320b<SSE3,PCLMULQDQ,MON,SSSE3,FMA,CX16,SSE4.1,SSE4.2,MOVBE,POPCNT,AESNI,XSAVE,OSXSAVE,AVX,F16C,RDRAND>
  AMD Features=0x2e500800<SYSCALL,NX,MMX+,FFXSR,Page1GB,RDTSCP,LM>
  AMD Features2=0x35c233ff<LAHF,CMP,SVM,ExtAPIC,CR8,ABM,SSE4A,MAS,Prefetch,OSVW,SKINIT,WDT,TCE,Topology,PCXC,PNXC,DBE,PL2I,MWAITX>
  Structured Extended Features=0x209c01a9<FSGSBASE,BMI1,AVX2,SMEP,BMI2,RDSEED,ADX,SMAP,CLFLUSHOPT,SHA>
  XSAVE Features=0xf<XSAVEOPT,XSAVEC,XINUSE,XSAVES>
  AMD Extended Feature Extensions ID EBX=0x7<CLZERO,IRPerf,XSaveErPtr>
  SVM: NP,NRIP,VClean,AFlush,DAssist,NAsids=32768
  TSC: P-state invariant, performance statistics
real memory  = 34359738368 (32768 MB)
avail memory = 33210638336 (31672 MB)
Event timer "LAPIC" quality 100
ACPI APIC Table: < >
FreeBSD/SMP: Multiprocessor System Detected: 32 CPUs
FreeBSD/SMP: 1 package(s) x 16 core(s) x 2 hardware threads
A commit references this bug:

Author: kib
Date: Wed Feb 14 00:31:45 UTC 2018
New revision: 329254
URL: https://svnweb.freebsd.org/changeset/base/329254

Log:
  Ensure memory consistency on COW.

  From the submitter description:

  The process is forked transitioning a map entry to COW
  Thread A writes to a page on the map entry, faults, updates the pmap to
    writable at a new phys addr, and starts TLB invalidations...
  Thread B acquires a lock, writes to a location on the new phys addr,
    and releases the lock
  Thread C acquires the lock, reads from the location on the old phys addr...
  Thread A ...continues the TLB invalidations which are completed
  Thread C ...reads from the location on the new phys addr, and releases
    the lock

  In this example Thread B and C [lock, use and unlock] properly and
  neither own the lock at the same time. Thread A was writing somewhere
  else on the page and so never had/needed the lock. Thread C sees a
  location that is only ever read|modified under a lock change beneath
  it while it is the lock owner.

  To fix this, perform the two-stage update of the copied PTE. First,
  the PTE is updated with the address of the new physical page with
  copied content, but in read-only mode. The pmap locking and the page
  busy state during PTE update and TLB invalidation IPIs ensure that any
  writer to the page cannot upgrade the PTE to the writable state until
  all CPUs updated their TLB to not cache old mapping. Then, after the
  busy state of the page is lifted, the faults for write can proceed and
  do not violate the consistency of the reads.

  The change is done in vm_fault because most architectures do need IPIs
  to invalidate remote TLBs. More, I think that hardware guarantees of
  atomicity of the remote TLB invalidation are not enough to prevent the
  inconsistent reads of non-atomic reads, like multi-word accesses
  protected by a lock. So instead of modifying each pmap invalidation
  code, I did it there.

  Discovered and analyzed by: Elliott.Rabe@dell.com
  Reviewed by: markj
  PR: 225584 (appeared to have the same cause)
  Tested by: Elliott.Rabe@dell.com, emaste, Mike Tancsa <mike@sentex.net>, truckman
  Sponsored by: The FreeBSD Foundation
  MFC after: 1 week
  Differential revision: https://reviews.freebsd.org/D14347

Changes:
  head/sys/vm/vm_fault.c
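For readers who want the shape of the fix without opening vm_fault.c, here is a heavily simplified sketch of the two-stage update the log describes. Every type and helper name below is a stand-in invented for illustration; none of them are real FreeBSD kernel interfaces.

typedef unsigned long vaddr_t;
struct vm_page;

void page_busy(struct vm_page *);	/* holds off write faults on the page */
void page_unbusy(struct vm_page *);
void copy_page(struct vm_page *dst, struct vm_page *src);
void pte_install_readonly(vaddr_t, struct vm_page *);
void tlb_invalidate_remote(vaddr_t);	/* IPIs; returns after all CPUs ack */

void
cow_fault_sketch(vaddr_t va, struct vm_page *old_pg, struct vm_page *new_pg)
{
	page_busy(new_pg);
	copy_page(new_pg, old_pg);
	/*
	 * Stage 1: point the PTE at the copied page, but read-only, so no
	 * CPU can write the new page while another CPU may still have the
	 * old translation cached in its TLB.
	 */
	pte_install_readonly(va, new_pg);
	tlb_invalidate_remote(va);
	/*
	 * Stage 2: only once every TLB has dropped the old mapping is the
	 * busy state lifted, letting write faults upgrade the PTE to
	 * writable without exposing inconsistent reads.
	 */
	page_unbusy(new_pg);
}

The key design point is the stage ordering: the copied page cannot become writable anywhere until every CPU has dropped its cached translation of the old page, which closes the window in which threads B and C in the example could observe two different physical pages through one lock.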
All working great since this commit, for this particular bug.