Bug 204178 - stress2 on arm64 thr1 hangs after printing pthread_join error
Summary: stress2 on arm64 thr1 hangs after printing pthread_join error
Status: Closed Unable to Reproduce
Alias: None
Product: Base System
Classification: Unclassified
Component: threads (show other bugs)
Version: CURRENT
Hardware: arm64 Any
Importance: --- Affects Only Me
Assignee: Andrew Turner
Depends on:
Reported: 2015-10-31 19:50 UTC by Andrew Turner
Modified: 2017-01-10 09:25 UTC (History)
5 users (show)

See Also:


Description Andrew Turner freebsd_committer 2015-10-31 19:50:47 UTC
thr1: pthread_join(195): No such file or directory

Happens on the 48-core Cavium ThunderX in the cluster.
Comment 1 Andrew Turner freebsd_committer 2015-11-26 12:36:33 UTC
I'm not sure if the message is related to the hang. I've seen each independently of the other.

It seems the process is stuck in the kernel waiting on a mutex:

# procstat -t 19405
  PID    TID COMM             TDNAME           CPU  PRI STATE   WCHAN    
19405 100607 thr1             -                 -1  120 sleep   umtxn     
19405 101334 thr1             -                 -1  152 sleep   umtxn     

# procstat -k 19405
  PID    TID COMM             TDNAME           KSTACK                       
19405 100607 thr1             -                mi_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_lock_umutex __umtx_op_wait_umutex do_el0_sync 
19405 101334 thr1             -                mi_switch sleepq_catch_signals sleepq_wait_sig _sleep umtxq_sleep do_lock_umutex __umtx_op_wait_umutex do_el0_sync
Comment 2 Peter Holm freebsd_committer 2015-11-28 07:24:51 UTC
Could you try to run this test, in order to narrow down the test scenario a bit?
I have tried this on amd64/i386 without finding any issues.

Place this in stress2/misc as thr1.sh and run it:


. ../default.cfg

export runRUNTIME=1h
export thr1LOAD=100
export TESTPROGS="
"

(cd ..; ./testcases/run/run $TESTPROGS)

Thank you.
Comment 3 Andrew Turner freebsd_committer 2015-11-28 10:31:07 UTC
With this script I can reproduce the issue. It can take a few hours to show up so I increased the runtime to 24 hours.
Comment 4 Peter Holm freebsd_committer 2015-11-28 11:18:49 UTC
So the scenario is creating many threads, which return almost immediately, all during VM pressure.
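The load described above can be sketched as follows. This is a minimal illustration of the thread-churn pattern, not the actual stress2 thr1.c (linked in comment #9); the names run_thr_load and thr_routine are mine.

```c
/* Sketch of the thr1-style load: the main thread repeatedly creates
 * short-lived threads and joins them.  In the failure mode from this
 * report, one of these pthread_join() calls reported an error and the
 * process then hung in the kernel on a umtx. */
#include <pthread.h>
#include <stddef.h>

static void *
thr_routine(void *arg)
{
	return (arg);		/* return almost immediately */
}

/* Run `rounds` batches of create/join; returns 0 on success or the
 * first pthread error code encountered. */
int
run_thr_load(int rounds)
{
	enum { NTHREADS = 5 };
	pthread_t tid[NTHREADS];
	int e, i, r;

	for (r = 0; r < rounds; r++) {
		for (i = 0; i < NTHREADS; i++) {
			e = pthread_create(&tid[i], NULL, thr_routine, NULL);
			if (e != 0)
				return (e);
		}
		for (i = 0; i < NTHREADS; i++) {
			e = pthread_join(tid[i], NULL);
			if (e != 0)
				return (e);
		}
	}
	return (0);
}
```

On healthy hardware this loop runs indefinitely; the stress2 script adds VM pressure alongside it.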
Comment 5 Andrew Turner freebsd_committer 2015-11-28 11:41:17 UTC
I have some code that inspects the state when the issue shows up. Below is a dump of the registers of the only thread in the process.

  x0 = 000000004048fd50
  x1 = 0000000000000011
  x2 = 0000000000000000
  x3 = 0000000000000000
  x4 = 0000000000000000
  x5 = 0000000000000001
  x6 = 0000000000000001
  x7 = 000000000000007f
  x8 = 00000000000001c6
  x9 = 0000000080000000
 x10 = 00000000000187dd
 x11 = 00000000000187dd
 x12 = 0000000000000001
 x13 = 000000004048fcd8
 x14 = 00000000000187dd
 x15 = 0000000000000000
 x16 = 0000000040485df8
 x17 = 00000000404fe8dc
 x18 = 0000000040801530
 x19 = 000000004048fd50
 x20 = 00000000000187dd
 x21 = 0000000040490000
 x22 = 000000004048fd50
 x23 = 0000000000412000
 x24 = 0000000000000000
 x25 = 00000000004014f0
 x26 = 0000000000000000
 x27 = 0000000000000000
 x28 = 0000000000000000
 x29 = 0000007fffffee50
  lr = 0000000040466eb0
  sp = 0000007fffffee40
 elr = 00000000404fe8e0
spsr = 90000000

I looked at the data passed to the kernel in x0 and found the owner of the lock to be the current thread. I also looked at a stack trace and found we entered the lock by the following calls:

_pthread_create -> _thr_alloc -> __thr_umutex_lock -> _umtx_op

The lock in _thr_alloc is, as far as I can tell, acquired at only one place within this function, and it is protecting _tcb_ctor.
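The ownership check described above can be sketched like this. A umutex stores the owner's TID in its m_owner word, with the UMUTEX_CONTESTED bit (0x80000000, which also appears as x9 in the register dump above) set once a waiter exists; the helper name umutex_owned_by is illustrative, not libthr API.

```c
/* Sketch: detect self-ownership of a umutex from its owner word.
 * m_owner holds the owner's TID, possibly with the contested bit set. */
#include <stdbool.h>
#include <stdint.h>

#define UMUTEX_CONTESTED	0x80000000U	/* top bit: waiters exist */

static bool
umutex_owned_by(uint32_t m_owner, uint32_t tid)
{
	return ((m_owner & ~UMUTEX_CONTESTED) == tid);
}
```

Applied to the dump above: if the word at x0 masks down to the current thread's TID, the thread is sleeping on a lock it already owns, which is exactly the reported state.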
Comment 6 Konstantin Belousov freebsd_committer 2015-11-28 13:14:14 UTC
(In reply to Andrew Turner from comment #5)
Could you instrument the tcb_lock to add atomic counters for acquires and releases? Then we would see the generation counts for acq/rel on tcb_lock, in particular, whether something was missed at unlock, or e.g. a thread was terminated without unlock (weird).
Comment 7 Andrew Turner freebsd_committer 2015-11-29 22:35:52 UTC
It doesn't seem to be specific to any one lock. I've seen similar hangs with just thr1 and no swap running, and have seen the same symptom using rwlocks.
Comment 8 Konstantin Belousov freebsd_committer 2015-11-30 09:43:41 UTC
(In reply to Andrew Turner from comment #7)
Then it sounds as if the issue is in suword() or casueword().  I recently re-read the arm64 implementations, but did not note anything obviously wrong.

It could be a hardware erratum, after all.  E.g., might stxr return 0 but still fail the store?
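To make the hypothesis concrete: a casueword-style lock acquire is a compare-and-swap of the owner word from 0 to the caller's TID, built on arm64 from an ldxr/stxr pair. The sketch below models (in portable C, not asm) a correct CAS next to one where stxr reports success while the store is lost; it is a thought experiment, not FreeBSD code.

```c
/* Sketch of the suspected erratum.  cas_ok models a correct LL/SC
 * compare-and-swap; cas_buggy models an stxr that returns 0
 * ("success") while the store never becomes visible.  With cas_buggy,
 * a thread believes it acquired the lock but the owner word is
 * unchanged, so later lock/unlock state is inconsistent and a thread
 * can end up sleeping forever on a umutex, matching the hang. */
#include <stdbool.h>
#include <stdint.h>

static bool
cas_ok(uint32_t *addr, uint32_t expect, uint32_t newval)
{
	if (*addr != expect)
		return (false);
	*addr = newval;		/* store succeeded and is visible */
	return (true);
}

static bool
cas_buggy(uint32_t *addr, uint32_t expect, uint32_t newval)
{
	if (*addr != expect)
		return (false);
	(void)newval;		/* stxr claimed success; store lost */
	return (true);
}
```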
Comment 9 martin 2015-11-30 19:16:45 UTC
(In reply to Andrew Turner from comment #5)
Aren't the locks in _thr_alloc only used by threads that call pthread_create?  For the code at https://svnweb.freebsd.org/base/user/pho/stress2/testcases/thr1/thr1.c, this only happens sequentially from the main thread, so that suggests something went wrong releasing the lock on the userland side (it shouldn't need to trap into the kernel).
Comment 10 Konstantin Belousov freebsd_committer 2015-12-01 08:45:22 UTC
(In reply to martin from comment #9)
Yes, the garbage collection code that gc's freed thread structures is only called from thr_alloc(), which in the context of thr1.c means that only the main thread acquires the tcb_lock.  This, together with the note that other locks are similarly affected, mostly reinforces my suspicion of the ll/sc hardware implementation.
Comment 11 Ed Maste freebsd_committer 2016-06-22 14:33:18 UTC
Andy, has this been observed on Pass 1.1 only (and not tested on Pass 2.0)?
Comment 12 Andrew Turner freebsd_committer 2016-06-22 14:38:44 UTC
I haven't tried on Pass 2.0.
Comment 13 Andrew Turner freebsd_committer 2016-10-10 14:09:40 UTC
This may be related to Cavium erratum 26026. If so it only affects ThunderX pass 1.x CPUs.

Comment 14 Andrew Turner freebsd_committer 2017-01-10 09:25:41 UTC
I'm unable to reproduce this on later hardware without the listed erratum; as such, I'm closing this under the assumption that it is a known hardware bug in pre-production hardware.