Bug 192889 - accept4 socket hangs in CLOSED (memcached)
Summary: accept4 socket hangs in CLOSED (memcached)
Status: Closed FIXED
Alias: None
Product: Base System
Classification: Unclassified
Component: kern (show other bugs)
Version: 10.0-STABLE
Hardware: amd64 Any
Importance: --- Affects Some People
Assignee: freebsd-bugs (Nobody)
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2014-08-21 14:40 UTC by Arne Claus
Modified: 2015-02-23 08:45 UTC (History)
4 users (show)

See Also:


Attachments
Logs from the test (601.83 KB, image/png)
2014-08-22 05:58 UTC, Arne Claus

Description Arne Claus 2014-08-21 14:40:08 UTC
We found that our memcached instances on FreeBSD 10 were rejecting connections after being under heavy load. The machines showed several thousand connections stuck in CLOSED (sometimes CLOSE_WAIT).

We could reproduce the issue several times using a load script.
There is a corresponding memcached bug with more information on the tests:
https://code.google.com/p/memcached/issues/detail?id=375 

We assume that this issue is FreeBSD 10 related because:
- The memcached source does not seem to do anything wrong (at first glance)
- The issue does not seem to happen when switching from accept4 to accept (tests pending)
- According to our admins the connections were closed after enabling net.inet.tcp.log_debug
- According to our admins this bug did not appear on FreeBSD 9 machines

We are still running tests to verify that this bug only happens on FreeBSD 10 and is accept4 related; the current tests point in this direction.
Note that accept4 is called with the SOCK_NONBLOCK flag.
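For reference, here is a minimal, self-contained sketch of the accept4() call in question (this is illustrative code, not memcached's actual source): it accepts a connection with SOCK_NONBLOCK and verifies that the accepted descriptor comes back with O_NONBLOCK already set, with no separate fcntl() round trip.

```c
#define _GNU_SOURCE             /* for accept4() on glibc */
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <fcntl.h>
#include <unistd.h>

/* Returns 0 if the descriptor returned by accept4(..., SOCK_NONBLOCK)
 * has O_NONBLOCK set, -1 on any failure. */
int demo_accept4_nonblock(void)
{
    struct sockaddr_in addr = { 0 };
    socklen_t len = sizeof(addr);
    int lfd, cfd, afd, flags, rc = -1;

    /* Listener on 127.0.0.1, ephemeral port. */
    lfd = socket(AF_INET, SOCK_STREAM, 0);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = 0;
    if (lfd < 0 || bind(lfd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
        listen(lfd, 1) < 0 ||
        getsockname(lfd, (struct sockaddr *)&addr, &len) < 0)
        return -1;

    /* Connect to ourselves so there is something to accept. */
    cfd = socket(AF_INET, SOCK_STREAM, 0);
    if (cfd < 0 || connect(cfd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        return -1;

    /* The call in question: accept4() with SOCK_NONBLOCK. */
    afd = accept4(lfd, NULL, NULL, SOCK_NONBLOCK);
    if (afd >= 0) {
        flags = fcntl(afd, F_GETFL);
        rc = (flags != -1 && (flags & O_NONBLOCK)) ? 0 : -1;
        close(afd);
    }
    close(cfd);
    close(lfd);
    return rc;
}
```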

Memcached version: 1.4.20
FreeBSD version:  10.0-STABLE #0 r269376
Comment 1 Arne Claus 2014-08-22 05:58:00 UTC
Created attachment 146144 [details]
Logs from the test

accept() shows the problem, too.
Comment 2 Arne Claus 2014-08-22 07:33:48 UTC
FreeBSD 9 shows a completely different behavior during the test.
While on FreeBSD 10 the CLOSED and CLOSE_WAIT numbers go up and down all the time, they stay pretty much stable on FreeBSD 9.

Because of this I don't think FreeBSD 9 will show the issue at all, which is in line with what our admins observed.
Comment 3 Hiren Panchasara freebsd_committer 2014-08-22 15:45:18 UTC
You see a lot of connections in CLOSED/CLOSE_WAIT and they go away when you close the memcached application. That makes me wonder whether this is an application bug leaking FDs and not close()ing some sessions correctly.

But if enabling net.inet.tcp.log_debug makes the issue go away, it looks like a race in the TCP code, where the extra logging makes things slow enough that you no longer see the problem.

My 2 cents.
Comment 4 Arne Claus 2014-08-25 07:26:40 UTC
As we managed to reproduce a similar problem using our test on FreeBSD 9 (as reported in the memcached ticket), I currently think it might be something like a race condition in the application, too.

What's strange though is that the responsible code looks mostly single threaded, with the exception of the socket being used asynchronously. So theoretically it could be a race in accept(), too.

The only thing I could think of on memcached's side is a race with libevent, so that close and open commands somehow end up in the wrong order. But I don't know the code well enough to pin that down. Furthermore, this does not really explain why the connections are stuck in CLOSE_WAIT, because in that case they wouldn't get closed at all.
Comment 5 Arne Claus 2014-08-28 06:47:06 UTC
We did another test with FreeBSD 9, memcached 1.4.20 and libevent 2.x.
This time the server did not get stuck.

We do see different behavior on the two servers though.
On FreeBSD 9 we see the CLOSED and CLOSE_WAIT values staying at one number (0 or an initial, low value) for a long time. On FreeBSD 10 they are constantly increasing and decreasing.
We poll these values at intervals of 1 second, so it looks like FreeBSD 10 holds connections in these states a lot longer?

What we also see on both systems (in much higher quantities on FreeBSD 10) is that connections get stuck in CLOSE_WAIT and move to CLOSED after quite a long time (about two hours).

Can someone explain this behavior or describe the differences between the two systems in that area?

It might help solve the bug or find a faster way to reproduce it.
I assume that there is a race condition - involving the libevent code of memcached - which just happens more often on FreeBSD 10 due to the seemingly slower handling of closed connections.
Comment 6 Hiren Panchasara freebsd_committer 2014-09-03 19:21:28 UTC
Just for clarification, can you please list all the variables in play here?

FreeBSD, memcached and libevent.

Can you please summarize which combinations (with versions) of the above 3 work for you and which do not?
Comment 7 Arne Claus 2014-09-08 06:25:38 UTC
FreeBSD 9.2-RELEASE-p1 #0 r258345
memcached 1.4.17
libevent 1.4.14b
--- No Problems

FreeBSD 9.2-RELEASE-p1 #0 r258345
memcached 1.4.20
libevent 1.4.14b
--- Stuck

FreeBSD 9.2-RELEASE-p1 #0 r258345
memcached 1.4.20
libevent 2.0.21
--- No Problems

FreeBSD 10.0-STABLE #0 r269376
memcached 1.4.20
libevent 2.0.21
--- Stuck

For more test results and remarks see the memcached issue https://code.google.com/p/memcached/issues/detail?id=375

It already seems quite clear that the problem is not bound by the operating system's limits, as we can reproduce it with smaller settings, too.
Comment 8 Craig Lewis 2014-09-23 17:50:37 UTC
I noticed that libmemcached was last updated in 2012. There are several bug fixes in that project that might be relevant to this issue.

I can confirm that I'm having this issue, using FreeBSD 10, memcached 1.4.17, and libmemcached 1.0.7. I'm connecting to memcached from PHP, using pecl-memcached, which links against libmemcached.


Is there anybody experiencing this issue that is not using libmemcached?
Comment 9 Craig Lewis 2014-09-27 15:30:58 UTC
Mikhail Tsatsenko gave me a build of libmemcached-1.0.18.

I'm still able to reproduce this issue, using the test_memcache script posted on code.google.com.

[clewis@staging0 ~]$ pkg info | grep memcached
libmemcached-1.0.18            C and C++ client library to the memcached server
memcached-1.4.20_2             High-performance distributed memory object cache system
pecl-memcached-2.2.0_2         PHP extension for interfacing with memcached via libmemcached library
Comment 10 Michael HENNETON 2014-12-08 13:17:45 UTC
(In reply to Arne Claus from comment #7)
> FreeBSD 9.2-RELEASE-p1 #0 r258345
> memcached 1.4.17
> libevent 1.4.14b
> --- No Problems
> 
> FreeBSD 9.2-RELEASE-p1 #0 r258345
> memcached 1.4.20
> libevent 1.4.14b
> --- Stuck
> 
> FreeBSD 9.2-RELEASE-p1 #0 r258345
> memcached 1.4.20
> libevent 2.0.21
> --- No Problems
> 
> FreeBSD 10.0-STABLE #0 r269376
> memcached 1.4.20
> libevent 2.0.21
> --- Stuck
> 
> For more test results and remarks see the memcached issue
> https://code.google.com/p/memcached/issues/detail?id=375
> 
> It is already quite clear that the problem is not bound to the operating
> system's limits as we can reproduce the problem with smaller settings, too.

Does this problem still occur on 10.1-RELEASE ?

Regards,
Comment 11 Arne Claus 2014-12-09 08:58:24 UTC
We haven't tested it yet on FreeBSD 10.1, but there were some updates on the memcached ticket by other users having the same problem.
One thing that might be interesting:

"When it stuck - in kernel log 
kernel: sonewconn: pcb 0xfffff80328c68930: Listen queue overflow: 1 already in queue awaiting acceptance (14114 occurrences)
and this line disappears from netstat -Lan
tcp4  0/0/1024       192.168.0.32.11211"
Comment 12 Arne Claus 2015-01-07 08:07:36 UTC
We're now running tests with the following configuration

FreeBSD 10.1-RELEASE (GENERIC) #0 r274401
memcached 1.4.22
libevent 2.0.21-stable

As of now everything works fine, so the situation has at least improved.
As I did not see any directly related changes in the FreeBSD changelog (the TCP fixes seem not to be relevant for this case), my current assumption is that changes in memcached 1.4.22 fixed the issue. I did not test 1.4.22 on FreeBSD 10.0 though.

As the bug shows signs of a race condition, the threading fixes in memcached 1.4.22 in particular are good candidates for having fixed the issue: https://code.google.com/p/memcached/wiki/ReleaseNotes1422#Memcached_1.4.22_Release_Notes
Comment 13 Arne Claus 2015-01-13 11:19:30 UTC
Further tests did not reproduce the error, so chances are high that either FreeBSD 10.1 or memcached 1.4.22 fixed it. As we don't know what caused the problem, it is not 100% certain though.

We've upgraded our servers showing the problem to see if it disappears in production, too.
Comment 14 Arne Claus 2015-01-15 13:47:49 UTC
We tested memcached 1.4.20 again on FreeBSD 10.1 and this version does not show any hangs either. So at the moment it looks like something on the FreeBSD side caused the issue.

It would be great if someone with a little more knowledge of networking could post an overview of what changed in the network stack / accept chain in FreeBSD 10.1, so we can pin down the problem a bit more.
The current state is a little uncomfortable, as memcached now seems to run stably but nobody knows why.
The FreeBSD changelog only lists some security issues in the TCP/IP stack, but those do not seem very related (I did not look at the code though).
Comment 15 mp39590 2015-02-15 06:43:25 UTC
The reason for this bug lies not in the network stack, but in the capabilities subsystem.

Memcached consists of a dispatcher thread and several worker threads, which communicate through a pipe. For example, when a new connection is accepted, the dispatcher writes 'c' to the pipe of a selected worker thread (it rotates through them in round-robin fashion); the worker thread then pops the connection off the queue and serves it.

Due to a slight race condition in capabilities, the kevent() mechanism may sometimes return spurious ENOTCAPABLE errors for descriptors. This makes libevent abort the loop that works with the connections and return. Memcached doesn't expect this to happen, and the worker thread silently returns[1] and dies. You can see this with the procstat command by comparing the thread count in the normal and failing situations - in the latter you will be one thread short.

The dispatcher is not aware of this catastrophic event, and therefore continues to write 'c's about new connections to the pipe of that already dead thread, but of course no one will serve those connections, so they are left hanging.

And the reason you see this as a massive amount of CLOSED/CLOSE_WAIT connections is simply that the client, by timeout or otherwise, decided to close() its connection. The network stack receives the FIN packet and expects our application to issue close() on the descriptor, but since the thread is already dead, that will never happen.
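The dispatcher/worker pipe pattern and its failure mode can be sketched as follows. This is illustrative code, not memcached's actual source: the worker here handles one 'c' notification and then returns, simulating libevent's event loop aborting on a spurious error, after which the dispatcher's later notifications are stranded in the pipe forever.

```c
#include <pthread.h>
#include <unistd.h>
#include <fcntl.h>

static int notify_pipe[2];

/* Worker: serves exactly one notification, then "dies" (simulating the
 * event loop aborting on a spurious ENOTCAPABLE from kevent()). */
static void *worker(void *arg)
{
    char cmd;
    (void)arg;
    if (read(notify_pipe[0], &cmd, 1) == 1 && cmd == 'c')
        ; /* would pop a connection off the queue and serve it */
    return NULL; /* thread exits; nobody reads this pipe anymore */
}

/* Returns the number of 'c' notifications left stranded in the pipe,
 * or -1 on setup failure. */
int demo_dead_worker(void)
{
    pthread_t tid;
    char buf[16];
    ssize_t n;
    int stranded = 0;

    if (pipe(notify_pipe) < 0)
        return -1;
    pthread_create(&tid, NULL, worker, NULL);

    /* Dispatcher: announce two new connections. */
    write(notify_pipe[1], "c", 1);
    pthread_join(tid, NULL);       /* worker is gone after the first one */
    write(notify_pipe[1], "c", 1); /* this one will never be served */

    /* Count unread notifications without blocking. */
    fcntl(notify_pipe[0], F_SETFL, O_NONBLOCK);
    while ((n = read(notify_pipe[0], buf, sizeof(buf))) > 0)
        stranded += (int)n;
    close(notify_pipe[0]);
    close(notify_pipe[1]);
    return stranded;
}
```

In the real bug, each stranded notification corresponds to an accepted connection that no thread will ever close(), which is exactly what piles up as CLOSE_WAIT/CLOSED in netstat.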

This bug was addressed by Mateusz in r273137[2].

[1] - https://github.com/memcached/memcached/blob/master/thread.c#L369
[2] - https://svnweb.freebsd.org/base?view=revision&revision=273137
Comment 16 Arne Claus 2015-02-23 08:45:22 UTC
Thanks for the detailed explanation.
I guess this can now be safely set to close.