I'm currently running FreeNAS 11 RC1. I've added a tunable setting kern.ipc.soacceptqueue to 2048 connections, but the value doesn't propagate to the jails on my system. Below are the errors I received saying the listen queue is full.
May 19 18:43:24 maverick kernel: sonewconn: pcb 0xfffff8035bef6740: Listen queue overflow: 193 already in queue awaiting acceptance (138 occurrences)
May 19 19:02:16 maverick kernel: sonewconn: pcb 0xfffff8035bef6740: Listen queue overflow: 193 already in queue awaiting acceptance (200 occurrences)
May 19 19:03:17 maverick kernel: sonewconn: pcb 0xfffff8035bef6740: Listen queue overflow: 193 already in queue awaiting acceptance (209 occurrences)
May 19 19:04:17 maverick kernel: sonewconn: pcb 0xfffff8035bef6740: Listen queue overflow: 193 already in queue awaiting acceptance (199 occurrences)
May 19 19:05:17 maverick kernel: sonewconn: pcb 0xfffff8035bef6740: Listen queue overflow: 193 already in queue awaiting acceptance (202 occurrences)
Here is the netstat output from my main FreeNAS system:
tcp4 0/0/2048 127.0.0.1.8542
tcp4 0/0/2048 127.0.0.1.8600
tcp4 0/0/2048 127.0.0.1.8500
tcp4 0/0/2048 127.0.0.1.8400
And the netstat output from my jail:
tcp4 0/0/128 192.168.0.20.12348
tcp6 0/0/128 *.51413
tcp4 0/0/128 *.51413
tcp4 0/0/128 *.9091
If I run sysctl kern.ipc.soacceptqueue in the jail, it shows the following:
# sysctl kern.ipc.soacceptqueue
kern.ipc.soacceptqueue: 2048
At least on vanilla FreeBSD, kern.ipc.soacceptqueue merely specifies an upper limit; it does not prevent an application from requesting a smaller backlog. Many applications use a hardcoded value such as 128 without checking whether a higher value would work. For details see the listen(2) and getsockopt(2) man pages.
I've checked multiple jails on my system, including CouchPotato, Plex, and Transmission, and all show the same maximum of 128, yet running sysctl kern.ipc.soacceptqueue in them reports 2048. I find it highly unlikely that three separate applications are each applying their own limits in the jails.
kern.ipc.soacceptqueue is a SYSCTL_PROC defined in sys/kern/uipc_socket.c without CTLFLAG_VNET, so it is currently not VIMAGE/VNET-aware.
That makes more sense. Is there a way around this, or a way to increase the connection queue in a jail that has VIMAGE enabled?
(In reply to john.leo from comment #4)
You could just patch sys/sys/socket.h and increase the value on the line "#define SOMAXCONN 128" (it is only used as the initial value for the sysctl).
Or, if you are curious enough, you can add the CTLFLAG_VNET flag to the declaration of soacceptqueue in sys/kern/uipc_socket.c and see whether it works or crashes :-)
That is, until SomeOne (TM) prepares a complete solution.
Created attachment 183099
make per-VNET soacceptqueue/somaxconn and numopensockets
An attempt to make the sysctls soacceptqueue (somaxconn) and numopensockets per-VNET/VIMAGE instead of global.
(In reply to john.leo from comment #4)
You may also try the attached patch. Beware: it is only compile-tested.
Thanks, would this be for the main OS itself or for use in the jail? Also, I'm having trouble finding that file within the OS; I couldn't find it at the path you provided.
(In reply to john.leo from comment #8)
The patch is for sys/kern/uipc_socket.c, that is, for the kernel. Jails do not have their own kernels; there is only one kernel in the running system. You need to rebuild and reinstall the kernel after applying the patch, then reboot the system to apply the changes.
That is, it's for the main OS, and the file is /usr/src/sys/kern/uipc_socket.c.
Thanks. For some reason /usr/src is showing as an empty directory on my system. I think I'd also like to set up a dev system to test this out rather than run it in production.
Making these variables per-VNET is not necessarily a good idea: by exceeding resources, a VNET-jail consumer could possibly DoS the system without the administrator having an easy way to prevent it. We need to be very careful here. If this is to go into HEAD, I'd hope there will be a way to cap the values, or at least to reject excessive requests by some metric.
(In reply to Bjoern A. Zeeb from comment #12)
These variables are currently global, but that does not mean the limits they impose are "global" in any way: the static u_int somaxconn is just the default for the per-socket backlog limit so->so_qlimit (in struct socket *so), and this change makes it possible to assign different defaults per jail.
Yes, increasing such a limit allows jailed root to get more space in the queue of not-yet-accepted sockets, but there are already plenty of ways to consume such resources (for example, by creating a listening socket and making tons of local connections). Perhaps this sysctl should be made read-only for jailed root, if possible.
V_numopensockets is purely informational.