Bug 277584 - Can't connect with SSH after changing net.inet.udp.recvspace on FreeBSD 13.2
Status: New
Alias: None
Product: Base System
Classification: Unclassified
Component: conf
Version: 13.2-STABLE
Hardware: amd64 Any
Importance: --- Affects Only Me
Assignee: freebsd-bugs (Nobody)
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2024-03-08 19:23 UTC by Claude Gilbert
Modified: 2024-05-07 18:59 UTC
CC List: 2 users

See Also:


Description Claude Gilbert 2024-03-08 19:23:03 UTC
When I increase net.inet.udp.recvspace on a FreeBSD 13.2 server above roughly 1.86 MB, I can no longer SSH into that server.
The current SSH session stays alive, but if I initiate a second, simultaneous SSH connection, I get the following error message:

kex_exchange_identification: Connection closed by remote host
Connection closed by 172.31.29.181 port 22

Also, if I close the current SSH session, I am locked out of the server.
In /var/log/messages, I notice the following:

Jan 11 16:57:23 server-1 sshd[14120]: fatal: bad addr or host: <NULL> (Name does not resolve)
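
My assumption (I have not verified this in the sshd source) is that sshd fails while trying to reverse-resolve the client address: the resolver needs a UDP socket, and creating UDP sockets is exactly what no longer works. If that is right, any resolver query from the surviving session should fail the same way, e.g. with drill(1) from the base system:

# drill -x 172.31.29.181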

I then checked the ena0 interface:

# ifconfig ena0 -v
ifconfig: socket(family 2,SOCK_DGRAM): No buffer space available
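
This made me compare the two sysctls side by side (on a stock 13.2 install, kern.ipc.maxsockbuf defaults to 2097152, i.e. 2 MB):

# sysctl kern.ipc.maxsockbuf net.inet.udp.recvspace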

Finally, I checked the network memory buffer space:

# netstat -m
3984/2376/6360 mbufs in use (current/cache/total)
0/1270/1270/1004997 mbuf clusters in use (current/cache/total/max)
0/1270 mbuf+clusters out of packet secondary zone in use (current/cache)
3982/1352/5334/502498 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/148888 9k jumbo clusters in use (current/cache/total/max)
0/0/0/83749 16k jumbo clusters in use (current/cache/total/max)
16924K/8542K/25466K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for mbufs delayed (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters delayed (4k/9k/16k)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0 sendfile syscalls
0 sendfile syscalls completed without I/O request
0 requests for I/O initiated by sendfile
0 pages read by sendfile as part of a request
0 pages were valid at time of a sendfile request
0 pages were valid and substituted to bogus page
0 pages were requested for read ahead by applications
0 pages were read ahead by sendfile
0 times sendfile encountered an already busy page
0 requests for sfbufs denied
0 requests for sfbufs delayed

As far as I can tell, the network memory buffers are nowhere near full: all of the denied/delayed counters are zero, so the "No buffer space available" error does not appear to come from mbuf exhaustion.
I also found that if I increase kern.ipc.maxsockbuf to 3 MB, I can raise net.inet.udp.recvspace to 2 MB and still SSH into the server. Apparently kern.ipc.maxsockbuf must be set somewhat higher than net.inet.udp.recvspace, which makes sense: as I understand it, every new UDP socket tries to reserve recvspace bytes for its receive buffer, that reservation is capped by maxsockbuf, and when it cannot be granted, socket creation fails with ENOBUFS, which is exactly the error ifconfig printed.
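
The 1.86 MB threshold also seems consistent with how the kernel derives the actual cap. If I read sys/kern/uipc_sockbuf.c correctly, the limit applied to a reservation is not sb_max (kern.ipc.maxsockbuf) itself but sb_max_adj, which discounts per-mbuf bookkeeping overhead:

sb_max_adj = sb_max * MCLBYTES / (MSIZE + MCLBYTES)
           = 2097152 * 2048 / (256 + 2048)
           = 1864135 bytes (~1.86 MB)

That would explain why recvspace values up to about 1.86 MB still work with the default 2 MB maxsockbuf.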

The command I used to change the UDP receive buffer space:
# sysctl net.inet.udp.recvspace=<value>
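
For anyone who locks themselves out the same way, the order that worked for me was to raise kern.ipc.maxsockbuf first and only then raise net.inet.udp.recvspace (the values below are just the ones from my test above):

# sysctl kern.ipc.maxsockbuf=3145728
# sysctl net.inet.udp.recvspace=2097152

To make the change survive a reboot, the same values can go into /etc/sysctl.conf:

kern.ipc.maxsockbuf=3145728
net.inet.udp.recvspace=2097152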

Server specs:
FreeBSD server-1 13.2-RELEASE-p8 FreeBSD 13.2-RELEASE-p8 GENERIC amd64

I'm not sure whether this is a bug or expected behavior. This report follows up on the following discussion on the FreeBSD forums:
https://forums.freebsd.org/threads/cant-connect-with-ssh-after-changing-net-inet-udp-recvspace.91874/