`net.link.ether.inet.maxhold` controls how many packets the kernel will hold onto, per entry being resolved, while waiting for ARP/NDP resolution to occur. Since the sysctl was introduced, the default has been kept at 1, for compatibility.
An RFC 8305 Happy Eyeballs compliant DNS resolver will send out concurrent DNS queries for both A and AAAA records.
When the MAC address needed to reach the DNS resolver is not currently in the neighbor cache, the default maxhold of 1 means the first DNS query is dropped, replaced by the second. The resolver then has to retry that query to get its answer.
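To make the failure mode concrete, here is a toy Go sketch of the per-entry hold queue (the `holdQueue` type and its drop policy are illustrative assumptions for this model, not kernel code): with maxhold=1, the queue keeps only the newest packet while ARP is pending, so the A query is silently replaced by the AAAA query.

```go
package main

import "fmt"

// holdQueue is a toy model of the per-entry packet queue used while
// ARP/NDP resolution is pending. FreeBSD's net.link.ether.inet.maxhold
// bounds its length; when full, the oldest packet is dropped to admit
// the newest one (matching the behavior described in this report).
type holdQueue struct {
	maxhold int
	pkts    []string
}

func (q *holdQueue) enqueue(p string) {
	q.pkts = append(q.pkts, p)
	if len(q.pkts) > q.maxhold {
		q.pkts = q.pkts[1:] // drop the oldest held packet
	}
}

func main() {
	for _, maxhold := range []int{1, 16} {
		q := &holdQueue{maxhold: maxhold}
		// A Happy Eyeballs resolver sends both queries back to back,
		// before ARP resolution for the resolver's MAC completes.
		q.enqueue("A query")
		q.enqueue("AAAA query")
		fmt.Printf("maxhold=%d -> sent once ARP completes: %v\n", maxhold, q.pkts)
	}
}
```

With maxhold=1 only the AAAA query survives to be transmitted; with a larger value both queries go out once the neighbor entry resolves.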
If the dropped first query's qtype is for the only address family which exists in DNS, then the real answer is lost and there is a DNS timeout delay before connections can proceed.
If the DNS client looks up both A and AAAA records before returning, this leads to spurious 5-second delays in DNS resolution (assuming default timeouts, default system tuning, etc.).
The Golang DNS resolver performs concurrent DNS resolution by default. This led to strange delays in connecting to a server for my client tool, but only on FreeBSD. It took ... "a lot of debugging" to trace back through "it's DNS", "it's DNS being retried", "we're only seeing the second request go out", to "this is documented BSD behavior". The Go maintainers are now looking at how to adapt for FreeBSD compatibility (I filed https://github.com/golang/go/issues/43398 and traced the issue down there).
The equivalent control on Linux is /proc/sys/net/ipv4/neigh/default/unres_qlen. The Linux man pages document its default as 3, but the kernel's Documentation/networking/ip-sysctl.rst documents it as 101, and 101 is the real value.
If we could raise the default value of `net.link.ether.inet.maxhold` on FreeBSD, it would reduce spurious DNS failures in a world where there are increasing numbers of concurrent queries because application concurrency is increasing and standards-track RFCs are encouraging this behavior.
I've used sysctl.conf to raise this locally, but this is going to be a broad, hard-to-debug problem for many people. The problem was aggravated for me by using Jails with vnet, with the role-specific jail idle much of the time until I interacted with it.
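For reference, the local workaround is a one-line entry in /etc/sysctl.conf (16 is simply the value I settled on; any limit comfortably above the number of concurrent queries would do):

```
# Hold more packets per unresolved ARP/NDP entry so that concurrent
# Happy Eyeballs DNS queries aren't dropped (kernel default is 1).
net.link.ether.inet.maxhold=16
```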
The description in arp(4), second paragraph, talks about the limit being 1 without mentioning the sysctl; if this problem report is fixed, that text should be updated too.
If changing the kernel default is not acceptable then perhaps this should be a default uncommented entry in sbin/sysctl/sysctl.conf instead.
^Triage: assign. Also fix the Summary a bit because I could not parse it otherwise.
Thanks for taking this to root cause. I'll keep an eye on this PR and find appropriate folks to prod.
FWIW, when I look at a Mac mini provided by MacStadium for an open source project I'm involved with, I see "net.link.ether.inet.maxhold: 16"; I don't know whether that's the Darwin/macOS default or hosting-platform tuning.
For my own systems, I stuck a wet finger in the air and came up with "10" as a limit.
The Linux folks tried 3 and it clearly wasn't enough, since they went to 101.
Matching Darwin at 16 seems sane to me.
Raised [D28068](https://reviews.freebsd.org/D28068) with a proposed change.