Just had no end of trouble getting a newly commissioned system to communicate properly over ixl(4) interfaces. The problem turned out to be that LRO is enabled by default and interferes seriously with anything involving packet forwarding. I'm not going to be the only one bitten by this, so I propose that the default should be -lro. Observed on 12.2-RELEASE, but this almost certainly applies to other versions and probably to other interfaces as well.
To confirm: I experienced this issue with LRO on a host causing problems on two VirtualBox virtual machines, one acting as a gateway and one as a VPN endpoint. In both cases the VMs were rendered unusable because forwarding through them was effectively crippled: any client connected through the VPN tunnel or the gateway saw very low transfer speeds, on the order of 20 kbytes/second. The resolution was to restart networking on the host with the -lro flag (as an aside, vboxnet also needed to be restarted in order to ensure the host could network directly with the VMs).

Also note the information in the Intel driver readme for the interface, https://downloadmirror.intel.com/25160/eng/readme.txt:

  LRO
  ---
  LRO (Large Receive Offload) may provide Rx performance improvement.
  However, it is incompatible with packet-forwarding workloads. You
  should carefully evaluate the environment and enable LRO when
  possible.

  To enable:
    # ifconfig ixlX lro

  It can be disabled by using:
    # ifconfig ixlX -lro
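To keep LRO off across reboots, the usual place is rc.conf(5). A minimal sketch, assuming an ixl0 interface configured via DHCP (the interface name and address method are examples; append -lro to whatever ifconfig_<ifname> line you already have):

  # in /etc/rc.conf -- extra words on this line are passed straight to ifconfig(8)
  ifconfig_ixl0="DHCP -lro"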
Agreed that LRO should never be on by default. Furthermore, it should never be propagated from a trunk interface into vlan subinterfaces. See also #254596.
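For the vlan case, one way to verify whether LRO has been inherited from the trunk, and to clear it on the subinterface, is below (a sketch; the interface names are examples, and it assumes the vlan driver accepts the capability toggle):

  # ifconfig lagg0.100 | grep -o LRO
  LRO
  # ifconfig lagg0.100 -lro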
(In reply to Bob Bishop from comment #0)

What kind of packet-forwarding setup are we talking about here? The network stack code should disable LRO itself if forwarding is enabled in the IP stack: https://cgit.freebsd.org/src/tree/sys/net/iflib.c#n2901
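If I'm reading the linked iflib code right, that check keys off the host's IP forwarding state, so a quick way to see whether the automatic LRO disable would even trigger on a given box is to inspect the forwarding sysctls (a sketch; note that bridged or vnet-jail forwarding doesn't necessarily set these):

  # sysctl net.inet.ip.forwarding net.inet6.ip6.forwarding
  net.inet.ip.forwarding: 0
  net.inet6.ip6.forwarding: 0

With both at 0, as on a host that forwards via if_bridge(4) or inside vnet jails rather than routing in the host stack, the driver would presumably still do LRO.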
I've also had LRO issues recently, and it sounds like the same problem this ticket describes. Disabling LRO seems to fix the performance problem.

Sometimes jails on my server get really poor TCP performance, on the order of 10 kB/s. Looking at tcpdump output, it looks like occasionally the jail receives a packet that's bigger than the MTU (e.g., 2896 bytes, which is > 1500), which causes the jail to ignore the packet. Then the first ~MTU worth of data gets retransmitted, the jail ACKs it, and things are OK for a little while before the next >MTU packet arrives.

This is what it *looks like* in tcpdump output. I don't know what's actually on the wire, but I suspect that the physical packets are correctly sized, because when I disable LRO (which was on by default) the packets never seem to exceed the MTU and I get near line-speed TCP performance.

I have seen this on a server with a number of vnet jails. Each jail has an epair that's a member of a bridge that's on a vlan on a lagg. More graphically:

  em[02] -> lagg0 -> lagg0.31 -> bridge31 <- epairNa

That is, two e1000e cards are combined into lagg0, VLAN 31 on the lagg is a member of the bridge31 bridge, and each jail gets an epair whose host side is also a member of bridge31.

  # uname -a
  FreeBSD osiris 14.1-RELEASE-p3 FreeBSD 14.1-RELEASE-p3 GENERIC amd64
  # freebsd-version -kru
  14.1-RELEASE-p3
  14.1-RELEASE-p3
  14.1-RELEASE-p3

(I've also seen it on 14.0.)

  host# ifconfig > /tmp/pre
  host# ifconfig lagg0 -lro
  host# ifconfig > /tmp/post
  host# diff -u /tmp/pre /tmp/post
  --- /tmp/pre    2024-08-10 14:12:11.812066000 +0000
  +++ /tmp/post   2024-08-10 14:13:19.578057000 +0000
  @@ -1,11 +1,11 @@
   em0: flags=1008943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST,LOWER_UP> metric 0 mtu 1500
  -        options=4e524bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,LRO,WOL_MAGIC,VLAN_HWFILTER,VLAN_HWTSO,RXCSUM_IPV6,TXCSUM_IPV6,HWSTATS,MEXTPG>
  +        options=4e520bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,WOL_MAGIC,VLAN_HWFILTER,VLAN_HWTSO,RXCSUM_IPV6,TXCSUM_IPV6,HWSTATS,MEXTPG>
           ether 00:25:90:50:81:ed
           media: Ethernet autoselect (1000baseT <full-duplex>)
           status: active
           nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
   em2: flags=1008943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST,LOWER_UP> metric 0 mtu 1500
  -        options=4e524bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,LRO,WOL_MAGIC,VLAN_HWFILTER,VLAN_HWTSO,RXCSUM_IPV6,TXCSUM_IPV6,HWSTATS,MEXTPG>
  +        options=4e520bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,WOL_MAGIC,VLAN_HWFILTER,VLAN_HWTSO,RXCSUM_IPV6,TXCSUM_IPV6,HWSTATS,MEXTPG>
           ether 00:25:90:50:81:ed
           hwaddr 00:25:90:50:81:ec
           media: Ethernet autoselect
  @@ -19,7 +19,7 @@
           groups: lo
           nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
   lagg0: flags=1008943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST,LOWER_UP> metric 0 mtu 1500
  -        options=4e524bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,LRO,WOL_MAGIC,VLAN_HWFILTER,VLAN_HWTSO,RXCSUM_IPV6,TXCSUM_IPV6,HWSTATS,MEXTPG>
  +        options=4e520bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,WOL_MAGIC,VLAN_HWFILTER,VLAN_HWTSO,RXCSUM_IPV6,TXCSUM_IPV6,HWSTATS,MEXTPG>
           ether 00:25:90:50:81:ed
           hwaddr 00:00:00:00:00:00
           laggproto lacp lagghash l2,l3
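For anyone who wants to confirm the same symptom, the over-MTU segments are easy to spot from inside the affected jail by filtering on the IPv4 total-length field (bytes 2-3 of the IP header). A sketch, assuming an epair0b jail-side interface and the default 1500-byte MTU:

  # print only packets whose IP total length exceeds the MTU
  jail# tcpdump -ni epair0b 'ip[2:2] > 1500'

With LRO enabled upstream I'd expect hits like the 2896-byte packet described above; after ifconfig lagg0 -lro on the host it should stay quiet.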