Bug 255571 - vmx mtu 9000 no transmission
Summary: vmx mtu 9000 no transmission
Status: New
Alias: None
Product: Base System
Classification: Unclassified
Component: kern
Version: 12.2-STABLE
Hardware: Any
OS: Any
Importance: --- Affects Only Me
Assignee: freebsd-virtualization (Nobody)
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2021-05-03 16:02 UTC by jcaplan
Modified: 2021-05-05 15:19 UTC
CC List: 0 users

See Also:


Description jcaplan 2021-05-03 16:02:50 UTC
Overview
--------

With mtu=9000 set on a vmx interface in a VMware VM, iperf2 throughput drops to 272 Kbits/sec, versus 943 Mbits/sec with the default 1500 MTU.


Steps to Reproduce
------------------

1) On the host, set the interface MTU to 9000:

# ifconfig vmnet20 mtu 9000

2) On the FreeBSD target, do the same:

root@freebsd: # ifconfig vmx0 mtu 9000

3) Run iperf in server mode on the target:

root@freebsd: # iperf -s
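A quick way to confirm whether jumbo frames actually traverse the host/target path, independent of iperf, is a don't-fragment ping at the full MTU. A sketch (addresses taken from this report; adjust for your setup):

```shell
# Verify the 9000-byte path end-to-end before looking at the driver.
# ICMP payload = MTU - 20 (IPv4 header) - 8 (ICMP header).
PAYLOAD=$((9000 - 20 - 8))
echo "payload: ${PAYLOAD} bytes"
# -D sets the don't-fragment bit, so an MTU mismatch fails immediately
# instead of being papered over by fragmentation:
# ping -D -s ${PAYLOAD} 172.16.129.1
```

If the full-size ping fails while a default-size ping succeeds, the jumbo-frame path itself is broken rather than anything iperf-specific.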

Actual Results
--------------

On freebsd target:
# ./src/iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
(null)local 172.16.129.205 port 5001 connected with 172.16.129.1 port 34798


On host:
iperf -c 172.16.129.205
------------------------------------------------------------
Client connecting to 172.16.129.205, TCP port 5001
TCP window size:  325 KByte (default)
------------------------------------------------------------
[  3] local 172.16.129.1 port 34798 connected with 172.16.129.205 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.1 sec   335 KBytes   272 Kbits/sec


Expected Results
----------------
iperf shouldn't hang on the target, and throughput should be similar to the 1500 MTU case.

Build Date & Hardware:
----------------------

FreeBSD freebsd 12.2-RELEASE FreeBSD 12.2-RELEASE r366954 GENERIC  amd64
Comment 1 jcaplan 2021-05-05 15:19:51 UTC
After a bit more investigation, it appears that all incoming segments are discarded on the target.

In vmxnet3_isc_rxd_pkt_get(), this condition is hit for every incoming packet when the large MTU is configured:


	if (__predict_false(rxcd->error)) {
		/* The device flagged an error on this rx completion
		 * descriptor: count it and zero every fragment length
		 * so the whole packet is dropped. */
		rxc->vxcr_pkt_errors++;
		for (i = 0; i < nfrags; i++) {
			frag = &ri->iri_frags[i];
			frag->irf_len = 0;
		}
	}


As far as I can tell, the host side (VMware) is configured correctly for the larger MTU.
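The drops counted by vxcr_pkt_errors should also be visible from userland on the target. A hedged sketch (interface name from this report; exact sysctl node names vary by driver version):

```shell
# Watch input errors on the target while iperf traffic is running.
IFACE="vmx0"
echo "watch: netstat -I ${IFACE} -d -w 1"
# The Ierrs/Idrop columns should climb in step with the discarded segments.
# Driver statistics, including per-queue rx error counters, live under the
# device sysctl tree (node names vary by version):
# sysctl dev.vmx.0
```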