scp on 12.1p1-RELEASE is painfully slow: `2% 80KB 4.5KB/s 11:25 ETA`. The network is 10 Gbit/s (both hosts are on ESXi). net.inet.tcp.cc.algorithm is newreno (the default). There is no firewall, and the scp client is on the same network.
Please supply more details on your network configuration, starting with the network cards and drivers in use.
Both hosts use IPv4 (on the same subnet) and the vmx network card driver.
Does it help if you disable various offloads for the network interfaces, such as tso4, rxcsum, txcsum, etc.? Use "ifconfig vmx0 -tso4" and so on on both sides. Use "ifconfig vmx0" to verify that the offload features got disabled, then repeat the test and report back.
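For reference, the suggested steps on both hosts might look like the following. This is a sketch, assuming the interface is vmx0 as in this report; the offload flags shown (tso4, tso6, rxcsum, txcsum) are standard ifconfig(8) options on FreeBSD, and the scp target is a placeholder:

```shell
# Disable TCP segmentation offload and checksum offloads on the interface:
ifconfig vmx0 -tso4 -tso6 -rxcsum -txcsum

# Verify: the options=... line should no longer list TSO4/TSO6/RXCSUM/TXCSUM.
ifconfig vmx0

# Repeat the transfer test (placeholder paths/hosts):
scp /path/to/largefile user@otherhost:/tmp/
```

Note these changes do not persist across a reboot; they only help narrow down which offload is responsible.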
I see that the 'regression' tag is on this PR, but it's not clear if that's correct - did you use scp on the same hardware/VM configuration in 11.x or 12.0 and it was not slow there? Or is 12.1 the first release you've tried in this configuration?
I'm also experiencing this after upgrading from 11.3 to 12.1. Scp and HTTP GET slow down to 5-10 KB/s where I'm expecting 50 MB/s or more. I'm running the VM under ESXi 5.5 as VMXNET 3.
```
vmx0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=e403bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,VLAN_HWTSO,RXCSUM_IPV6,TXCSUM_IPV6>
        ether 00:50:56:a9:a5:8f
        inet 192.168.0.7 netmask 0xffffff00 broadcast 192.168.0.255
        media: Ethernet autoselect
        status: active
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
```
When I do `ifconfig vmx0 -tso4` I immediately get the proper 50 MB/s speeds. When I do `ifconfig vmx0 tso4` the slowness comes back.
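If disabling TSO4 restores normal speeds, one common way to make the workaround survive reboots is to add the flag to the interface line in /etc/rc.conf. A sketch, reusing the interface name and address from the ifconfig output above (adjust to your own configuration):

```shell
# /etc/rc.conf fragment (sketch): bring up vmx0 with TSO4 disabled at boot
ifconfig_vmx0="inet 192.168.0.7 netmask 255.255.255.0 -tso4"
```

This is only a workaround, of course; the underlying driver issue still needs a fix.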
This is almost certainly a dupe of bug 236999
Sorry for the late reply. (In reply to Brendan Shanks from comment #6) Perhaps. But compiling a kernel without TSO support does not change anything. Meanwhile, at the p2 patch level, ssh sessions constantly freeze after 3-30 seconds. This makes 12.1p2-RELEASE unusable as an ESXi guest. As a KVM guest, there are no problems.
@Vincenzo Can you advise whether this is a likely dupe of bug 236999 or not?
Closed as a duplicate of PR 236999, which has a fix. *** This bug has been marked as a duplicate of bug 236999 ***