Scenario:

- Server A:
  . running FreeBSD 12.0
  . re0: <RealTek 8168/8111 B/C/CP/D/DP/E/F/G PCIe Gigabit Ethernet>
  . providing virtual disks on ZVOLs
  . acting as VirtualBox host with virtual disks attached directly (via vmdks pointing to the ZVOLs)
  . using bridged networking to re0
  . providing NFS storage for FreeBSD ports distfiles
- Server B:
  . running FreeBSD 12.0
  . em0: <Intel(R) PRO/1000 Network Connection>
  . acting as VirtualBox host with virtual disks attached via iSCSI from server A
  . using bridged networking to em0
- Server C:
  . running Windows 10 Professional
  . acting as Hyper-V host with virtual disks attached via iSCSI from server A
  . using bridged networking to the LAN interface
- VirtualBox client:
  . FreeBSD 12.0 i386
  . can run on either server A, B, or C
  . builds ports, fetching distfiles from a shared directory on host A via NFS (see the sketch after this list)

Result:

- If the client runs on A or C, NFS between the client and server A works as usual.
- If the client runs on B (i.e., NFS goes over the wire to machine A), NFS performance is extremely poor, to the point of no throughput at all.
- iSCSI provisioning from A for the client on B runs at normal speed (i.e., the emulated machine itself sees normal disk performance).
- When B was still running 11.2, NFS performance was normal.
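For orientation, here is a minimal sketch of the NFS piece of this setup. The paths, network range, and hostname below are illustrative assumptions, not the actual configuration:

  # server A, /etc/rc.conf: enable the NFS server
  nfs_server_enable="YES"
  mountd_enable="YES"
  rpcbind_enable="YES"

  # server A, /etc/exports: share the ports distfiles directory with the LAN
  /usr/ports/distfiles -network 192.168.1.0 -mask 255.255.255.0

  # client VM: mount the share before building ports
  mount_nfs serverA:/usr/ports/distfiles /usr/ports/distfiles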
The same thing happens with a 64-bit client running FreeBSD 12.0.
NFS performance also seems to be very poor when the client is Linux.
See also bug #235031, which I just reported.
It has turned out that this issue is actually caused by a regression in the em(4) driver - see bug #235031. I installed net/intel-em-kmod and replaced if_em with if_em_updated in /boot/loader.conf, and the issue went away. Even netbooting FreeBSD in a VM now works as before; with the native em(4) driver this is completely impossible because there is nearly zero throughput. Therefore, this issue can be closed.

-- Martin
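For anyone hitting the same problem, the workaround amounts to roughly the following. The exact loader.conf variable name is my assumption based on the module the port installs; check the port's pkg-message to be sure:

  # install the updated Intel driver from the ports collection
  pkg install intel-em-kmod

  # /boot/loader.conf: load the updated module instead of the stock if_em
  # if_em_load="YES"          <- old entry, removed
  if_em_updated_load="YES"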