Hi, I've posted this on the forum: https://forums.freebsd.org/threads/is-kvm-virtio-really-that-slow-on-freebsd.71186/#post-430366 but I'll reiterate here.

I'm running a FreeBSD 12 image created along the lines of this document: https://docs.openstack.org/image-guide/freebsd-image.html (plus swap) on a KVM host that is, to the best of my knowledge, Ubuntu 18. Storage is Ceph block storage on pure NVMe SSDs, but I/O is capped at 2000 IOPS.

Using CentOS as a KVM guest, I basically get this:

```
[root@centos ~]# dc3dd wipe=/dev/sdb

dc3dd 7.1.614 started at 2019-06-20 07:54:22 +0000
compiled options:
command line: dc3dd wipe=/dev/sdb
device size: 83886080 sectors (probed)
sector size: 512 bytes (probed)
 42949672960 bytes (40 G) copied (100%), 342.37 s, 120 M/s

input results for pattern `00':
   83886080 sectors in

output results for device `/dev/sdb':
   83886080 sectors out

dc3dd completed at 2019-06-20 08:00:05 +0000
```

Using FreeBSD, I get this:

```
root@freebsd:~ # dc3dd wipe=/dev/vtbd2

dc3dd 7.2.646 started at 2019-06-20 09:37:10 +0200
compiled options:
command line: dc3dd wipe=/dev/vtbd2
device size: 83886080 sectors (probed), 42,949,672,960 bytes
sector size: 512 bytes (probed)
 42949672960 bytes ( 40 G ) copied ( 100% ), 4585 s, 8.9 M/s

input results for pattern `00':
   83886080 sectors in

output results for device `/dev/vtbd2':
   83886080 sectors out

dc3dd completed at 2019-06-20 10:53:35 +0200
```

The forum posting has fio runs with the actual I/O numbers. Are the drivers really that bad? If so, FreeBSD is completely useless in the cloud, at least for us.
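One thing worth checking (purely my speculation, not confirmed anywhere in this thread): with a 2000 IOPS cap, sequential throughput is roughly IOPS × request size. 2000 × ~4 KiB ≈ 8 MB/s, suspiciously close to the 8.9 MB/s above, while 2000 × ~64 KiB ≈ 125 MB/s is close to the CentOS number, so small per-request sizes from the guest driver would by themselves explain the gap. A quick way to probe that is a dd sweep over block sizes; a minimal portable sketch (writes to a scratch file on the volume under test, not to a raw device; the file path is a placeholder):

```shell
#!/bin/sh
# Sequential-write sweep over block sizes. On an IOPS-capped volume,
# throughput should grow roughly linearly with block size until the
# bandwidth limit is reached. Point FILE at the filesystem that sits
# on the volume you want to test.
FILE=${FILE:-./dd_sweep.dat}

run_bs() {
    bs=$1
    count=$2
    printf '%-4s: ' "$bs"
    # dd prints its throughput summary on stderr; keep only that line
    dd if=/dev/zero of="$FILE" bs="$bs" count="$count" 2>&1 | tail -n 1
}

# each run writes 64 MiB in total
run_bs 4k  16384
run_bs 64k 1024
run_bs 1M  64

rm -f "$FILE"
```

If throughput stays pinned near 8–9 MB/s regardless of the dd block size, the guest driver is probably splitting requests somewhere below dd.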
If the VM is created with

```
openstack image create \
  --file ../freebsd-image/freebsd12_v1.41.qcow2 \
  --disk-format qcow2 --min-disk 6 --min-ram 512 \
  --private --protected \
  --property hw_scsi_model=virtio-scsi \
  --property hw_disk_bus=scsi \
  --property hw_qemu_guest_agent=yes \
  --property os_distro=freebsd \
  --property os_version="12.0" \
  "FreeBSD 12.0 amd 64 take3"
```

the results are better, but still far too slow for any production use.
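As a quick cross-check that those image properties actually took effect, it may be worth confirming inside the guest which driver the disk attached with; a sketch (FreeBSD-only commands, to be run in the guest):

```
# With hw_disk_bus=scsi the disk should attach via virtio_scsi/CAM
# as da0, rather than via virtio_blk as vtbd0.
dmesg | egrep 'vtbd|vtscsi|da[0-9]'
camcontrol devlist
```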
Still the same with FreeBSD 13.0-RC2.
There's https://www.freshports.org/sysutils/kvmclock-kmod/ but it does not make a difference in my case (tested with 13.0).
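For anyone else who wants to try it: the module has to be loaded before it can register a timecounter. A sketch of what that looks like, assuming the port follows the usual `<module>_load` loader.conf convention (I have not verified the exact knob name against every version of the port):

```
# /boot/loader.conf
kvmclock_load="YES"
```

After a reboot, `sysctl kern.timecounter.choice` should list an additional KVMCLOCK timecounter if the module attached (that timecounter name is also an assumption). As noted above, it made no difference to the I/O numbers in my case.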
+1
Hello! It seems I have the same issue with 13.1-RELEASE running as a guest in Proxmox 6.4-15.

```
vtblk0: <VirtIO Block Adapter> on virtio_pci1
vtblk0: 40960MB (83886080 512 byte sectors)

# dd if=/dev/random of=test.dat bs=1M count=100 iflag=fullblock
100+0 records in
100+0 records out
104857600 bytes transferred in 14.858133 secs (7057253 bytes/sec)
```

Setting sysctl kern.timecounter.hardware to HPET or TSC-low does not affect the issue.

Linux Mint on the same host machine:

```
# dd if=/dev/urandom of=test.dat bs=1M count=100 iflag=fullblock
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0,588288 s, 178 MB/s
```

If I switch VirtIO Block to SATA in the VM configuration, the speed improves:

```
ada0 at ahcich0 bus 0 scbus2 target 0 lun 0
ada0: <QEMU HARDDISK 2.5+> ATA-7 SATA device
ada0: Serial Number QM00005
ada0: 150.000MB/s transfers (SATA 1.x, UDMA5, PIO 8192bytes)
ada0: Command Queueing enabled
ada0: 40960MB (83886080 512 byte sectors)

# dd if=/dev/random of=test.dat bs=1M count=100 iflag=fullblock
100+0 records in
100+0 records out
104857600 bytes transferred in 0.354972 secs (295396745 bytes/sec)
```
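For reference, that bus switch corresponds to the disk entry in the Proxmox VM config file. A sketch of the two variants (the storage name, VMID and disk name below are placeholders, not taken from this report):

```
# /etc/pve/qemu-server/<vmid>.conf
# virtio-blk (slow in this report):
#   virtio0: local-lvm:vm-100-disk-0,size=40G
# SATA (fast in this report):
sata0: local-lvm:vm-100-disk-0,size=40G
```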
I just tried the 14-CURRENT snapshot of 2022-12-01 and I get about 30 MB/s with a dc3dd wipe on a 100 GB volume.

```
openstack image create "f14_1" \
  --file ~/Downloads/FreeBSD-14.0-CURRENT-amd64.qcow2 \
  --disk-format qcow2 --min-disk 10 --min-ram 1024 \
  --property hw_scsi_model=virtio-scsi \
  --property hw_disk_bus=scsi \
  --property hw_qemu_guest_agent=yes \
  --property os_distro=freebsd \
  --property os_admin_user=root \
  --property os_version="14.0"
```

Sadly, this was one of the major reasons that FreeBSD does not have a future here.