Bug 247690

Summary:   sysutils/openzfs-kmod performance problem
Product:   Ports & Packages
Component: Individual Port(s)
Version:   Latest
Hardware:  amd64
OS:        Any
Status:    New
Severity:  Affects Some People
Priority:  ---
Reporter:  rob2g2 <rob2g2-freebsd>
Assignee:  Ryan Moeller <freqlabs>
CC:        allanjude, diizzy, freqlabs, grahamperrin, rob2g2-freebsd
Flags:     bugzilla: maintainer-feedback? (freqlabs)

Attachments:
  logs + sample test script (flags: none)
  tests with blocksize128k (flags: none)

Description rob2g2 2020-07-01 10:03:31 UTC
Created attachment 216107 [details]
logs + sample test script

I have been running a series of ZFS benchmarks. Among the test machines was a notebook with a single SSD, on which I compared the performance of the ZFS shipped with the FreeBSD base system against openzfs-kmod. It turns out that openzfs-kmod is much slower than the base system ZFS. I used benchmarks/fio (3.20) for the measurements.

To pick one test (on the same hardware, of course): a random read/write job with the "sync" ioengine on a pool created with ashift=12, started with

fio --name=randrw --rw=randrw --direct=1 --ioengine=sync --bs=8k --numjobs=2 --rwmixread=80 --size=1G --runtime=600 --group_reporting

Native ZFS delivers 81.7 MiB/s read and 20.5 MiB/s write bandwidth, whereas openzfs-kmod reaches only 9665 KiB/s read and 2422 KiB/s write.
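
For reference, an ashift=12 pool on a single disk can be set up roughly like this (tank and ada0 are placeholders; the exact commands are in the attached script):

# base system ZFS: ashift is controlled via a sysctl at pool creation time
sysctl vfs.zfs.min_auto_ashift=12
# openzfs-kmod: ashift can be given directly as a pool property
zpool create -o ashift=12 tank ada0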

The difference is large across all of my tests, covering the ioengines posixaio, psync, mmap, sync, pvsync, and vsync, on pools with ashift=9 and ashift=12.
This was not a rigorous scientific test, but it still shows an obvious performance difference. fio appears to be primarily a Linux tool, though it works on FreeBSD; I don't know how the different ioengines are implemented there, but the gap shows up with all of them.
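
The engine combinations can be cycled with a loop along these lines (a sketch; /tank/fiotest is a placeholder, the real invocations are in the attached unenc-openzfs.sh):

for engine in posixaio psync mmap sync pvsync vsync; do
    fio --name=randrw-$engine --rw=randrw --direct=1 --ioengine=$engine \
        --bs=8k --numjobs=2 --rwmixread=80 --size=1G --runtime=600 \
        --group_reporting --directory=/tank/fiotest
done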

Tests on conventional HDDs are still to come, but those naturally take significantly longer. I am also testing geli and ZFS native encryption on FreeBSD as well as LUKS encryption on Linux. That is out of scope here; I mention it only because I can provide logs for other ZFS layouts (2-disk raidz1/mirror, 3-disk raidz1) at a later date if desired.

Attached are 24 log files with the results of the fio tests (and the output/timing of a dd command). See the included unenc-openzfs.sh for exactly which tests were run and how.
Comment 1 rob2g2 2020-07-01 10:45:28 UTC
openzfs-kmod-2020060300 was used
Comment 2 Allan Jude 2020-07-01 17:45:40 UTC
(In reply to rob2g2 from comment #1)
To get a better test, the first thing you will want to do is make sure the 'recordsize' of the dataset you are pointing fio at matches the --bs argument you give fio.
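
For example, something along these lines (tank/fiotest is a placeholder dataset name):

# create a test dataset whose recordsize matches fio's --bs
zfs create -o recordsize=8k tank/fiotest
fio --name=randrw --rw=randrw --direct=1 --ioengine=sync --bs=8k \
    --numjobs=2 --rwmixread=80 --size=1G --runtime=600 \
    --group_reporting --directory=/tank/fiotest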

I'll try to look into the logs you attached later.
Comment 3 rob2g2 2020-07-01 21:00:33 UTC
Created attachment 216117 [details]
tests with blocksize128k

I did the tests again, this time with bs=128k as you suggested. The difference is not as dramatic as in the earlier tests (though in the earlier tests the block size was also identical for openzfs and native ZFS).
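
Roughly what was run (illustrative; the exact invocations are in the attached logs, tank/fiotest is a placeholder):

zfs get recordsize tank/fiotest   # 128k, the default recordsize
fio --name=randrw --rw=randrw --direct=1 --ioengine=sync --bs=128k \
    --numjobs=2 --rwmixread=80 --size=1G --runtime=600 \
    --group_reporting --directory=/tank/fiotest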
Comment 4 Daniel Engberg 2022-08-08 06:05:47 UTC
Still relevant?