FreeBSD 10.1-RELEASE-p8 amd64

Hello. I found strange behavior with SSD TRIM performance. I have a ZFS mirror on 2 SSD drives (ashift=12, 4k alignment). When I migrated an SQL server to that storage, I found that the SSDs became permanently busy, with an I/O queue length of ~60, some amount of reads and writes, and 128 BIO_DELETE (trim) operations showing in gstat statistics.

After many tests and much googling I found the sysctl variable vfs.zfs.vdev.trim_max_active, default value 64, which limits the number of active trim operations. The problem appears when ZFS continuously fills the drive's queue with trim operations (twice per second). If I change vfs.zfs.vdev.trim_max_active to 1000, ZFS sends 2000 trim operations per second to the drive, and the drive's IOPS and busy level return to normal. When I set vfs.zfs.vdev.trim_max_active to a low value such as 8, the device does 16 BIO_DELETE per second and its busy level goes to 100%.

I also tried working with another partition on the same drive (thinking this was a ZFS bug) and found that it suffers in the same way, so I concluded that ZFS is not at fault. I tried to determine how FreeBSD calculates busy levels and found that it comes from GEOM (geom_stats_open, geom_stats_snapshot_next, geom_stats_snapshot_get, ...).

So: when the device does 16 trims per second, it shows 100% busy, latency is high, and IOPS are slow. When the device does 2000 or more trims per second, it is idle, the queue is empty, and latency is great. So what is it, bug or feature?

I also checked the device's trim performance under UFS: my SSD can perform about 7000 trim operations per second with a 64k block size (and more with smaller block sizes). So ZFS trim (at 128k) should probably manage around 3000 trims per second, but I can't generate enough trim activity to find the exact value.
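For anyone wanting to reproduce this, something along these lines shows the behaviour (2000 is just the value I tested with, not a recommendation):

    # Watch delete (BIO_DELETE / trim) activity, queue length and %busy;
    # -d adds the delete columns to gstat output.
    gstat -d

    # Current limit on concurrently active trim requests (default 64 here).
    sysctl vfs.zfs.vdev.trim_max_active

    # Raise the limit for testing.
    sysctl vfs.zfs.vdev.trim_max_active=2000

    # Persist across reboots.
    echo 'vfs.zfs.vdev.trim_max_active=2000' >> /etc/sysctl.conf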
This is a duplicate of https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=197516.
Ouch, sorry for my last comment. It's not a duplicate, since this one is on ZFS.
And I don't have gmirror.
Hi. This bug still exists on 12.2, but no longer on 13.0, even with autotrim enabled and vfs.zfs.vdev.trim_max_active=2 (the default on 13.0). I get about the same performance (maybe a little better) with 13.0 / autotrim=on / trim_max_active=2 as with 12.2 / trim implicitly always on (as it was before 13) / trim_max_active=2000. It looks like the OpenZFS trim strategy is different from the previous one and more efficient.
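For reference, on 13.0 trim is controlled per pool, roughly like this (the pool name "tank" is a placeholder):

    # autotrim is off by default with OpenZFS; check and enable it per pool.
    zpool get autotrim tank
    zpool set autotrim=on tank

    # A one-shot trim of all free space can also be requested manually;
    # -t shows trim progress per vdev.
    zpool trim tank
    zpool status -t tank

    # The OpenZFS limit on concurrently active trim I/Os (default 2 on 13.0).
    sysctl vfs.zfs.vdev.trim_max_active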
(In reply to Laurent Frigault from comment #4) After running a few MariaDB SQL import jobs, I found that increasing vfs.zfs.vdev.trim_max_active from 2 to 2000 under FreeBSD 13 only decreased the job time by about 1 minute on a 1h30 import (roughly a 1% gain). => No big gain from increasing trim_max_active on FreeBSD 13.
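Something like the following is enough to reproduce the comparison (database and dump file names are placeholders):

    # Default limit on 13.0
    sysctl vfs.zfs.vdev.trim_max_active=2
    time mysql mydb < dump.sql

    # Raised limit, as in the earlier comments
    sysctl vfs.zfs.vdev.trim_max_active=2000
    time mysql mydb < dump.sql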