I freebsd-update'd an 11.2 pool+system to 11.3-BETA1 to see whether sequential scrub was implemented and, if so, to test it. During the scrub, I watched the output of iostat -t da -x 1. Usually the qlen field stays at a low value related to what is configured in /etc/sysctl.conf. In this case I started with the default of 2 for vfs.zfs.vdev.scrub_max_active, then tweaked it to 3 and 4. While qlen would often report that same number, it would rise as high as 24 and fluctuate around higher values for seconds at a time. Interactive performance that touched the disk was noticeably impacted. I noticed this during the actual scrub phase; I did not make note of what was happening during the earlier scan phase.

Other items in sysctl.conf were:

vfs.zfs.vdev.async_read_max_active=4
vfs.zfs.vdev.async_read_min_active=2
vfs.zfs.vdev.async_write_max_active=4
vfs.zfs.vdev.async_write_min_active=1
vfs.zfs.vdev.sync_write_max_active=4
vfs.zfs.vdev.sync_write_min_active=2
vfs.zfs.vdev.sync_read_min_active=2
vfs.zfs.vdev.sync_read_max_active=5
vfs.zfs.top_maxinflight=15

The only vdev was a single hard disk, with GELI on the pool partition. The disk controller was standard Intel AHCI on a Supermicro X9 board. The pool scrubbed 100G in about 45 minutes, which seems reasonable for sequential scrub on this old disk drive.

Was this qlen behavior correct, perhaps a side effect of how sequential scrub works?
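For anyone wanting to reproduce the observation, a loop like the following can log qlen for one disk while a scrub runs. This is only a sketch, not part of the original report: the device name da0 and the assumption that qlen is the next-to-last column of iostat -x output are mine; adjust both for your system.

    #!/bin/sh
    # Sketch: print a timestamped qlen sample for one disk every second.
    # Assumptions (not from the report): the disk is da0, and qlen is the
    # next-to-last column of FreeBSD iostat -x output (%b is the last).
    dev=da0
    while :; do
        qlen=$(iostat -x "$dev" | awk -v d="$dev" '$1 == d { print $(NF-1) }')
        printf '%s %s qlen=%s\n' "$(date '+%H:%M:%S')" "$dev" "$qlen"
        sleep 1
    done

If qlen stays pinned well above vfs.zfs.vdev.scrub_max_active for seconds at a time during the scrub phase, that would match the behavior described above. The sysctl can also be tweaked at runtime (e.g. sysctl vfs.zfs.vdev.scrub_max_active=3) to see whether the spikes track the setting.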
^Triage: I'm sorry that this PR did not get addressed in a timely fashion. By now, the version that it was created against is long out of support. Please re-open if it is still a problem on a supported version.