Bug 238229 - zfs: scrub exceeds I/O qlen scrub_max_active limit, causes latency on 11.3-BETA1
Summary: zfs: scrub exceeds I/O qlen scrub_max_active limit, causes latency on 11.3-BETA1
Status: Open
Alias: None
Product: Base System
Classification: Unclassified
Component: kern (show other bugs)
Version: 11.2-STABLE
Hardware: Any Any
Importance: --- Affects Only Me
Assignee: freebsd-fs (Nobody)
Keywords: needs-qa, performance
Depends on:
Reported: 2019-05-30 00:03 UTC by Sceiemu
Modified: 2019-06-01 21:57 UTC (History)
0 users

See Also:


Description Sceiemu 2019-05-30 00:03:22 UTC
I freebsd-update'd an 11.2 pool+system to 11.3-BETA1 to see whether sequential scrub had been implemented and, if so, to test it.

During the scrub, I watched the output of iostat -t da -x 1.  Usually, the qlen field stays close to the value configured in /etc/sysctl.conf.  In this case, I started with the default of 2 for vfs.zfs.vdev.scrub_max_active, then tweaked it to 3, and then 4.  While qlen would often report that same number, it would rise as high as 24 and fluctuate around higher values for seconds at a time.  Interactive performance of anything accessing the disk was noticeably impacted.  I observed this during the actual scrub phase; I did not make note of what was happening during the earlier scan phase.
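For reference, the observation procedure described above can be sketched roughly as follows (the device name da0 and exact invocation order are assumptions; the tunable name, values, and iostat flags are taken from the report):

```shell
# Check the current scrub queue-depth limit (default reported as 2)
sysctl vfs.zfs.vdev.scrub_max_active

# Raise it at runtime; the values 3 and 4 were tried in turn
sysctl vfs.zfs.vdev.scrub_max_active=4

# Start the scrub and watch per-device queue length (the qlen column)
# refreshed once per second, restricted to da devices
zpool scrub tank
iostat -t da -x 1
```

The expectation being tested is that qlen stays near the scrub_max_active value; instead it spiked well above it (up to 24).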

Other items in the sysctl.conf were:

The only vdev was a single hard disk, with GELI on the pool partition.  The disk controller was a standard Intel AHCI controller on a Supermicro X9 board.

It scrubbed 100G in about 45 minutes, which seems reasonable for sequential scrub on this old disk drive.

Was this qlen behavior correct, perhaps a side-effect of how sequential scrub works?