From eb3408d04a42cf9d43396bc2051c90af1adfc6dc Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Andreas=20Bj=C3=B8rnestad?=
Date: Fri, 5 Feb 2021 09:07:09 +0100
Subject: [PATCH] ZFS: Fix typo and renegade parenthesis

---
 documentation/content/en/books/handbook/zfs/_index.adoc | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/documentation/content/en/books/handbook/zfs/_index.adoc b/documentation/content/en/books/handbook/zfs/_index.adoc
index d6cc133db1..991f84f170 100644
--- a/documentation/content/en/books/handbook/zfs/_index.adoc
+++ b/documentation/content/en/books/handbook/zfs/_index.adoc
@@ -2224,7 +2224,7 @@ In some specific cases, the smaller 512-byte block size might be preferable. Whe
 * [[zfs-advanced-tuning-prefetch_disable]] `_vfs.zfs.prefetch_disable_` - Disable prefetch. A value of `0` is enabled and `1` is disabled. The default is `0`, unless the system has less than 4 GB of RAM. Prefetch works by reading larger blocks than were requested into the <> in hopes that the data will be needed soon. If the workload has a large number of random reads, disabling prefetch may actually improve performance by reducing unnecessary reads. This value can be adjusted at any time with man:sysctl[8].
 * [[zfs-advanced-tuning-vdev-trim_on_init]] `_vfs.zfs.vdev.trim_on_init_` - Control whether new devices added to the pool have the `TRIM` command run on them. This ensures the best performance and longevity for SSDs, but takes extra time. If the device has already been secure erased, disabling this setting will make the addition of the new device faster. This value can be adjusted at any time with man:sysctl[8].
 * [[zfs-advanced-tuning-vdev-max_pending]] `_vfs.zfs.vdev.max_pending_` - Limit the number of pending I/O requests per device. A higher value will keep the device command queue full and may give higher throughput. A lower value will reduce latency. This value can be adjusted at any time with man:sysctl[8].
-* [[zfs-advanced-tuning-top_maxinflight]] `_vfs.zfs.top_maxinflight_` - Maxmimum number of outstanding I/Os per top-level <>. Limits the depth of the command queue to prevent high latency. The limit is per top-level vdev, meaning the limit applies to each <>, <>, or other vdev independently. This value can be adjusted at any time with man:sysctl[8].
+* [[zfs-advanced-tuning-top_maxinflight]] `_vfs.zfs.top_maxinflight_` - Maximum number of outstanding I/Os per top-level <>. Limits the depth of the command queue to prevent high latency. The limit is per top-level vdev, meaning the limit applies to each <>, <>, or other vdev independently. This value can be adjusted at any time with man:sysctl[8].
 * [[zfs-advanced-tuning-l2arc_write_max]] `_vfs.zfs.l2arc_write_max_` - Limit the amount of data written to the <> per second. This tunable is designed to extend the longevity of SSDs by limiting the amount of data written to the device. This value can be adjusted at any time with man:sysctl[8].
 * [[zfs-advanced-tuning-l2arc_write_boost]] `_vfs.zfs.l2arc_write_boost_` - The value of this tunable is added to <> and increases the write speed to the SSD until the first block is evicted from the <>. This "Turbo Warmup Phase" is designed to reduce the performance loss from an empty <> after a reboot. This value can be adjusted at any time with man:sysctl[8].
 * [[zfs-advanced-tuning-scrub_delay]]`_vfs.zfs.scrub_delay_` - Number of ticks to delay between each I/O during a <>. To ensure that a `scrub` does not interfere with the normal operation of the pool, if any other I/O is happening the `scrub` will delay between each command. This value controls the limit on the total IOPS (I/Os Per Second) generated by the `scrub`. The granularity of the setting is determined by the value of `kern.hz` which defaults to 1000 ticks per second. This setting may be changed, resulting in a different effective IOPS limit. The default value is `4`, resulting in a limit of: 1000 ticks/sec / 4 = 250 IOPS. Using a value of _20_ would give a limit of: 1000 ticks/sec / 20 = 50 IOPS. The speed of `scrub` is only limited when there has been recent activity on the pool, as determined by <>. This value can be adjusted at any time with man:sysctl[8].
@@ -2352,7 +2352,7 @@ A configuration of two RAID-Z2 vdevs consisting of 8 disks each would create som
 |Snapshots can also be cloned. A clone is a writable version of a snapshot, allowing the file system to be forked as a new dataset. As with a snapshot, a clone initially consumes no additional space. As new data is written to a clone and new blocks are allocated, the apparent size of the clone grows. When blocks are overwritten in the cloned file system or volume, the reference count on the previous block is decremented. The snapshot upon which a clone is based cannot be deleted because the clone depends on it. The snapshot is the parent, and the clone is the child. Clones can be _promoted_, reversing this dependency and making the clone the parent and the previous parent the child. This operation requires no additional space. Since the amount of space used by the parent and child is reversed, existing quotas and reservations might be affected.
 
 |[[zfs-term-checksum]]Checksum
-|Every block that is allocated is also checksummed. The checksum algorithm used is a per-dataset property, see <>. The checksum of each block is transparently validated as it is read, allowing ZFS to detect silent corruption. If the data that is read does not match the expected checksum, ZFS will attempt to recover the data from any available redundancy, like mirrors or RAID-Z). Validation of all checksums can be triggered with <>. Checksum algorithms include:
+|Every block that is allocated is also checksummed. The checksum algorithm used is a per-dataset property, see <>. The checksum of each block is transparently validated as it is read, allowing ZFS to detect silent corruption. If the data that is read does not match the expected checksum, ZFS will attempt to recover the data from any available redundancy, like mirrors or RAID-Z. Validation of all checksums can be triggered with <>. Checksum algorithms include:
 
 * `fletcher2`
 * `fletcher4`
-- 
2.30.0
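
As a quick illustration of the `scrub_delay` arithmetic referenced in the first hunk, here is a sketch of a man:sysctl[8] session assuming the defaults quoted in the text (`kern.hz` of 1000 and `vfs.zfs.scrub_delay` of 4); the values shown are illustrative, not captured from a live system:

[source,shell]
----
# sysctl kern.hz vfs.zfs.scrub_delay
kern.hz: 1000
vfs.zfs.scrub_delay: 4
# sysctl vfs.zfs.scrub_delay=20
vfs.zfs.scrub_delay: 4 -> 20
----

With `kern.hz` at 1000, the default of 4 ticks between I/Os caps a `scrub` at 1000 / 4 = 250 IOPS; after the change above, the cap drops to 1000 / 20 = 50 IOPS.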