| Summary: | [zfs] large_blocks enabled on pool, recordsize capped at 1M, minor typo in error message when attempting to create dataset with recordsize=2M | | |
| --- | --- | --- | --- |
| Product: | Base System | Reporter: | Trond Endrestøl <Trond.Endrestol> |
| Component: | kern | Assignee: | freebsd-fs (Nobody) <fs> |
| Status: | New | | |
| Severity: | Affects Some People | CC: | vsasjason |
| Priority: | --- | | |
| Version: | CURRENT | | |
| Hardware: | amd64 | | |
| OS: | Any | | |
Description
Trond Endrestøl
2014-11-23 11:56:48 UTC
(In reply to Trond.Endrestol from comment #0)

The situation is more complicated than that: the maximum record size can be changed, see r374637. The pseudocode for producing the error message should therefore be something like:

    if [large_blocks_enabled == true]
        max_block_size = sysctl(vfs.zfs.max_recordsize)
    else
        max_block_size = 128k
    error("Block size cannot be greater than %d", max_block_size)

(In reply to Anton Sayetsky from comment #1)

Sorry, wrong revision. The correct one is r274673.

BTW, on 10.2-RELEASE-p7:

    root@cs0:~# zfs set recordsize=2m ztemp
    cannot set property for 'ztemp': 'recordsize' must be power of 2 from 512B to 1024KB
    root@cs0:~#

I have no access to any -CURRENT machines, but I think the problem has already been fixed there as well (because of the MFC/MFS rule).
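Following up on the pseudocode above, here is a minimal userland C sketch of that logic. This is not the actual libzfs/zfs(8) code: `large_blocks_enabled` is a stand-in parameter for however the caller learns that the pool feature is active, and the `vfs.zfs.max_recordsize` tunable is assumed here to be int-sized.

```c
#include <sys/types.h>
#include <sys/sysctl.h>

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* 128 KiB: the historical recordsize ceiling before the large_blocks feature. */
#define	LEGACY_MAX_RECORDSIZE	(128 * 1024)

/*
 * Return the maximum recordsize the error message should quote.  When the
 * pool has large_blocks enabled, ask the kernel for vfs.zfs.max_recordsize;
 * otherwise (or if the sysctl cannot be read) fall back to 128 KiB.
 * Assumption: the tunable is an int-sized value.
 */
static long
max_recordsize(bool large_blocks_enabled)
{
	int max = LEGACY_MAX_RECORDSIZE;
	size_t len = sizeof(max);

	if (large_blocks_enabled &&
	    sysctlbyname("vfs.zfs.max_recordsize", &max, &len, NULL, 0) != 0)
		max = LEGACY_MAX_RECORDSIZE;	/* sysctl failed; fall back */

	return ((long)max);
}

int
main(void)
{
	/* Pretend the feature is enabled so the demo exercises the sysctl path. */
	long max = max_recordsize(true);

	fprintf(stderr,
	    "cannot set property: 'recordsize' must be power of 2 from 512B to %ldKB\n",
	    max / 1024);
	return (EXIT_FAILURE);
}
```

Falling back to 128 KiB when the sysctl is unavailable (or the feature is disabled) keeps the quoted limit correct on pools without large_blocks, which is the behaviour the pseudocode asks for.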