On a zpool with the large_blocks feature enabled, recordsize can be raised up to 1M. However, when attempting to create a dataset with recordsize=2M, zfs claims 128KB is the maximum:
# zfs create -o recordsize=256k zroot/tmp/256k
# zfs create -o recordsize=512k zroot/tmp/512k
# zfs create -o recordsize=1m zroot/tmp/1m
# zfs create -o recordsize=2m zroot/tmp/2m
cannot create 'zroot/tmp/2m': volume block size must be power of 2 from 512B to 128KB
The error message should be changed, if possible, so that the maximum is reported as 1M, at least when the large_blocks feature is enabled on the zpool in question.
(In reply to Trond.Endrestol from comment #0)
The situation is more complicated: the maximum record size can be changed at runtime, see r374637.
So the pseudocode for displaying the error message should be like this:
if (large_blocks_enabled)
    max_block_size = sysctl(vfs.zfs.max_recordsize)
else
    max_block_size = 128k
error("Block size cannot be greater than %d", max_block_size)
(In reply to Anton Sayetsky from comment #1)
Sorry, wrong revision. The correct one is r274673.
BTW, on 10.2-RELEASE-p7:
root@cs0:~# zfs set recordsize=2m ztemp
cannot set property for 'ztemp': 'recordsize' must be power of 2 from 512B to 1024KB
I have no access to any -CURRENT machines, but I think the problem has already been fixed there as well (because of the MFC/MFS rule).