ZFS was grabbing all but 1G of my server-of-all-work's 32G of memory (the documented default behavior, which seems ridiculous). To leave other jobs some memory to run in, I put vfs.zfs.arc_max="8G" into /boot/loader.conf and rebooted. But top and zfs-stats continue to report that ZFS still claims 28G for itself. Thinking it might be a regression to the time when "M" and "G" weren't being interpreted, I tried entering the limit in bytes (8796093022208) instead, but it made no difference.
Please make sure you're running at least r302382.
Actually that's the fix in stable/11; for stable/10 it's r302714.
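For reference, on a source-built system the revision you're actually running is normally embedded in the kernel version string (assuming the kernel was built from an svn checkout), so a quick way to check is:

uname -v

and look for the rNNNNNN token in the output.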
Okay, my version is stale: r286666. It does seem to be the regression from when "M" and "G" weren't being interpreted (I can't find that bug again now). Quotes also seem to cause a problem when entering the number, which doesn't seem like a good idea. But entering the value in bytes *without* quotes may have worked (zfs-stats is a little unclear about that, since it still lists 30GB as target and max, which is scary). Can I presume it's fixed in 10.3 and later?
Could you post your loader.conf and the settings from sysctl? I've set a build going at my end to get up to date, but that will take a while as it's not the quickest of machines.
I've tested this on HEAD and have been unable to reproduce.

/boot/loader.conf:
vfs.zfs.arc_max="1G"

Resulting arc_max:
sysctl vfs.zfs.arc_max
vfs.zfs.arc_max: 1073741824
Apologies, Steven, I've been mad busy.

/boot/loader.conf:
kldload i915kms
device vt
device vt_vga
kldload aio
geom_eli_load="YES"
vfs.zfs.arc_max=8796093022208

Is there a particular subset of sysctl you're interested in? sysctl -Ah is ridiculously long, so if you want it all I'll have to consume at least 10 comments, going by my experience so far.
(In reply to Steven Hartland from comment #5) Could it be that that part of the heuristic isn't working? Canonically it should limit itself to 512M if you've only got 1G, and otherwise try to grab the whole thing, shouldn't it?
(In reply to MMacD from comment #6) The format for loading modules in loader.conf is:

<module>_load="YES"

not:

kldload <module>

I suspect your issue is that you're not loading ZFS; you should have the following in loader.conf:

zfs_load="YES"

With regard to your value, I doubt you want 8TB for your arc_max either.
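As a concrete sketch of the correction above (the arc_max value shown is the intended 8 GiB, using the suffix form that recent revisions parse), a loader.conf along these lines should do it:

zfs_load="YES"
geom_eli_load="YES"
vfs.zfs.arc_max="8G"

The other modules can be requested the same way, e.g. aio_load="YES" and i915kms_load="YES", rather than with kldload lines, which loader(8) doesn't understand; the "device" lines belong in a kernel config file, not loader.conf (the vt console can be selected there with kern.vty="vt" instead).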
(In reply to Steven Hartland from comment #8) I don't really know how to respond. I followed the handbook's ZFS Quick Start:

"There is a startup mechanism that allows FreeBSD to mount ZFS pools during system initialization. To enable it, add this line to /etc/rc.conf:

zfs_enable="YES"

Then start the service:

# service zfs start"

and ZFS appears to be working normally apart from ignoring my arc_max directive in loader.conf. Why wouldn't I want an 8GB ARC limit? It's not being stressed by anyone but me. 8GB seems quite a lot, really, given that storage is unlikely to ever exceed 4TB.
Oh, I misread -- I thought you wrote 8GB, not 8TB. You're right - I definitely don't want 8TB considering I've only 32GB in the machine. I have a certain disability for numbers, similar to dyslexia for words. So I didn't notice that I calculated 8TB rather than 8GB. I'll fix it, and maybe the "ignoring" will turn out to have been a factitious problem.
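For the record, the arithmetic: 8 GiB is 8 * 1024^3 = 8589934592 bytes, while the value that went into loader.conf, 8796093022208, is 8 * 1024^4 bytes, i.e. 8 TiB.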
Okay, I changed it to the intended 8589934592, and it seems to have solved the "problem", so I'll mark it closed.
Yes, rc.conf can be used to mount volumes from a ZFS pool during system startup, but that method can't be used for booting from a ZFS pool. If you're going to use ZFS I would strongly encourage you to use it exclusively, as ZFS and UFS don't share memory for caching, so picking one or the other is best.
I tried to use ZFS for the sys partition, but since the installer didn't make provision for copies=, and I didn't have time to teach myself release engineering, I couldn't see the point.
Why would you want to set copies anyway?
(In reply to Steven Hartland from comment #14) Because the only significant advantage of ZFS, as far as I'm concerned, is the anti-bitrot protection, for which copies=2 is the minimum necessary (I'd have done copies=3 since I was installing to a big disc).
You're mistaken there; you don't need copies for that, any of the pool redundancy options help protect against it.

"copies=1 | 2 | 3
Controls the number of copies of data stored for this dataset. These copies are in addition to any redundancy provided by the pool, for example, mirroring or RAID-Z. The copies are stored on different disks, if possible. The space used by multiple copies is charged to the associated file and dataset, changing the used property and counting against quotas and reservations. Changing this property only affects newly-written data. Therefore, set this property at file system creation time by using the -o copies=N option."

As the OS source isn't unique, it's always easy to recover. If you did want copies set, you can always set it after the base install and then go through a separate installworld etc., or a full send and receive, to ensure it's actually applied. Also don't forget that even without any recovery, just knowing you have bit-rot is good.
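As a rough sketch of that workflow (the dataset name here is only a placeholder), setting copies either at creation time or after the fact looks like:

# at creation time, so everything ever written gets two copies
zfs create -o copies=2 tank/sys

# or later -- but, per the man page text above, only newly-written
# data is affected, so existing files must be rewritten to gain copies
zfs set copies=2 tank/sys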
I don't think I'm mistaken. There was no such option in the 10.2 installer. And since replication isn't retroactive, by the time I could set it the o/s would already be installed, and thus unprotected and unprotectable. Now, I could perhaps have moved all the o/s bits off the disk onto my 3-way mirror, set the switch on the o/s disk, and moved the bits back again, but I didn't think of that at the time because I was trying to figure out how I could find and move unrotted copies of my work files onto the mirror. I filed a bug/feature request for a copies= addition to the installer, but it will probably be ignored.
TBH I don't know of anyone who uses copies, as it's only one part of the bigger resiliency picture. It's much more common to use mirror or RAIDZ*, which gives you both forms of protection, and I believe those are part of the installer already. I say believe as we use mfsbsd for all our installs: http://mfsbsd.vx.sk/

Don't forget that with UFS you not only get no bit-rot protection, you also get no detection of it, which you do get with ZFS no matter how you configure it.

We've been using ZFS in FreeBSD since its integration and I can honestly say it makes operation and maintenance so much easier that we would literally never go back.
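For comparison, pool-level redundancy of the kind described above is chosen when the pool is created; the pool and device names below are placeholders:

# two-way mirror: a scrub can both detect and repair bit-rot
zpool create tank mirror ada1 ada2

# RAID-Z1 across three disks
zpool create tank raidz ada1 ada2 ada3

# periodic scrubs verify checksums and repair from redundancy
zpool scrub tank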