Using a nanobsd image set up so that /var/db/entropy is a nullfs mount of a directory on the s4 slice, /data/entropy, we have found that when /usr/libexec/save-entropy runs, the i-node of the deleted saved-entropy.8 file is not released. The following is from a system that has been running for a little more than 5 hours:

root@bifrost:/usr/home/admglz # ls -i /var/db/entropy/
805 saved-entropy.1    803 saved-entropy.3    801 saved-entropy.5    799 saved-entropy.7
804 saved-entropy.2    802 saved-entropy.4    800 saved-entropy.6    798 saved-entropy.8
root@bifrost:/usr/home/admglz # df -i
Filesystem            1K-blocks   Used   Avail Capacity iused  ifree %iused  Mounted on
/dev/da0s1a             1911407 353408 1405086     20%   6855 237943     3%  /
devfs                         1      1       0    100%      0      0   100%  /dev
/dev/md0                   4380   2940    1092     73%    449   1085    29%  /etc
/dev/md1                   4380   1056    2976     26%    162   1372    11%  /var
/dev/da0s4                 7840    168    7045      2%     67    955     7%  /data
/data/quagga               7840    168    7045      2%     67    955     7%  /etc/local/quagga
/conf/base/var/db/pkg   1911407 353408 1405086     20%   6855 237943     3%  /var/db/pkg
/data/entropy              7840    168    7045      2%     67    955     7%  /var/db/entropy
/data/crontabs             7840    168    7045      2%     67    955     7%  /var/cron/tabs
/data/home                 7840    168    7045      2%     67    955     7%  /usr/home
root@bifrost:/usr/home/admglz # umount /var/db/entropy
root@bifrost:/usr/home/admglz # mount /var/db/entropy
root@bifrost:/usr/home/admglz # df -i
Filesystem            1K-blocks   Used   Avail Capacity iused  ifree %iused  Mounted on
/dev/da0s1a             1911407 353408 1405086     20%   6855 237943     3%  /
devfs                         1      1       0    100%      0      0   100%  /dev
/dev/md0                   4380   2940    1092     73%    449   1085    29%  /etc
/dev/md1                   4380   1056    2976     26%    162   1372    11%  /var
/dev/da0s4                 7840    112    7101      2%     39    983     4%  /data
/data/quagga               7840    112    7101      2%     39    983     4%  /etc/local/quagga
/conf/base/var/db/pkg   1911407 353408 1405086     20%   6855 237943     3%  /var/db/pkg
/data/crontabs             7840    112    7101      2%     39    983     4%  /var/cron/tabs
/data/home                 7840    112    7101      2%     39    983     4%  /usr/home
/data/entropy              7840    112    7101      2%     39    983     4%  /var/db/entropy
root@bifrost:/usr/home/admglz # ls -i /var/db/entropy/
805 saved-entropy.1    803 saved-entropy.3    801 saved-entropy.5    799 saved-entropy.7
804 saved-entropy.2    802 saved-entropy.4    800 saved-entropy.6    798 saved-entropy.8

As can be seen, the unmount/mount cycle releases 28 i-nodes, matching a rate of 5 unlinks per hour over 5h36m. This means that in this case the filesystem will run out of i-nodes in a little more than a week, which is the interval we have been seeing.

How-To-Repeat:
mount -t nullfs /data/entropy /var/db/entropy
df -i
(wait a while to allow the save-entropy cron job to unlink a number of the older saved files)
df -i
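A quick sanity check of that arithmetic with bc(1), using only the figures already shown above (28 i-nodes recovered by the remount, 5h36m of uptime, 955 free i-nodes on /data); nothing here is new data, just the back-of-the-envelope calculation spelled out:

echo "scale=1; 28 / (5 + 36/60)" | bc    # prints 5.0 -- roughly 5 leaked i-nodes per hour
echo "scale=1; 955 / 5 / 24" | bc        # prints 7.9 -- roughly 8 days until ifree on /data hits zero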
Responsible Changed From-To: freebsd-bugs->freebsd-fs Over to maintainer(s).
This PR is tagged 9.1-STABLE, but I just rediscovered this issue on 10.2-RELEASE. Easiest repro I could find:

# preparation
mkdir /testbed
cd /testbed
mkdir tmpmnt nullmnt
mount -t tmpfs -o rw,size=10240 tmptestbed /testbed/tmpmnt/

# test 1
mount_nullfs /testbed/tmpmnt/ /testbed/nullmnt/
df -hi | grep testbed
dd if=/dev/zero of=nullmnt/testfile    # to fill up the tmpfs
df -hi | grep testbed                  # the filesystem is now full
rm nullmnt/testfile
df -hi | grep testbed                  # bug: the filesystem is still full
umount /testbed/nullmnt
df -hi | grep testbed                  # the inode is released only once the nullfs is unmounted

# test 2: nocache
mount_nullfs -o nocache /testbed/tmpmnt/ /testbed/nullmnt/
df -hi | grep testbed
dd if=/dev/zero of=nullmnt/testfile
rm nullmnt/testfile
df -hi | grep testbed                  # everything is working properly here
umount /testbed/nullmnt

So a standard nullfs mount is unusable for long-term operation. I only found out about the nocache option from an old mailing-list post related to this PR, so after 2.5 years without a fix a note about this workaround in the manpage would be great.
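For anyone relying on the nocache workaround across reboots, it can also be expressed as an /etc/fstab entry; the paths below just mirror the original reporter's layout and are only an example, and this assumes mount(8) passes the option through to mount_nullfs unchanged:

# /etc/fstab (example entry, nullfs with the nocache workaround)
/data/entropy   /var/db/entropy   nullfs   rw,nocache   0   0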
Created attachment 164869
Force reclaim of the upper vnode on unlink.
A commit references this bug:

Author: kib
Date: Wed Dec 30 19:49:22 UTC 2015
New revision: 292961
URL: https://svnweb.freebsd.org/changeset/base/292961

Log:
  Force nullfs vnode reclaim after unlinking, to potentially unlink
  lower vnode.  Otherwise, reference to the lower vnode from the upper
  one prevents final unlink.

  PR:		178238
  Tested by:	pho
  Sponsored by:	The FreeBSD Foundation
  MFC after:	1 week

Changes:
  head/sys/fs/nullfs/null_vnops.c
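For reference, this is how the change should be observable from userland: with a kernel built after r292961, re-running "test 1" from the earlier comment (reusing the /testbed directories prepared there) should no longer need an unmount to get the i-node back. The expected outcome noted in the comments is my reading of the commit log, not output captured from a patched system:

mount -t tmpfs -o rw,size=10240 tmptestbed /testbed/tmpmnt/
mount_nullfs /testbed/tmpmnt/ /testbed/nullmnt/
dd if=/dev/zero of=/testbed/nullmnt/testfile   # fill the tmpfs
rm /testbed/nullmnt/testfile
df -hi | grep testbed                          # iused should drop back without unmounting the nullfs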
A commit references this bug:

Author: pho
Date: Wed Dec 30 20:15:30 UTC 2015
New revision: 292962
URL: https://svnweb.freebsd.org/changeset/base/292962

Log:
  Added a regression test.

  PR:		178238
  Sponsored by:	EMC / Isilon storage division

Changes:
  user/pho/stress2/misc/nullfs16.sh
This patch does not appear to be in base/stable/9 as of r309554. Would it be possible to add it? I have a 9 system that "fills up" a nullfs mount about a month after a reboot when I update and don't remember to re-apply the patch. Thanks!
A commit references this bug:

Author: kib
Date: Tue Dec 6 10:37:28 UTC 2016
New revision: 309609
URL: https://svnweb.freebsd.org/changeset/base/309609

Log:
  MFC r292961, r295717:
  Force nullfs vnode reclaim after unlinking and directory removal.

  PR:		178238

Changes:
_U  stable/9/
_U  stable/9/sys/
_U  stable/9/sys/fs/
  stable/9/sys/fs/nullfs/null_vnops.c