Bug 206855 - NFS errors from ZFS backed file system when server under load
Summary: NFS errors from ZFS backed file system when server under load
Status: Closed DUPLICATE of bug 9619
Alias: None
Product: Base System
Classification: Unclassified
Component: kern
Version: 10.2-RELEASE
Hardware: Any Any
Importance: --- Affects Some People
Assignee: Rick Macklem
Depends on:
Reported: 2016-02-02 17:22 UTC by Vick Khera
Modified: 2016-05-08 20:10 UTC
CC: 2 users

See Also:


Description Vick Khera 2016-02-02 17:22:16 UTC
I posted a question about NFS errors (unable to read directories or files) when the NFS server comes under high load, and at least two other people reported that they observe the same types of failures. The thread is at https://lists.freebsd.org/pipermail/freebsd-questions/2016-February/270292.html

This might be related to bug #132068

It seems that using NFS to share a ZFS data set is not so stable under high load. Here's my original question/bug report:

I have a handful of servers at my data center all running FreeBSD 10.2. On one of them I have a copy of the FreeBSD sources shared via NFS. When this server is running a large poudriere run re-building all the ports I need, the clients' NFS mounts become unstable. That is, the clients keep getting read failures. The interactive performance of the NFS server is just fine, however. The local file system is a ZFS mirror.

What could be causing NFS to be unstable in this situation?


Server "lorax" FreeBSD 10.2-RELEASE-p7 kernel locally compiled, with NFS server and ZFS as dynamic kernel modules. 16GB RAM, Xeon 3.1GHz quad processor.

The directory /u/lorax1 is a ZFS dataset on a mirrored pool, and is NFS exported via the ZFS exports file. I put the FreeBSD sources on this dataset and symlink it to /usr/src.

Client "bluefish" FreeBSD 10.2-RELEASE-p5 kernel locally compiled, NFS client built in to kernel. 32GB RAM, Xeon 3.1GHz quad processor (basically same hardware but more RAM).

The directory /n/lorax1 is NFS mounted from lorax via autofs. The NFS options are "intr,nolockd". /usr/src is symlinked to the sources in that NFS mount.
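For reference, a minimal autofs configuration matching the description above might look like this (the map file name and layout are illustrative, not copied from the actual machines):

```
# /etc/auto_master -- mount NFS maps under /n (map file name assumed)
/n      /etc/auto_n

# /etc/auto_n -- one entry per exported dataset
lorax1  -intr,nolockd   lorax-prv:/u/lorax1
```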

What I observe:

[lorax]~% cd /usr/src
[lorax]src% svn status
[lorax]src% w
 9:12AM  up 12 days, 19:19, 4 users, load averages: 4.43, 4.45, 3.61
USER       TTY      FROM                      LOGIN@  IDLE WHAT
vivek      pts/0    vick.int.kcilink.com       8:44AM     - tmux: client (/tmp/
vivek      pts/1    tmux(19747).%0            8:44AM    19 sed y%*+%pp%;s%[^_a
vivek      pts/2    tmux(19747).%1            8:56AM     - w
vivek      pts/3    tmux(19747).%2            8:56AM     - slogin bluefish-prv
[lorax]src% pwd

So right now the load average is more than 1 per processor on lorax. I can quite easily run "svn status" on the source directory, and the interactive performance is pretty snappy for editing local files and navigating around the file system.

On the client:

[bluefish]~% cd /usr/src
[bluefish]src% pwd
[bluefish]src% svn status
svn: E070008: Can't read directory '/n/lorax1/usr10/src/contrib/sqlite3': Partial results are valid but processing is incomplete
[bluefish]src% svn status
svn: E070008: Can't read directory '/n/lorax1/usr10/src/lib/libfetch': Partial results are valid but processing is incomplete
[bluefish]src% svn status
svn: E070008: Can't read directory '/n/lorax1/usr10/src/release/picobsd/tinyware/msg': Partial results are valid but processing is incomplete
[bluefish]src% w
 9:14AM  up 93 days, 23:55, 1 user, load averages: 0.10, 0.15, 0.15
USER       TTY      FROM                      LOGIN@  IDLE WHAT
vivek      pts/0    lorax-prv.kcilink.com     8:56AM     - w
[bluefish]src% df .
Filesystem          1K-blocks    Used     Avail Capacity  Mounted on
lorax-prv:/u/lorax1 932845181 6090910 926754271     1%    /n/lorax1

What I see is more or less random failures to read the NFS volume. When the server is not so busy running poudriere builds, the client never has any failures.

I also observe this kind of failure when doing a buildworld or installworld on the client while the server is busy: I get strange random failures reading files, causing the build or install to fail.

My workaround is to avoid builds/installs on client machines while the NFS server is busy with large jobs like building all packages, but there is definitely something wrong here that I'd like to fix. I observe this on all the local NFS clients. Rebooting the server did not clear it up.

My intuition is pointing to some sort of race condition with ZFS and NFS, but digging deeper into that is well beyond my pay grade.
Comment 1 rmacklem 2016-02-02 22:49:29 UTC
As someone else suggested, please try adding "-S" to your
mountd flags.

If that doesn't fix the problem, I would suggest removing "intr"
from the mount options, just in case something is posting a signal
to the process (which can cause the read to fail with EINTR when
"intr" is specified).
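For anyone else hitting this, a sketch of the suggested server-side change (the "-r" shown is FreeBSD's stock default for mountd_flags; verify against your own rc.conf before copying):

```
# /etc/rc.conf on the NFS server
mountd_flags="-r -S"   # -S: suspend NFS requests while exports are reloaded
```

Then restart mountd with `service mountd restart`. If failures persist, the second suggestion is to drop "intr" from the client's NFS mount options (e.g. use "nolockd" alone in the autofs map).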
Comment 2 Vick Khera 2016-02-04 16:00:45 UTC
(In reply to rmacklem from comment #1)

Thanks. The -S flag to the server's mountd does seem to eliminate the NFS read failures in my initial testing.
Comment 3 Vick Khera 2016-02-04 16:02:45 UTC
Also for the record, as per the discussion thread, the errors are induced by poudriere constantly mounting and unmounting ZFS file systems while it builds packages. Each change causes mountd to rescan the exports file, and NFS requests can fail during the rescan. The -S flag closes that window.
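A rough sketch of the failure mode described above (the pool and dataset names are made up, and this needs root plus an active NFS client to show any effect; do not run it on a production pool):

```
# Each share change rewrites the ZFS exports file and signals mountd
# to rescan its exports. Without -S, client RPCs that arrive during
# the rescan window can fail -- roughly what poudriere's constant
# mount/unmount churn does at scale.
while true; do
    zfs create -o sharenfs=on tank/scratch
    zfs destroy tank/scratch
done
```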
Comment 4 Rick Macklem freebsd_committer 2016-02-05 00:54:05 UTC
Since use of the "-S" option on mountd seems to resolve the
problem, I am closing this and marking it as a duplicate of 9619.

*** This bug has been marked as a duplicate of bug 9619 ***
Comment 5 commit-hook freebsd_committer 2016-05-08 20:10:39 UTC
A commit references this bug:

Author: rmacklem
Date: Sun May  8 20:10:23 UTC 2016
New revision: 299242
URL: https://svnweb.freebsd.org/changeset/base/299242

  Make "-S" a default option for mountd.

  After a discussion on freebsd-fs@ there seemed to be a consensus that
  the "-S" option for mountd should become the default.
  Since the only known issue w.r.t. using "-S" was fixed by r299201,
  this commit adds "-S" to the default mountd_flags.

  Discussed on:	freebsd-fs
  PR:		9619, 131342, 206855
  MFC after:	2 weeks
  Relnotes:	yes