Bug 145189 - [nfs] nfsd performs abysmally under load
Summary: [nfs] nfsd performs abysmally under load
Status: Closed Overcome By Events
Alias: None
Product: Base System
Classification: Unclassified
Component: kern
Version: Unspecified
Hardware: Any Any
Importance: Normal Affects Only Me
Assignee: Bugmeister
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2010-03-30 06:10 UTC by Rich Ercolani
Modified: 2025-01-28 12:19 UTC
0 users

See Also:


Attachments

Description Rich Ercolani 2010-03-30 06:10:02 UTC
nfsd performs abysmally on this machine under conditions in which Solaris's NFS implementation is reasonably fast, and while local IO to the same filesystems is still zippy.

For instance, copying a 4GB file over NFSv3 from a ZFS filesystem with the following flags [rw,nosuid,hard,intr,nofsc,tcp,vers=3,rsize=8192,wsize=8192,sloppy,addr=X.X.X.X](Linux client, the above is the server), I achieve 2 MB/s, fluctuating between 1 and 3. (pv reports 2.23 MB/s avg)

Locally, on the server, I achieve 110-140 MB/s (at the end of pv, it reports 123 MB/s avg).

I'd assume network latency, but nc with no flags other than port achieves 30-50 MB/s between server and client.
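
For reference, a sketch of how the numbers above can be reproduced from the Linux client; the server name, export, and file path are placeholders:

  # Linux client: mount with the same options and time a large sequential read
  mount -t nfs -o vers=3,tcp,rsize=8192,wsize=8192 server:/tank/home /mnt/nfs
  pv /mnt/nfs/bigfile > /dev/null   # pv shows current and average throughput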

Latency is also abysmal - ls on a randomly chosen homedir full of files, according to time, takes:
real    0m15.634s
user    0m0.012s
sys     0m0.097s
while on the local machine:
real	0m0.266s
user	0m0.007s
sys	0m0.000s

The server in question is a 3GHz Core 2 Duo, running FreeBSD RELENG_8. The kernel conf, DTRACE_POLL, is just the stock AMD64 kernel with all of the DTRACE-related options turned on, as well as the option to enable polling in the NIC drivers, since we were wondering if that would improve our performance.

We tested this with a UFS directory as well, because we were curious if this was an NFS/ZFS interaction - we still got 1-2 MB/s read speed and horrible latency while achieving fast throughput and latency local to the server, so we're reasonably certain it's not "just" ZFS, if there is indeed any interaction there.

Read speed of a randomly generated 6500 MB file on UFS over NFSv3 with the same flags as above: 1-3 MB/s, averaging 2.11 MB/s
Read speed of the same file, local to the server: consistently between 40-60 MB/s, averaging 61.8 MB/s [it got faster over time - presumably UFS was aggressively caching the file, or something?]
Read speed of the same file over NFS again, after the local test:
Amusingly, worse (768 KB/s-2.2 MB/s, with random stalls - average reported 270 KB/s(!)).

How-To-Repeat: 1) Mount multiple NFS filesystems from the server
2) Watch as your operations latency and throughput rapidly sink to near-zero
Comment 1 Bruce Evans freebsd_committer freebsd_triage 2010-03-30 16:50:16 UTC
On Tue, 30 Mar 2010, Rich Ercolani wrote:

>> Description:
> nfsd performs abysmally on this machine under conditions in which Solaris's NFS implementation is reasonably fast, and while local IO to the same filesystems is still zippy.

Please don't format lines for 200+ column terminals.

Does it work better when limited to 1 thread (nfsd -n 1)?  In at least
some versions of it (or maybe in nfsiod), multiple threads fight each other
under load.
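
One way to try that on the server (rc.conf knob as in a stock install; the restart method may vary):

  # /etc/rc.conf: run a single nfsd thread (TCP and UDP) for the test
  nfs_server_flags="-u -t -n 1"
  # then restart the NFS server
  /etc/rc.d/nfsd restart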

> For instance, copying a 4GB file over NFSv3 from a ZFS filesystem with the following flags [rw,nosuid,hard,intr,nofsc,tcp,vers=3,rsize=8192,wsize=8192,sloppy,addr=X.X.X.X](Linux client, the above is the server), I achieve 2 MB/s, fluctuating between 1 and 3. (pv reports 2.23 MB/s avg)
>
> Locally, on the server, I achieve 110-140 MB/s (at the end of pv, it reports 123 MB/s avg).
>
> I'd assume network latency, but nc with no flags other than port achieves 30-50 MB/s between server and client.
>
> Latency is also abysmal - ls on a randomly chosen homedir full of files, according to time, takes:
> real    0m15.634s
> user    0m0.012s
> sys     0m0.097s
> while on the local machine:
> real	0m0.266s
> user	0m0.007s
> sys	0m0.000s

It probably is latency.  nfs is very latency-sensitive when there are lots
of small files.  Transfers of large files shouldn't be affected so much.

> The server in question is a 3GHz Core 2 Duo, running FreeBSD RELENG_8. The kernel conf, DTRACE_POLL, is just the stock AMD64 kernel with all of the DTRACE-related options turned on, as well as the option to enable polling in the NIC drivers, since we were wondering if that would improve our performance.

Enabling polling is a good way to destroy latency.  A ping latency of
more than about 50uS causes noticeable loss of performance for nfs, but
LAN latency is usually a few times higher than that, and polling without
increasing the clock interrupt frequency to an excessively high value
gives a latency at least 20 times higher than that.  Also, -current
with debugging options is so bloated that even localhost has a ping
latency of about 50uS on a Core2 (up from 2uS for FreeBSD-4 on an
AthlonXP).  Anyway try nfs on localhost to see if reducing the latency
helps.
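
A minimal localhost check, assuming the filesystem is already exported to the local host and the paths are placeholders:

  # mount the export from the same machine, bypassing the NIC entirely
  mount -t nfs localhost:/tank/home /mnt/selftest
  # compare a large sequential read here against the remote-client numbers
  pv /mnt/selftest/bigfile > /dev/null
  # and check the round-trip latency being discussed
  ping -c 10 localhost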

> We tested this with a UFS directory as well, because we were curious if this was an NFS/ZFS interaction - we still got 1-2 MB/s read speed and horrible latency while achieving fast throughput and latency local to the server, so we're reasonably certain it's not "just" ZFS, if there is indeed any interaction there.

After various tuning and bug fixing (now partly committed by others) I get
improvements like the following on low-end systems with ffs (I don't use
zfs):
- very low end with 100Mbps ethernet: little change; bulk transfers always
   went at near wire speed (about 10 MB/S)
- low end with 1Gbps/S: bulk transfers up from 20MB/S to 45MB/S (local ffs
   50MB/S).  buildworld over nfs of 5.2 world down from 1200 seconds to 800
   seconds (this one is very latency-sensitive.  Takes about 750 seconds on
   local ffs).

> Read speed of a randomly generated 6500 MB file on UFS over NFSv3 with the same flags as above: 1-3 MB/s, averaging 2.11 MB/s
> Read speed of the same file, local to the server: consistently between 40-60 MB/s, averaging 61.8 MB/s [it got faster over time - presumably UFS was aggressively caching the file, or something?]

You should use a file size larger than the size of main memory to prevent
caching, especially for reads.  That is 1GB on my low-end systems.
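
For example, a test file comfortably larger than the server's RAM could be generated like this (path is a placeholder):

  # ~8GB of incompressible data, larger than the 6GB of RAM on this server
  dd if=/dev/urandom of=/tank/home/testfile bs=1m count=8192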

> Read speed of the same file over NFS again, after the local test:
> Amusingly, worse (768 KB/s-2.2 MB/s, with random stalls - average reported 270 KB/s(!)).

The random stalls are typical of the problem with the nfsd's getting
in each other's way, and/or of related problems.  The stalls that I
saw were very easy to see in real time using "netstat -I <interface>
1" -- they happened every few seconds and lasted a second or 2.  But
they were never long enough to reduce the throughput by more than a
factor of 3, so I always got over 19 MB/S.  The throughput was reduced
by approximately the ratio of stalled time to non-stalled time.
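
For example (interface name is a placeholder):

  # in one terminal on the server: per-second packet/byte counts for the NIC
  netstat -I em0 1
  # in another terminal, run the NFS copy and watch for seconds-long gaps
  # where the counts drop to near zero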

>> How-To-Repeat:
> 1) Mount multiple NFS filesystems from the server
> 2) Watch as your operations latency and throughput rapidly sink to near-zero

Multiple active nfs mounts are probably a different problem.  You certainly
need more than 1 nfsd and/or nfsiod to handle them, and the stalls might
be a result of not having enough daemons.
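
A sketch of what raising the daemon counts might look like on the FreeBSD side (the counts are arbitrary examples):

  # /etc/rc.conf: more nfsd server threads for several busy mounts
  nfs_server_flags="-u -t -n 8"
  # and more client-side I/O daemons, as used elsewhere in this thread
  nfsiod -n 4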

Bruce
Comment 2 Rich Ercolani 2010-03-30 17:29:37 UTC
On Tue, Mar 30, 2010 at 11:50 AM, Bruce Evans <brde@optusnet.com.au> wrote:
> Does it work better when limited to 1 thread (nfsd -n 1)?  In at least
> some versions of it (or maybe in nfsiod), multiple threads fight each other
> under load.

It doesn't seem to - nfsd -n 1 still ranges between 1-3 MB/s for files
larger than RAM on server or client (6 and 4 GB, respectively).

>> For instance, copying a 4GB file over NFSv3 from a ZFS filesystem with the
>> following flags
>> [rw,nosuid,hard,intr,nofsc,tcp,vers=3,rsize=8192,wsize=8192,sloppy,addr=X.X.X.X](Linux
>> client, the above is the server), I achieve 2 MB/s, fluctuating between 1
>> and 3. (pv reports 2.23 MB/s avg)
>>
>> Locally, on the server, I achieve 110-140 MB/s (at the end of pv, it
>> reports 123 MB/s avg).
>>
>> I'd assume network latency, but nc with no flags other than port achieves
>> 30-50 MB/s between server and client.
>>
>> Latency is also abysmal - ls on a randomly chosen homedir full of files,
>> according to time, takes:
>> real    0m15.634s
>> user    0m0.012s
>> sys     0m0.097s
>> while on the local machine:
>> real    0m0.266s
>> user    0m0.007s
>> sys     0m0.000s
>
> It probably is latency.  nfs is very latency-sensitive when there are lots
> of small files.  Transfers of large files shouldn't be affected so much.

Sure, and next on my TODO is to look into whether 9.0-CURRENT makes
certain ZFS high-latency things perform better.

>> The server in question is a 3GHz Core 2 Duo, running FreeBSD RELENG_8. The
>> kernel conf, DTRACE_POLL, is just the stock AMD64 kernel with all of the
>> DTRACE-related options turned on, as well as the option to enable polling in
>> the NIC drivers, since we were wondering if that would improve our
>> performance.
>
> Enabling polling is a good way to destroy latency.  A ping latency of
> more than about 50uS causes noticeable loss of performance for nfs, but
> LAN latency is usually a few times higher than that, and polling without
> increasing the clock interrupt frequency to an excessively high value
> gives a latency at least 20 times higher than that.  Also, -current
> with debugging options is so bloated that even localhost has a ping
> latency of about 50uS on a Core2 (up from 2uS for FreeBSD-4 on an
> AthlonXP).  Anyway try nfs on localhost to see if reducing the latency
> helps.

Actually, we noticed that throughput appeared to get marginally better while
causing occasional bursts of crushing latency, but yes, we have it on in the
kernel without using it in any actual NICs at present. :)

But yes, I'm getting 40-90+ MB/s, occasionally slowing to 20-30 MB/s,
average after copying a 6.5 GB file of 52.7 MB/s, on localhost IPv4,
with no additional mount flags. {r,w}size=8192 on localhost goes up to
80-100 MB/s, with occasional sinks to 60 (average after copying
another, separate 6.5 GB file: 77.3 MB/s).

Also:
64 bytes from 127.0.0.1: icmp_seq=0 ttl=64 time=0.015 ms
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.012 ms
64 bytes from [actual IP]: icmp_seq=0 ttl=64 time=0.019 ms
64 bytes from [actual IP]: icmp_seq=1 ttl=64 time=0.015 ms

>> We tested this with a UFS directory as well, because we were curious if
>> this was an NFS/ZFS interaction - we still got 1-2 MB/s read speed and
>> horrible latency while achieving fast throughput and latency local to the
>> server, so we're reasonably certain it's not "just" ZFS, if there is indeed
>> any interaction there.
>
> After various tuning and bug fixing (now partly committed by others) I get
> improvements like the following on low-end systems with ffs (I don't use
> zfs):
> - very low end with 100Mbps ethernet: little change; bulk transfers always
>   went at near wire speed (about 10 MB/S)
> - low end with 1Gbps/S: bulk transfers up from 20MB/S to 45MB/S (local ffs
>   50MB/S).  buildworld over nfs of 5.2 world down from 1200 seconds to 800
>   seconds (this one is very latency-sensitive.  Takes about 750 seconds on
>   local ffs).

Is this on 9.0-CURRENT, or RELENG_8, or something else?

>> Read speed of a randomly generated 6500 MB file on UFS over NFSv3 with the
>> same flags as above: 1-3 MB/s, averaging 2.11 MB/s
>> Read speed of the same file, local to the server: consistently between
>> 40-60 MB/s, averaging 61.8 MB/s [it got faster over time - presumably UFS
>> was aggressively caching the file, or something?]
>
> You should use a file size larger than the size of main memory to prevent
> caching, especially for reads.  That is 1GB on my low-end systems.

I didn't mention the server's RAM, explicitly, but it has 6 GB of real
RAM, and the files used were 6.5-7 GB each in that case (I did use a
4GB file earlier - I've avoided doing that again here).

>> Read speed of the same file over NFS again, after the local test:
>> Amusingly, worse (768 KB/s-2.2 MB/s, with random stalls - average reported
>> 270 KB/s(!)).
>
> The random stalls are typical of the problem with the nfsd's getting
> in each other's way, and/or of related problems.  The stalls that I
> saw were very easy to see in real time using "netstat -I <interface>
> 1" -- they happened every few seconds and lasted a second or 2.  But
> they were never long enough to reduce the throughput by more than a
> factor of 3, so I always got over 19 MB/S.  The throughput was reduced
> by approximately the ratio of stalled time to non-stalled time.

I believe it. I'm seeing at least partially similar behavior here - the
performance drops I mentioned, where the transfer briefly pauses and then
picks up again, happen in the localhost case even with nfsd -n 1 and
nfsiod -n 1.

- Rich
Comment 3 Bruce Evans freebsd_committer freebsd_triage 2010-03-30 21:11:32 UTC
On Tue, 30 Mar 2010, Rich wrote:

> On Tue, Mar 30, 2010 at 11:50 AM, Bruce Evans <brde@optusnet.com.au> wrote:


>>> For instance, copying a 4GB file over NFSv3 from a ZFS filesystem with the
>>> following flags
>>> [rw,nosuid,hard,intr,nofsc,tcp,vers=3,rsize=8192,wsize=8192,sloppy,addr=X.X.X.X](Linux
>>> client, the above is the server), I achieve 2 MB/s, fluctuating between 1
>>> and 3. (pv reports 2.23 MB/s avg)


I also tried various nfs r/w sizes and tcp/udp.  The best sizes are
probably the fs block size or twice that (normally 16K for ffs).  Old
versions of FreeBSD had even more bugs in this area and gave surprising
performance differences depending on the nfs r/w sizes or application
i/o sizes.  In some cases smaller sizes worked best, apparently because
they avoided the stalls.
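
For example, retrying with r/w sizes matching a 16K fs block size on the Linux client (names are placeholders):

  mount -t nfs -o vers=3,tcp,rsize=16384,wsize=16384 server:/tank/home /mnt/nfs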

>>> ...
>> Enabling polling is a good way to destroy latency.  A ping latency of
>> ...


> Actually, we noticed that throughput appeared to get marginally better while
> causing occasional bursts of crushing latency, but yes, we have it on in the
> kernel without using it in any actual NICs at present. :)
>
> But yes, I'm getting 40-90+ MB/s, occasionally slowing to 20-30 MB/s,
> average after copying a 6.5 GB file of 52.7 MB/s, on localhost IPv4,
> with no additional mount flags. {r,w}size=8192 on localhost goes up to
> 80-100 MB/s, with occasional sinks to 60 (average after copying
> another, separate 6.5 GB file: 77.3 MB/s).


I thought you said you often got 1-3MB/S.

> Also:
> 64 bytes from 127.0.0.1: icmp_seq=0 ttl=64 time=0.015 ms
> 64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms
> 64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.012 ms


Fairly normal slowness for -current.

> 64 bytes from [actual IP]: icmp_seq=0 ttl=64 time=0.019 ms
> 64 bytes from [actual IP]: icmp_seq=1 ttl=64 time=0.015 ms


Are these with external hardware NICs?  Then 15 uS is excellent.  Better
than I've ever seen.  Very good hardware might be able to do this, but
I suspect it is for the local machine.  BTW, I don't like the times
being reported in ms and sub-uS times not being supported.  I sometimes
run Linux or cygwin ping and don't like it not supporting sub-ms times,
so that it always reports 0 for my average latency of 100 uS.

>> After various tuning and bug fixing (now partly committed by others) I get
>> improvements like the following on low-end systems with ffs (I don't use
>> zfs):
>> - very low end with 100Mbps ethernet: little change; bulk transfers always
>>  went at near wire speed (about 10 MB/S)
>> - low end with 1Gbps/S: bulk transfers up from 20MB/S to 45MB/S (local ffs
>>  50MB/S).  buildworld over nfs of 5.2 world down from 1200 seconds to 800
>>  seconds (this one is very latency-sensitive.  Takes about 750 seconds on
>>  local ffs).
>
> Is this on 9.0-CURRENT, or RELENG_8, or something else?


Mostly with 7-CURRENT or 8-CURRENT a couple of years ago.  Sometimes with
a ~5.2-SERVER.  nfs didn't vary much with the server, except there were
surprising differences due to latency that I never tracked down.

I forgot to mention another thing you can try easily:

- negative name caching.  Improves latency.  I used this to reduce makeworld
   times significantly, and it is now standard in -current but not
   enabled by default.
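
The exact knob depends on the FreeBSD version; as an assumption, with the newer NFS client this is exposed as a mount option (check mount_nfs(8) on the release in question):

  # hypothetical example: cache negative name lookups on the client for 60s
  # (option name taken from later mount_nfs(8); may not exist on this release)
  mount -t nfs -o nfsv3,negnametimeo=60 server:/tank/home /mnt/nfs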

Bruce
Comment 4 Garrett Cooper 2010-03-30 21:44:05 UTC
On Tue, Mar 30, 2010 at 1:11 PM, Bruce Evans <brde@optusnet.com.au> wrote:
> [...]
>
> I forgot to mention another thing you can try easily:
>
> - negative name caching.  Improves latency.  I used this to reduce makeworld
>   times significantly, and it is now standard in -current but not
>   enabled by default.

    Have you also tried tuning via sysctl (vfs.nfs* ?)
Thanks,
-Garrett
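
For reference, the NFS-related knobs can be listed on the server with (exact OID names vary between the old and new NFS code and across versions):

  sysctl -a | grep -i nfs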
Comment 5 Mark Linimon freebsd_committer freebsd_triage 2010-04-05 02:13:00 UTC
Responsible Changed
From-To: freebsd-bugs->freebsd-fs

reclassify.
Comment 6 Eitan Adler freebsd_committer freebsd_triage 2017-12-31 07:59:36 UTC
For bugs matching the following criteria:

Status: In Progress Changed: (is less than) 2014-06-01

Reset to default assignee and clear in-progress tags.

Mail being skipped
Comment 7 Mark Linimon freebsd_committer freebsd_triage 2025-01-28 12:19:19 UTC
^Triage: I'm sorry that this PR did not get addressed in a timely fashion.

By now, the version that it was created against is long out of support.
Please re-open if it is still a problem on a supported version.