| Summary: | [nis] [patch] VmWare port + NIS causes "broadcast storm" | | |
|---|---|---|---|
| Product: | Base System | Reporter: | Per Hedeland <per> |
| Component: | kern | Assignee: | Marcelo Araujo <araujo> |
| Status: | Closed Unable to Reproduce | | |
| Severity: | Affects Only Me | CC: | araujo |
| Priority: | Normal | | |
| Version: | 4.5-RELEASE | | |
| Hardware: | Any | | |
| OS: | Any | | |
| Attachments: | | | |
State Changed From-To: open->feedback

Thanks for the bug report and patches. Item 3 has been committed to -stable now (the problem was not present in -current). Your patch for problem 2 (if_tap.c) is probably not necessary, and other network drivers can get into similar states too, so keeping the changes in userland is better. May I close this report now? Maybe you would like to suggest a patch for ypbind to make it sleep for a while before repeating the clnt_broadcast operation if it fails?

I replied directly to iedowse@FreeBSD.org on this issue, but apparently that didn't "take", so here's the gist of that response:

> Your patch for problem 2 (if_tap.c) is probably not necessary, and
> other network drivers can get into similar states too, so keeping the
> changes in userland is better.

Yes, thinking a bit more about it, I agree - and the kernel patch is really a rather broken way to deal with it, since it causes information loss, even if the information is pretty useless in this case.

> May I close this report now?

Sure.

> Maybe you would like to suggest a patch for ypbind to make it sleep
> for a while before repeating the clnt_broadcast operation if it fails?

Well, I looked at it a bit, and doing it "right" is rather more work than I'm prepared to do right now, since that really would need to be tested too. I.e. the main process should wait before forking off a new broadcaster, but I believe it shouldn't block during that wait - so it would need to note the failure (which would need to be returned from rpc_received() via handle_children()), use a shorter select() timeout, and then do the retry when that timeout expires, keeping in mind that other requests may arrive in the meantime, so the timeout should really be re-calculated based on gettimeofday() etc... Below is a "stupid" but almost certainly "safe" patch - still untested though.
--Per

```diff
--- /usr/src/usr.sbin/ypbind/ypbind.c.ORIG	Sat Jul  7 09:30:51 2001
+++ /usr/src/usr.sbin/ypbind/ypbind.c	Sat Jul  6 03:47:33 2002
@@ -596,6 +596,13 @@
 	struct timeval timeout;
 	fd_set fds;
 
+	if (addr->sin_addr.s_addr == (long)0) {
+		/* Wait a bit before telling parent about failure, since it
+		   will retry immediately - the wait should really be before
+		   that retry in the parent, but this is simpler... */
+		sleep(2);
+	}
+
 	timeout.tv_sec = 5;
 	timeout.tv_usec = 0;
```

State Changed From-To: feedback->open

Feedback was requested and received. I can personally confirm that ypbind will still flood the network continuously with no backoff if the NIS server goes down, on a June 1st, 2003 -STABLE.

Responsible Changed From-To: freebsd-bugs->ceri

by request

Responsible Changed From-To: ceri->freebsd-bugs

I'm not really in this project anymore.

To submitter: is this aging PR still relevant?

I'm taking this PR.

I'm unable to reproduce this issue anymore.
If running the vmware2 port in bridging mode on a system that is a NIS client (without specific server(s) specified to ypbind), and the NIS server is unavailable for some length of time while no vmware host is running, the system will start sending UDP broadcasts to port 111 at an extremely high rate. Observed with a Pentium-III 800MHz on 100Mb Ethernet: 200 broadcast packets/sec sent - which in turn cause 200 response packets/sec, which in turn cause 200 ICMP port unreachable packets/sec from the FreeBSD system, since nothing there is listening for the responses - in total 600 packets/sec.

Fix: The above is actually caused by the interaction of a series of problems:

1) When bridging is chosen at installation of the vmware2 port, the vmnet1 interface is still configured with a "dummy" IP address of its own (192.168.0.1), netmask (255.255.255.0), and corresponding broadcast address (192.168.0.255). As far as I understand, a set of bridged interfaces should have at most one IP address total among them.

2) If packets are sent to *any* address in the "vmnet1 net" besides the configured one (192.168.0.1) when no vmware host is running, sendto() (or whatever) will soon return ENOBUFS, since the "send queue" has filled up. (Needless to say, nothing will ever really receive such packets - but they seem to "disappear" if a vmware host is running.)

3) The RPC broadcast function (clnt_broadcast() in /usr/src/lib/libc/rpc/pmap_rmt.c) gives up sending immediately (returning RPC_CANTSEND), without even waiting for responses, if sending to any one of the broadcast-capable interfaces fails for whatever reason.

4) Ypbind, when getting any error back from clnt_broadcast(), retries immediately, without any delay or backoff strategy.

So, in this scenario, ypbind calls clnt_broadcast(), which sends a packet out the physical interface, then a packet on the vmnet1 interface, gets ENOBUFS and gives up, and ypbind starts the process over again, ad infinitum.
The "storm" can be prevented by fixing any one of the problems 1)-4); a real fix (allowing ypbind to succeed) requires a fix to one of 1)-3). Ideally all should be fixed, of course.

I worked around problem 1) in FreeBSD 4.2-RELEASE and another version of the vmware2 port by modifying the vmware config and startup scripts to simply not configure an IP address on vmnet1 when bridging is used - however this does not work in current versions (vmware complains about not being able to get the interface address), at least not the trivial way I did it.

Instead I now looked at problem 2), and came up with the first patch below. It seems reasonable to me, solves problem 2), and doesn't seem to affect the "normal" traffic to the vmware host in any way - but I guess I could be missing something... I've also enclosed what seems to me to be a reasonable fix for problem 3), however this is totally untested. I haven't looked for a fix for problem 4), but someone probably should...

Patch for problem 2):

Patch for problem 3):

How-To-Repeat: See description. Killing the ypserv process on the (only) NIS server will cause the problem to appear after some time (not measured); it can be more quickly reproduced by killing and restarting ypbind on the FreeBSD system while the NIS server is down.