(This PR is probably related to kern/84953)

Hi! I'm having a problem with rpc.lockd. Clients are mainly FreeBSD 5.4-STABLE; the server is FreeBSD 4.11-RELEASE-p13. /home is NFS-mounted via amd, and rpc.statd and rpc.lockd are enabled on both the clients and the server.

Since we started using Eclipse (which requires file locking to operate), the open privileged UDP ports on the server become exhausted. Running sockstat | grep rpc.lock shows over 400 lines like this:

root rpc.lock 174 440 udp4 *:580 *:*

The default configuration on FreeBSD 4.11 seems to leave a range of about 400 ports. I've widened this by lowering net.inet.ip.portrange.lowlast from 600 to 500, but eventually those will all be in use too. I have not found a good way to free the ports apart from rebooting the server.

All NFS traffic is UDP-only; perhaps that is a problem? We run nfsiod -n 4 on all clients and nfs_server_flags="-u -n 12" on the server; perhaps that is too low? There are only six workstations attached to the server. Each has about ten rpc.lockd sockets open, while the server has about 400, so it looks like a leak on the server side.

See also kern/84953.
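For reference, the leak can be quantified without rebooting by counting rpc.lockd's open UDP sockets. The pipeline below is a sketch; it is fed a small canned sample of sockstat output (hypothetical PIDs and ports) so it can be demonstrated without a live FreeBSD 4.11 server — on the affected machine you would pipe sockstat straight into the awk stage:

```shell
# Canned sample of `sockstat | grep rpc.lock` output (hypothetical values),
# standing in for the live command on the affected server.
sample='root rpc.lock 174 440 udp4 *:580 *:*
root rpc.lock 174 441 udp4 *:581 *:*
root rpc.lock 174 442 udp4 *:582 *:*'

# Count the open udp4 sockets; field 5 of sockstat output is the protocol.
# On the affected server this count was roughly 400.
printf '%s\n' "$sample" | awk '$5 == "udp4" { n++ } END { print n }'
# → 3
```

On a live system the equivalent would be `sockstat | awk '$2 == "rpc.lock" && $5 == "udp4"' | wc -l`, and the low port range in play can be inspected with `sysctl net.inet.ip.portrange.lowfirst net.inet.ip.portrange.lowlast`.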
Responsible Changed From-To: freebsd-bugs->dfr

I'm re-writing the NLM.
dfr 2008-03-26 15:23:13 UTC

  FreeBSD src repository

  Modified files:
    lib/libc/gen          lockf.c
    lib/libc/sys          Symbol.map fcntl.2
    sys/compat/freebsd32  syscalls.master
    sys/compat/linux      linux_file.c
    sys/compat/svr4       svr4_fcntl.c
    sys/conf              NOTES files options
    sys/contrib/opensolaris/uts/common/fs/zfs  zfs_vnops.c
    sys/fs/msdosfs        msdosfs_vnops.c
    sys/fs/tmpfs          tmpfs_vnops.c
    sys/i386/ibcs2        ibcs2_fcntl.c
    sys/kern              kern_descrip.c kern_lockf.c syscalls.master vnode_if.src
    sys/nfs4client        nfs4_vnops.c
    sys/nfsclient         nfs_lock.c nfs_vnops.c
    sys/rpc               types.h
    sys/sys               fcntl.h lockf.h
    sys/ufs/ufs           ufs_vnops.c
    usr.sbin              Makefile
    usr.sbin/rpc.lockd    lockd.c rpc.lockd.8
  Added files:
    sys/nlm               nlm.h nlm_prot.h nlm_prot_clnt.c nlm_prot_impl.c
                          nlm_prot_server.c nlm_prot_svc.c nlm_prot_xdr.c
                          sm_inter.h sm_inter_xdr.c
    sys/rpc               auth.h auth_none.c auth_unix.c authunix_prot.c
                          clnt.h clnt_dg.c clnt_rc.c clnt_stat.h clnt_vc.c
                          getnetconfig.c inet_ntop.c inet_pton.c netconfig.h
                          nettype.h pmap_prot.h rpc.h rpc_callmsg.c rpc_com.h
                          rpc_generic.c rpc_msg.h rpc_prot.c rpcb_clnt.c
                          rpcb_clnt.h rpcb_prot.c rpcb_prot.h svc.c svc.h
                          svc_auth.c svc_auth.h svc_auth_unix.c svc_dg.c
                          svc_generic.c svc_vc.c xdr.h
    sys/xdr               xdr.c xdr_array.c xdr_mbuf.c xdr_mem.c
                          xdr_reference.c xdr_sizeof.c
    usr.sbin/clear_locks  Makefile clear_locks.8 clear_locks.c
  Log:
  Add the new kernel-mode NFS Lock Manager. To use it instead of the
  user-mode lock manager, build a kernel with the NFSLOCKD option and
  add '-k' to 'rpc_lockd_flags' in rc.conf.

  Highlights include:

  * Thread-safe kernel RPC client - many threads can use the same RPC
    client handle safely, with replies being de-multiplexed at the socket
    upcall (typically driven directly by the NIC interrupt) and handed off
    to whichever thread matches the reply. For UDP sockets, many RPC
    clients can share the same socket. This allows the use of a single
    privileged UDP port number to talk to an arbitrary number of remote
    hosts.

  * Single-threaded kernel RPC server.
  Adding support for a multi-threaded server would be relatively
    straightforward and would follow approximately the Solaris KPI. A
    single thread should be sufficient for the NLM since it should rarely
    block in normal operation.

  * Kernel-mode NLM server supporting cancel requests and granted
    callbacks. I've tested the NLM server reasonably extensively - it
    passes both my own tests and the NFS Connectathon locking tests
    running on Solaris, Mac OS X and Ubuntu Linux.

  * Userland NLM client supported. While the NLM server doesn't have
    support for the local NFS client's locking needs, it does have to
    field async replies and granted callbacks from remote NLMs that the
    local client has contacted. We relay these replies to the userland
    rpc.lockd over a local domain RPC socket.

  * Robust deadlock detection for the local lock manager. In particular,
    it will detect deadlocks caused by a lock request that covers more
    than one blocking request. As required by the NLM protocol, all
    deadlock detection happens synchronously - a user is guaranteed that
    if a lock request isn't rejected immediately, the lock will eventually
    be granted. The old system allowed for a 'deferred deadlock' condition
    where a blocked lock request could wake up and find that some other
    deadlock-causing lock owner had beaten it to the lock.

  * Since both local and remote locks are managed by the same kernel
    locking code, local and remote processes can safely use file locks for
    mutual exclusion. Local processes have no fairness advantage over
    remote processes when contending to lock a region that has just been
    unlocked - the local lock manager enforces a strict first-come,
    first-served model for both local and remote lockers.
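Concretely, the switch described at the top of the log comes down to two settings. The sketch below combines them; the NFSLOCKD option and the '-k' flag are taken from the log itself, while rpc_lockd_enable is the standard rc.conf knob for starting rpc.lockd:

```
# Kernel configuration file: enable the kernel-mode NFS Lock Manager
options NFSLOCKD

# /etc/rc.conf: start rpc.lockd and tell it to use the kernel NLM
# instead of the old user-mode lock manager
rpc_lockd_enable="YES"
rpc_lockd_flags="-k"
```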
  Sponsored by:  Isilon Systems
  PR:            95247 107555 115524 116679
  MFC after:     2 weeks

  Revision  Changes       Path
  1.9       +1 -1         src/lib/libc/gen/lockf.c
  1.12      +1 -0         src/lib/libc/sys/Symbol.map
  1.47      +7 -2         src/lib/libc/sys/fcntl.2
  1.99      +2 -1         src/sys/compat/freebsd32/syscalls.master
  1.110     +2 -0         src/sys/compat/linux/linux_file.c
  1.45      +2 -2         src/sys/compat/svr4/svr4_fcntl.c
  1.1477    +1 -0         src/sys/conf/NOTES
  1.1284    +32 -0        src/sys/conf/files
  1.622     +2 -0         src/sys/conf/options
  1.29      +20 -0        src/sys/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c
  1.184     +18 -0        src/sys/fs/msdosfs/msdosfs_vnops.c
  1.17      +15 -0        src/sys/fs/tmpfs/tmpfs_vnops.c
  1.29      +2 -1         src/sys/i386/ibcs2/ibcs2_fcntl.c
  1.329     +66 -7        src/sys/kern/kern_descrip.c
  1.60      +1894 -471    src/sys/kern/kern_lockf.c
  1.241     +2 -1         src/sys/kern/syscalls.master
  1.91      +13 -0        src/sys/kern/vnode_if.src
  1.43      +18 -0        src/sys/nfs4client/nfs4_vnops.c
  1.46      +1 -0         src/sys/nfsclient/nfs_lock.c
  1.283     +23 -0        src/sys/nfsclient/nfs_vnops.c
  1.1       +119 -0       src/sys/nlm/nlm.h (new)
  1.1       +448 -0       src/sys/nlm/nlm_prot.h (new)
  1.1       +372 -0       src/sys/nlm/nlm_prot_clnt.c (new)
  1.1       +1783 -0      src/sys/nlm/nlm_prot_impl.c (new)
  1.1       +762 -0       src/sys/nlm/nlm_prot_server.c (new)
  1.1       +509 -0       src/sys/nlm/nlm_prot_svc.c (new)
  1.1       +454 -0       src/sys/nlm/nlm_prot_xdr.c (new)
  1.1       +112 -0       src/sys/nlm/sm_inter.h (new)
  1.1       +107 -0       src/sys/nlm/sm_inter_xdr.c (new)
  1.1       +361 -0       src/sys/rpc/auth.h (new)
  1.1       +148 -0       src/sys/rpc/auth_none.c (new)
  1.1       +299 -0       src/sys/rpc/auth_unix.c (new)
  1.1       +122 -0       src/sys/rpc/authunix_prot.c (new)
  1.1       +620 -0       src/sys/rpc/clnt.h (new)
  1.1       +865 -0       src/sys/rpc/clnt_dg.c (new)
  1.1       +307 -0       src/sys/rpc/clnt_rc.c (new)
  1.1       +83 -0        src/sys/rpc/clnt_stat.h (new)
  1.1       +827 -0       src/sys/rpc/clnt_vc.c (new)
  1.1       +138 -0       src/sys/rpc/getnetconfig.c (new)
  1.1       +187 -0       src/sys/rpc/inet_ntop.c (new)
  1.1       +224 -0       src/sys/rpc/inet_pton.c (new)
  1.1       +99 -0        src/sys/rpc/netconfig.h (new)
  1.1       +68 -0        src/sys/rpc/nettype.h (new)
  1.1       +107 -0       src/sys/rpc/pmap_prot.h (new)
  1.1       +125 -0       src/sys/rpc/rpc.h (new)
  1.1       +200 -0       src/sys/rpc/rpc_callmsg.c (new)
  1.1       +126 -0       src/sys/rpc/rpc_com.h (new)
  1.1       +716 -0       src/sys/rpc/rpc_generic.c (new)
  1.1       +214 -0       src/sys/rpc/rpc_msg.h (new)
  1.1       +348 -0       src/sys/rpc/rpc_prot.c (new)
  1.1       +1382 -0      src/sys/rpc/rpcb_clnt.c (new)
  1.1       +89 -0        src/sys/rpc/rpcb_clnt.h (new)
  1.1       +244 -0       src/sys/rpc/rpcb_prot.c (new)
  1.1       +579 -0       src/sys/rpc/rpcb_prot.h (new)
  1.1       +574 -0       src/sys/rpc/svc.c (new)
  1.1       +614 -0       src/sys/rpc/svc.h (new)
  1.1       +133 -0       src/sys/rpc/svc_auth.c (new)
  1.1       +67 -0        src/sys/rpc/svc_auth.h (new)
  1.1       +144 -0       src/sys/rpc/svc_auth_unix.c (new)
  1.1       +334 -0       src/sys/rpc/svc_dg.c (new)
  1.1       +407 -0       src/sys/rpc/svc_generic.c (new)
  1.1       +746 -0       src/sys/rpc/svc_vc.c (new)
  1.13      +17 -7        src/sys/rpc/types.h
  1.1       +368 -0       src/sys/rpc/xdr.h (new)
  1.19      +23 -3        src/sys/sys/fcntl.h
  1.21      +70 -18       src/sys/sys/lockf.h
  1.296     +21 -0        src/sys/ufs/ufs/ufs_vnops.c
  1.1       +816 -0       src/sys/xdr/xdr.c (new)
  1.1       +155 -0       src/sys/xdr/xdr_array.c (new)
  1.1       +238 -0       src/sys/xdr/xdr_mbuf.c (new)
  1.1       +232 -0       src/sys/xdr/xdr_mem.c (new)
  1.1       +135 -0       src/sys/xdr/xdr_reference.c (new)
  1.1       +162 -0       src/sys/xdr/xdr_sizeof.c (new)
  1.383     +1 -0         src/usr.sbin/Makefile
  1.1       +8 -0         src/usr.sbin/clear_locks/Makefile (new)
  1.1       +51 -0        src/usr.sbin/clear_locks/clear_locks.8 (new)
  1.1       +70 -0        src/usr.sbin/clear_locks/clear_locks.c (new)
  1.23      +251 -17      src/usr.sbin/rpc.lockd/lockd.c
  1.19      +6 -0         src/usr.sbin/rpc.lockd/rpc.lockd.8
State Changed From-To: open->closed

Fixed in the new NFSLOCKD code, which is available in:

  RELENG_6 after __FreeBSD_version 603102
  RELENG_7 after __FreeBSD_version 700103
  CURRENT  after __FreeBSD_version 800028

To use the new NFS locking server, either add the NFSLOCKD kernel option or load the nfslockd.ko and krpc.ko kernel modules.
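For the module route, a sketch of the usual steps (module names are from the note above; the <module>_load convention in loader.conf is the standard mechanism for loading modules at boot):

```
# /boot/loader.conf: load the kernel RPC and kernel NLM modules at boot
krpc_load="YES"
nfslockd_load="YES"
```

On a running system the same modules can be loaded with `kldload nfslockd` (which should pull in krpc as a dependency) and verified with `kldstat`.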