I have two freshly installed 13.1-RELEASE VMs running in bhyve:

fs (NFS server):          fc (NFS client):
  admin : uid 1001          admin : uid 1001
  u1    : uid 1002          u1    : uid 1003
  u2    : uid 1003          u2    : uid 1002

I intentionally crossed the uids of u1 and u2 on the two machines to test nfsuserd.

Settings on the server
======================

admin@fs:~ # id u1
uid=1002(u1) gid=1002(u1) groups=1002(u1)
admin@fs:~ # id u2
uid=1003(u2) gid=1003(u2) groups=1003(u2)
admin@fs:~ $ hostname -f
fs.me.local
admin@fs:~ $ ls -la /zroot/nfsv4/test/
total 3
drwxr-xr-x  2 root   wheel  5 Jun  7 17:46 .
drwxr-xr-x  3 root   wheel  3 Jun  7 17:46 ..
-rw-r--r--  1 admin  wheel  0 Jun  7 17:46 hallo_admin
-rw-r--r--  1 u1     u1     0 Jun  7 17:46 hallo_u1
-rw-r--r--  1 u2     u2     0 Jun  7 17:46 hallo_u2

admin@fs:~ $ cat /etc/rc.conf
hostname="fs.me.local"
keymap="de.kbd"
ifconfig_vtnet0="DHCP"
ifconfig_vtnet0_ipv6="inet6 accept_rtadv"
sshd_enable="YES"
# Set dumpdev to "AUTO" to enable crash dumps, "NO" to disable
dumpdev="NO"
zfs_enable="YES"
nfs_server_enable="YES"
nfsv4_server_enable="YES"
nfs_client_enable="YES"
nfsuserd_enable="YES"
nfsuserd_flags="-domain me.local -verbose"
nfscbd_enable="YES"
mountd_enable="YES"
mountd_flags="-r"
hostid_enable="YES"

admin@fs:~ $ cat /etc/exports
/zroot/nfsv4/test -alldirs -network 192.168.160.0/24
V4: /zroot/nfsv4 -sec=sys -network 192.168.160.0/24

/etc/sysctl.conf:
vfs.nfs.enable_uidtostring=1
vfs.nfsd.enable_stringtouid=0

Settings on the client
======================

root@fc:~ # id u1
uid=1003(u1) gid=1003(u1) groups=1003(u1)
root@fc:~ # id u2
uid=1002(u2) gid=1002(u2) groups=1002(u2)
root@fc:~ # hostname -f
fc.me.local
root@fc:~ # mount -t nfs -o nfsv4 192.168.160.66:/test /mnt
root@fc:~ # nfsstat -m
192.168.160.66:/test on /mnt
nfsv4,minorversion=2,tcp,resvport,nconnect=1,hard,cto,sec=sys,acdirmin=3,acdirmax=60,acregmin=5,acregmax=60,nametimeo=60,negnametimeo=60,rsize=65536,wsize=65536,readdirsize=65536,readahead=1,wcommitsize=8388608,timeout=120,retrans=2147483647
root@fc:~ # ls -la /mnt/
total 11
drwxr-xr-x   2 root   wheel   5 Jun  7 17:46 .
drwxr-xr-x  19 root   wheel  25 Jun  7 17:44 ..
-rw-r--r--   1 admin  wheel   0 Jun  7 17:46 hallo_admin
-rw-r--r--   1 u2     u2      0 Jun  7 17:46 hallo_u1
-rw-r--r--   1 u1     u1      0 Jun  7 17:46 hallo_u2

root@fc:~ # cat /etc/rc.conf
hostname="fc.me.local"
keymap="de.kbd"
ifconfig_vtnet0="DHCP"
ifconfig_vtnet0_ipv6="inet6 accept_rtadv"
sshd_enable="YES"
# Set dumpdev to "AUTO" to enable crash dumps, "NO" to disable
dumpdev="NO"
zfs_enable="YES"
nfs_client_enable="YES"
nfsuserd_enable="YES"
nfsuserd_flags="-domain me.local -verbose"
nfscbd_enable="YES"
rpc_lockd_enable="YES"
rpc_statd_enable="YES"
mountd_flags="-r"
hostid_enable="YES"

/etc/sysctl.conf:
vfs.nfs.enable_uidtostring=1
vfs.nfsd.enable_stringtouid=0

Test
=================================

On the client I mounted with: mount -t nfs -o nfsv4 fs:/test /mnt

root@fc:/mnt # ls -la
total 11
drwxr-xr-x   2 root   wheel   5 Jun  7 17:46 .
drwxr-xr-x  19 root   wheel  25 Jun  8 05:50 ..
-rw-r--r--   1 admin  wheel   0 Jun  7 17:46 hallo_admin
-rw-r--r--   1 u1     u1      0 Jun  7 17:46 hallo_u1
-rw-r--r--   1 u2     u2      0 Jun  7 17:46 hallo_u2

This looks good! But here is the problem: on the server, inside /zroot/nfsv4/test, I created a new subfolder "sub" with u+w,g+w,o+w. In this folder I then try to create a new file as u1 on the client:

u1@fc:~ $ touch /mnt/sub/hello_feom_client_u1
u1@fc:~ $ ls -la /mnt/sub/
total 2
drwxr-xrwx  2 u2  u2  3 Jun  8 18:04 .
drwxrwxr-x  3 u2  u2  6 Jun  8 06:07 ..
-rw-r--r--  1 u1  u2  0 Jun  8 18:04 hello_feom_client_u1
u1@fc:~ $ ls -ln /mnt/sub/
total 1
-rw-r--r--  1 1003  1002  0 Jun  8 18:04 hello_feom_client_u1

And the server shows:

admin@fs:~ $ sudo ls -la /zroot/nfsv4/test/sub/
total 3
drwxr-xrwx  2 u1  u1  3 Jun  8 18:04 .
drwxrwxr-x  3 u1  u1  6 Jun  8 06:07 ..
-rw-r--r--  1 u2  u1  0 Jun  8 18:04 hello_feom_client_u1
admin@fs:~ $ sudo ls -ln /zroot/nfsv4/test/sub/
total 1
-rw-r--r--  1 1003  1002  0 Jun  8 18:04 hello_feom_client_u1

Please note the mismatched uids! I touched the file as u1 on the client, yet the server shows the new file as owned by u2.

I do not run NIS, LDAP, or Kerberos. The mount uses NFSv4.2; I tried downgrading to 4.1 and 4.0, but the behaviour stays the same. I set the two sysctl variables explicitly so that names, not uids, are carried in the NFS protocol; that is, I explicitly use nfsuserd and do not want to rely on identical uids across all clients and servers.

I read this post from Rick: https://forums.freebsd.org/threads/nfsv4-without-kerberos.71899/#post-436567 or, better, the mailing-list copy: https://www.mail-archive.com/freebsd-stable@freebsd.org/msg139428.html

So my question is: is this a bug, or is my configuration wrong?
Just FYI: there is a parallel discussion (in German) on bsdforen.de: https://www.bsdforen.de/threads/nfsv4-usermapping.36543/
A few comments...

- If you are using nfsuserd, both sysctls should be 0 on the server:

  vfs.nfs.enable_uidtostring=0
  vfs.nfsd.enable_stringtouid=0

  On the client, vfs.nfsd.enable_stringtouid is not used.

- Since you are using AUTH_SYS (sec=sys), the credentials in the RPC request header are numeric uids. That is the "user" doing the create and, therefore, that uid is going to be the owner. nfsuserd (or "numbers in user/group strings") only affects the Owner and OwnerGroup entries in Getattr/Setattr; it does not affect the user credentials in the RPC request header. (The only time there are no numeric uids in the RPC request's credentials is when Kerberized mounts are being used. In that case the credential refers to a Kerberos principal, which is normally "user@REALM".)

- "man nfsv4" states:

    Although uid/gid numbers are no longer used in the NFSv4 protocol
    except optionally in the above strings, they will still be in the
    RPC authentication fields when using AUTH_SYS (sec=sys), which is
    the default.  As such, in this case both the user/group name and
    number spaces must be consistent between the client and server.

  To do otherwise simply breaks things, as you have demonstrated.
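To make the distinction concrete, here is a small illustrative Python sketch (not FreeBSD code; the function names are my own) that XDR-encodes an AUTH_SYS credential body as defined in RFC 5531, Appendix A. It shows that under sec=sys the raw numeric uid/gid are embedded in every RPC header, independent of whatever name strings nfsuserd maps into the Owner/OwnerGroup attributes:

```python
# Hypothetical sketch: XDR-encode an AUTH_SYS (flavor 1) credential
# body per RFC 5531, Appendix A, to show where the numeric uid lives.
import struct

def xdr_u32(n):
    """Encode an unsigned 32-bit integer, big-endian, per XDR."""
    return struct.pack(">I", n)

def xdr_string(s):
    """Encode a string: length prefix, bytes, padding to a 4-byte boundary."""
    data = s.encode()
    pad = (4 - len(data) % 4) % 4
    return xdr_u32(len(data)) + data + b"\x00" * pad

def authsys_cred(stamp, machinename, uid, gid, gids):
    """Build the opaque body of an AUTH_SYS credential:
    stamp, machinename, uid, gid, auxiliary gids."""
    body = xdr_u32(stamp) + xdr_string(machinename)
    body += xdr_u32(uid) + xdr_u32(gid)
    body += xdr_u32(len(gids)) + b"".join(xdr_u32(g) for g in gids)
    return body

# u1 on the client fc has uid 1003, so every RPC it sends carries the
# number 1003 in the credential, no matter what name string nfsuserd
# would translate for the file's Owner attribute.
cred = authsys_cred(0, "fc.me.local", 1003, 1003, [1003])
print(cred.hex())
```

The server trusts that raw number, which is why the file created by u1 (uid 1003 on the client) lands on the server owned by uid 1003, i.e. the account named u2 there.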
With both sysctl variables set to 0 the result stays the same.

root@fc:~ # sysctl -a | grep nfs | grep uid
vfs.nfs.enable_uidtostring: 0
vfs.nfsd.enable_stringtouid: 0

admin@fs:~ $ sysctl -a | grep nfs | grep uid
vfs.nfs.enable_uidtostring: 0
vfs.nfsd.enable_stringtouid: 0

My goal was to map the differing uids/gids of different machines, which share the same usernames and the same domain, with nfsuserd alone, without the help of NIS, LDAP, or Kerberos. So may I ask:

a) nfsuserd + Kerberos5 would achieve the correct mapping of the uids/gids.
b) nfsuserd without NIS/LDAP/Kerberos cannot do that.

Is this right?
> So may I ask:
> a) nfsuserd + Kerberos5 would achieve the correct mapping of the uids/gids.
> b) nfsuserd without NIS/LDAP/Kerberos cannot do that.
> Is this right?

Yes. If you do not want a uniform uid/gid space (i.e. the same numbers assigned to the same names across all clients and the server), the only way it can work is with Kerberized mounts. For Kerberized mounts, uids/gids never go on the wire. You still need a uniform user/group name space, which must also be consistent with the Kerberos user principal names (for the user names).
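For completeness, a Kerberized NFSv4 mount is requested with one of the sec=krb5* options of mount_nfs(8). A sketch using the hostnames from this thread; it assumes a working KDC, host/user keytabs, and gssd(8) running on both machines:

```
# /etc/fstab on the client fc (illustrative; requires a configured KDC and gssd)
# Device             Mountpoint  FStype  Options            Dump  Pass
fs.me.local:/test    /mnt        nfs     nfsv4,sec=krb5,rw  0     0
```

sec=krb5 provides authentication only; sec=krb5i adds integrity protection and sec=krb5p adds privacy (encryption) on top of that.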
Oh, and if you choose to use Kerberized NFS mounts, the setup can be non-trivial. Hopefully this is helpful: https://people.freebsd.org/~rmacklem/nfs-krb5-setup.txt (It does not cover use of a Windows ADC KDC at this time, which is a whole different story.)
The described behaviour is what is expected, so I do not think there is anything to fix. Closing this was suggested by the reporter.