Bug 221445 - The absence of the accept_rtadv option causes an error "ping6: sendmsg: No buffer space available"
Status: New
Alias: None
Product: Base System
Classification: Unclassified
Component: bin
Version: 11.1-STABLE
Hardware: Any Any
Importance: --- Affects Only Me
Assignee: freebsd-net mailing list
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2017-08-12 14:55 UTC by Vladislav V. Prodan
Modified: 2020-02-28 02:43 UTC (History)
1 user

See Also:


Description Vladislav V. Prodan 2017-08-12 14:55:29 UTC
I have a server in Kimsufi (OVH)

Ping on ipv6 causes an error:
# ping6 -c 5 ya.ru
PING6(56=40+8+8 bytes) 2001:41d0:e:XXX::1 --> 2a02:6b8::2:242
ping6: sendmsg: No buffer space available
ping6: wrote ya.ru 16 chars, ret=-1
ping6: sendmsg: No buffer space available
ping6: wrote ya.ru 16 chars, ret=-1
ping6: sendmsg: No buffer space available
ping6: wrote ya.ru 16 chars, ret=-1
ping6: sendmsg: No buffer space available
ping6: wrote ya.ru 16 chars, ret=-1
ping6: sendmsg: No buffer space available
ping6: wrote ya.ru 16 chars, ret=-1

--- ya.ru ping6 statistics ---
5 packets transmitted, 0 packets received, 100.0% packet loss

# uname -a
FreeBSD tank1.storage.com 11.1-STABLE FreeBSD 11.1-STABLE #0 r322164: Mon Aug  7 15:33:19 UTC 2017     root@releng2.nyi.freebsd.org:/usr/obj/usr/src/sys/GENERIC  amd64

cat /etc/rc.conf:

...
ifconfig_em0="DHCP"

ipv6_default_interface="em0"
ifconfig_em0_ipv6="inet6 2001:41d0:e:XXX::1/128"

ipv6_static_routes="ipv6gw"
ipv6_route_ipv6gw="-host 2001:41d0:000e:03ff:ff:ff:ff:ff -iface em0"

ipv6_defaultrouter="2001:41d0:000e:03ff:ff:ff:ff:ff"
ipv6_activate_all_interfaces="YES"
...

# kldstat
Id Refs Address            Size     Name
 1   16 0xffffffff80200000 1f78608  kernel
 2    1 0xffffffff8217a000 31f830   zfs.ko
 3    2 0xffffffff8249a000 cba0     opensolaris.ko
 4    1 0xffffffff82621000 2357f    ipfw.ko
 5    1 0xffffffff82645000 9a67     if_bridge.ko
 6    1 0xffffffff8264f000 5e60     bridgestp.ko

# netstat -m
1024/2531/3555 mbufs in use (current/cache/total)
1023/1263/2286/250830 mbuf clusters in use (current/cache/total/max)
1023/1254 mbuf+clusters out of packet secondary zone in use (current/cache)
0/3/3/125415 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/37160 9k jumbo clusters in use (current/cache/total/max)
0/0/0/20902 16k jumbo clusters in use (current/cache/total/max)
2302K/3170K/5472K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for mbufs delayed (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters delayed (4k/9k/16k)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0 sendfile syscalls
0 sendfile syscalls completed without I/O request
0 requests for I/O initiated by sendfile
0 pages read by sendfile as part of a request
0 pages were valid at time of a sendfile request
0 pages were requested for read ahead by applications
0 pages were read ahead by sendfile
0 times sendfile encountered an already busy page
0 requests for sfbufs denied
0 requests for sfbufs delayed


# ifconfig em0
em0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=4219b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,TSO4,WOL_MAGIC,VLAN_HWTSO>
        ether 00:22:4d:ad:ff:dc
        hwaddr 00:22:4d:ad:ff:dc
        inet6 2001:41d0:e:XXX::1 prefixlen 128
        inet6 fe80::222:4dff:fead:ffdc%em0 prefixlen 64 scopeid 0x1
        inet 5.196.YY.ZZ netmask 0xffffff00 broadcast 5.196.YY.255
        nd6 options=8021<PERFORMNUD,AUTO_LINKLOCAL,DEFAULTIF>
        media: Ethernet autoselect (100baseTX <full-duplex>)
        status: active

After the command "ifconfig em0 inet6 accept_rtadv 2001:41d0:e:XXX::1/128", IPv6 connectivity appears and ping starts working.
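(A hedged sketch, not part of the original report: if this workaround is what you want, it could be persisted across reboots in /etc/rc.conf. The address is the masked value from the report; whether combining accept_rtadv with a static address is appropriate for this provider is an assumption.)

```shell
# Untested sketch: persistent form of the accept_rtadv workaround.
# 2001:41d0:e:XXX::1/128 is the masked address from this report.
ifconfig_em0_ipv6="inet6 accept_rtadv 2001:41d0:e:XXX::1/128"
```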
Comment 1 Andrey V. Elsukov freebsd_committer 2017-08-12 18:36:40 UTC
ENOBUFS usually means that the ND layer cannot resolve the L2 address. And that is because you configured your IPv6 incorrectly.
Comment 2 Vladislav V. Prodan 2017-08-12 19:24:18 UTC
(In reply to Andrey V. Elsukov from comment #1)

Read the description of the bug again.
I described how I managed to get IPv6 working.
Comment 3 Andrey V. Elsukov freebsd_committer 2017-08-12 19:48:59 UTC
(In reply to Vladislav V. Prodan from comment #2)
> Read the description of the bug again.
> I described how I managed to get the ipv6 to work.

It is wrong, and this is why it doesn't work.

> ifconfig_em0_ipv6="inet6 2001:41d0:e:XXX::1/128"

What is the reason for the /128 prefix length?

> ipv6_static_routes="ipv6gw"
> ipv6_route_ipv6gw="-host 2001:41d0:000e:03ff:ff:ff:ff:ff -iface em0"

This will not work on FreeBSD due to implementation specifics.

> ipv6_defaultrouter="2001:41d0:000e:03ff:ff:ff:ff:ff"

When the kernel is going to send an IPv6 packet, it does an L2 lookup to determine the L2 address. The ND6 code performs that lookup only when the destination address is considered a neighbor. Since you have not configured a correct prefix, ND6 has no interface on which the destination address can be considered a neighbor. In your case even the L2 address of the default router cannot be resolved.
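(A hedged sketch of what this explanation implies for rc.conf: give the interface a prefix wide enough that the default router falls inside it, so ND6 treats the router as a neighbor. The /56 length is taken from the reporter's own routing table later in this thread; verify it against the provider's actual allocation before using.)

```shell
# Untested sketch: a prefix that covers the default router, so ND6
# can consider the router a neighbor and resolve its L2 address.
# /56 is assumed from the on-link route 2001:41d0:e:XXX::/56.
ifconfig_em0_ipv6="inet6 2001:41d0:e:XXX::1/56"
ipv6_defaultrouter="2001:41d0:000e:03ff:ff:ff:ff:ff"
```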
Comment 4 Vladislav V. Prodan 2017-08-12 20:19:19 UTC
(In reply to Andrey V. Elsukov from comment #3)

>> Read the description of the bug again.
>> I described how I managed to get the ipv6 to work.

>It is wrong, and this is why it doesn't work.

It is rather that whoever implemented IPv6 in FreeBSD did not get it quite right.
I can give a working example from Debian, which needs the same crutch: a host route to the default router.

>> ifconfig_em0_ipv6="inet6 2001:41d0:e:XXX::1/128"

>What is the reason of /128 prefix length?

That is what the marketers at OVH decided. I have already pointed out to them that there are a number of vulnerabilities related to IPv6 address spoofing.
Personally, I only need a single IPv6 address.

>> ipv6_static_routes="ipv6gw"
>> ipv6_route_ipv6gw="-host 2001:41d0:000e:03ff:ff:ff:ff:ff -iface em0"

>This will not work on FreeBSD due to implementation specificity. 

Why not? It works the same.

# netstat -rn6
Routing tables

Internet6:
Destination                       Gateway                       Flags     Netif Expire
::/96                             ::1                           UGRS        lo0
default                           fe80::12bd:18ff:fee5:ff80%em0 UG          em0
::1                               link#2                        UH          lo0
::ffff:0.0.0.0/96                 ::1                           UGRS        lo0
2001:41d0:e:XXX::/56              link#1                        U           em0
2001:41d0:e:XXX::1                link#1                        UHS         lo0
...
2001:41d0:e:3ff:ff:ff:ff:ff       00:22:4d:ad:ff:dc             UHS         em0
...
fe80::/10                         ::1                           UGRS        lo0
fe80::%em0/64                     link#1                        U           em0
fe80::222:4dff:fead:ffdc%em0      link#1                        UHS         lo0
fe80::%lo0/64                     link#2                        U           lo0
fe80::1%lo0                       link#2                        UHS         lo0
ff02::/16                         ::1                           UGRS        lo0


>When the kernel is going to send IPv6 packet, it does L2 lookup to determine L2 address. ND6 code does lookup for destination address only when an address is considered as neighbor. When you have not configured the correct prefix, the ND6 has not any interfaces where the destination address can be considered as neighbor. In your case even the L2 address of default router can not be found.

Honestly, I do not care how ND6 is implemented in FreeBSD.
I have been issued a static IPv6 address with a given mask.
I have a default IPv6 router address.
So what if it is not in the L2 segment? I know which network interface this IPv6 address lives on; from there, the hoster's switches and routers will deal with the packets.

I have a network scheme that is typical of many hosters' dedicated servers.
I have a test machine on which I can work for a while to help correct this abnormal behavior.
If you have the time and the desire to fix this, I will provide access to the server in private correspondence.
Comment 5 Vladislav V. Prodan 2020-02-28 02:37:53 UTC
A similar problem.

Hetzner data center.

The server is booted into rescue mode with FreeBSD 12.0 amd64.

[root@rescue ~]# uname -a
FreeBSD core.domain.com 12.0-RELEASE FreeBSD 12.0-RELEASE r341666 GENERIC  amd64

# Enable IPv6
sysctl net.inet6.ip6.accept_rtadv=1
ifconfig igb0 inet6 2a01:4f8:241:XXXX::2/64
route -6 add default -iface igb0

# ping6 -c 4 ya.ru
PING6(56=40+8+8 bytes) 2a01:4f8:241:XXXX::2 --> 2a02:6b8::2:242
ping6: sendmsg: No buffer space available
ping6: wrote ya.ru 16 chars, ret=-1
ping6: sendmsg: No buffer space available
ping6: wrote ya.ru 16 chars, ret=-1
ping6: sendmsg: No buffer space available
ping6: wrote ya.ru 16 chars, ret=-1
ping6: sendmsg: No buffer space available
ping6: wrote ya.ru 16 chars, ret=-1

--- ya.ru ping6 statistics ---
4 packets transmitted, 0 packets received, 100.0% packet loss



[root@rescue ~]# ifconfig
igb0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=e507bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,VLAN_HWFILTER,VLAN_HWTSO,RXCSUM_IPV6,TXCSUM_IPV6>
        ether 00:1e:67:b4:f9:e7
        inet 116.202.XXX.XXX netmask 0xffffffc0 broadcast 116.202.XXX.XXX
        inet6 2a01:4f8:241:XXXX::2 prefixlen 64
        inet6 fe80::21e:67ff:feb4:f9e7%igb0 prefixlen 64 scopeid 0x1
        media: Ethernet autoselect (1000baseT <full-duplex>)
        status: active
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
        options=680003<RXCSUM,TXCSUM,LINKSTATE,RXCSUM_IPV6,TXCSUM_IPV6>
        inet6 ::1 prefixlen 128
        inet6 fe80::1%lo0 prefixlen 64 scopeid 0x2
        inet 127.0.0.1 netmask 0xff000000
        groups: lo
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>

[root@rescue ~]# netstat -rn6
Routing tables

Internet6:
Destination                       Gateway                       Flags     Netif Expire
::/96                             ::1                           UGRS        lo0
default                           00:1e:67:b4:f9:e7             US         igb0
::1                               link#2                        UH          lo0
::ffff:0.0.0.0/96                 ::1                           UGRS        lo0
2a01:4f8:241:XXXX::/64            link#1                        U          igb0
2a01:4f8:241:XXXX::2              link#1                        UHS         lo0
fe80::/10                         ::1                           UGRS        lo0
fe80::%igb0/64                    link#1                        U          igb0
fe80::21e:67ff:feb4:f9e7%igb0     link#1                        UHS         lo0
fe80::%lo0/64                     link#2                        U           lo0
fe80::1%lo0                       link#2                        UHS         lo0
ff02::/16                         ::1                           UGRS        lo0
Comment 6 Vladislav V. Prodan 2020-02-28 02:38:52 UTC
(In reply to Vladislav V. Prodan from comment #5)

[root@rescue ~]# kldstat
Id Refs Address                Size Name
 1   14 0xffffffff80200000  243cd00 kernel
 2    1 0xffffffff89e21000     81f0 tmpfs.ko
 3    1 0xffffffff89e2a000   247e20 zfs.ko
 4    1 0xffffffff8a072000     7628 opensolaris.ko
 5    1 0xffffffff8a07a000     1fd4 geom_nop.ko

[root@rescue ~]# netstat -m
4196/4669/8865 mbufs in use (current/cache/total)
4103/2671/6774/4000000 mbuf clusters in use (current/cache/total/max)
7/2017 mbuf+clusters out of packet secondary zone in use (current/cache)
0/17/17/1004726 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/297696 9k jumbo clusters in use (current/cache/total/max)
0/0/0/167454 16k jumbo clusters in use (current/cache/total/max)
9255K/6577K/15832K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for mbufs delayed (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters delayed (4k/9k/16k)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0 sendfile syscalls
0 sendfile syscalls completed without I/O request
0 requests for I/O initiated by sendfile
0 pages read by sendfile as part of a request
0 pages were valid at time of a sendfile request
0 pages were valid and substituted to bogus page
0 pages were requested for read ahead by applications
0 pages were read ahead by sendfile
0 times sendfile encountered an already busy page
0 requests for sfbufs denied
0 requests for sfbufs delayed
Comment 7 Vladislav V. Prodan 2020-02-28 02:43:15 UTC
(In reply to Vladislav V. Prodan from comment #6)


My apologies; the IPv6 gateway simply had to be assigned correctly.

route -6 change default fe80::1%igb0
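(For completeness, a hedged rc.conf sketch that would persist this fix across reboots. The address is the masked value from comment #5; fe80::1 as the link-local gateway is Hetzner's convention and is assumed here, not confirmed by the thread.)

```shell
# Untested sketch: persistent form of the fix from comment #7.
# fe80::1%igb0 is assumed to be the provider's link-local gateway.
ifconfig_igb0_ipv6="inet6 2a01:4f8:241:XXXX::2/64"
ipv6_defaultrouter="fe80::1%igb0"
```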