Bug 233683 - IPv6 ND neighbor solicitation messages fail to arrive
Summary: IPv6 ND neighbor solicitation messages fail to arrive
Status: Closed FIXED
Alias: None
Product: Base System
Classification: Unclassified
Component: kern
Version: 12.0-RELEASE
Hardware: amd64 Any
Importance: --- Affects Only Me
Assignee: freebsd-net (Nobody)
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2018-12-01 11:38 UTC by Philip Homburg
Modified: 2024-09-03 16:45 UTC (History)
9 users

See Also:


Attachments
Possible fix (1.50 KB, patch)
2023-07-19 14:07 UTC, Kristof Provost
no flags
Test case (1.66 KB, patch)
2023-07-19 15:10 UTC, Kristof Provost
no flags

Description Philip Homburg 2018-12-01 11:38:26 UTC
On 12.0-RC2 (and earlier), after a while IPv6 ND neighbor solicitation messages fail to arrive. I suspect a bug in multicast filtering, but I have no way to verify this.

The result is a loss of IPv6 connectivity.
Comment 1 Bjoern A. Zeeb freebsd_committer freebsd_triage 2018-12-01 18:37:11 UTC
How do you determine they fail to arrive?

Do you have a trace from other hosts?  No filters in the network?

In these cases, which NIC/driver are you using?  What happens if you turn on PROMISC/MONITOR mode on the interface?
Comment 2 Philip Homburg 2018-12-01 18:46:43 UTC
I have a setup with a tp-link running OpenWrt as router, a small switch behind the wired Ethernet ports of the tp-link, and then a few FreeBSD hosts.

One recently built ryzen2 system has an re0 interface. This system works perfectly fine under 11.2 and fails under 12.0 betas and RC1/RC2. With boot environments I can easily switch between the two versions.

When I reported this bug, I had just upgraded a Thinkpad X201 (em0 interface) to RC2.
The Thinkpad didn't see any neighbor solicitations, even in tcpdump. At the same time, they were visible in tcpdump on the tp-link.

A ping6 from the ryzen2 system (running 11.2) to the Thinkpad also failed.
Comment 3 Paul Webster 2022-02-26 19:13:23 UTC
I appear to be suffering from the same problem.

I have a /64 assigned to my host (which works perfectly fine) as well as three bridges:


bridge102: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        ether 58:9c:fc:10:ff:e2
        inet 192.168.102.1 netmask 0xffffff00 broadcast 192.168.102.255
        inet6 fe80::5a9c:fcff:fe10:ffe2%bridge102 prefixlen 64 scopeid 0x3
        inet6 2a01:4f8:190:1183::102:1 prefixlen 64
        id 00:00:00:00:00:00 priority 32768 hellotime 2 fwddelay 15
        maxage 20 holdcnt 6 proto rstp maxaddr 2000 timeout 1200
        root id 00:00:00:00:00:00 priority 32768 ifcost 0 port 0
        groups: bridge
        nd6 options=63<PERFORMNUD,ACCEPT_RTADV,AUTO_LINKLOCAL,NO_RADR>
bridge103: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        ether 58:9c:fc:10:ff:f5
        inet 192.168.103.1 netmask 0xffffff00 broadcast 192.168.103.255
        inet6 fe80::5a9c:fcff:fe10:fff5%bridge103 prefixlen 64 scopeid 0x4
        inet6 2a01:4f8:190:1183::103:1 prefixlen 64
        id 00:00:00:00:00:00 priority 32768 hellotime 2 fwddelay 15
        maxage 20 holdcnt 6 proto rstp maxaddr 2000 timeout 1200
        root id 00:00:00:00:00:00 priority 32768 ifcost 0 port 0
        member: tap1033 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 9 priority 128 path cost 2000000
        member: tap1032 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 8 priority 128 path cost 2000000
        member: tap1031 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 11 priority 128 path cost 2000000
        member: tap1030 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 10 priority 128 path cost 2000000
        groups: bridge
        nd6 options=63<PERFORMNUD,ACCEPT_RTADV,AUTO_LINKLOCAL,NO_RADR>
bridge104: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        ether 58:9c:fc:00:05:61
        inet 192.168.104.1 netmask 0xffffff00 broadcast 192.168.104.255
        inet6 fe80::5a9c:fcff:fe00:561%bridge104 prefixlen 64 scopeid 0x5
        inet6 2a01:4f8:190:1183::104:1 prefixlen 64
        id 00:00:00:00:00:00 priority 32768 hellotime 2 fwddelay 15
        maxage 20 holdcnt 6 proto rstp maxaddr 2000 timeout 1200
        root id 00:00:00:00:00:00 priority 32768 ifcost 0 port 0
        groups: bridge
        nd6 options=63<PERFORMNUD,ACCEPT_RTADV,AUTO_LINKLOCAL,NO_RADR>


Any VM attached (in this case) to bridge103:
        inet6 2a01:4f8:190:1183::103:1 prefixlen 64

cannot even ping '2a01:4f8:190:1183::103:1', given these commands in the VM:
  ifconfig vtnet0 inet6 2a01:4f8:190:1183::103:2 prefixlen 64

It should be able to ping 2a01:4f8:190:1183::103:1 regardless, however.
  root@sitehost:/var # ping6 2a01:4f8:190:1183::103:1
  PING6(56=40+8+8 bytes) 2a01:4f8:190:1183::103:2 --> 2a01:4f8:190:1183::103:1

And the host:
root@de1:/usr/venv/bhyve/init # tcpdump -vvi bridge103 icmp6
tcpdump: listening on bridge103, link-type EN10MB (Ethernet), capture size 262144 bytes
20:11:53.073960 IP6 (hlim 255, next-header ICMPv6 (58) payload length: 32) 2a01:4f8:190:1183::103:2 > ff02::1:ff03:1: [icmp6 sum ok] ICMP6, neighbor solicitation, length 32, who has 2a01:4f8:190:1183::103:1
          source link-address option (1), length 8 (1): 00:d3:4d:be:3f:ab
            0x0000:  00d3 4dbe 3fab
20:11:54.120586 IP6 (hlim 255, next-header ICMPv6 (58) payload length: 32) 2a01:4f8:190:1183::103:2 > ff02::1:ff03:1: [icmp6 sum ok] ICMP6, neighbor solicitation, length 32, who has 2a01:4f8:190:1183::103:1
          source link-address option (1), length 8 (1): 00:d3:4d:be:3f:ab
            0x0000:  00d3 4dbe 3fab
20:11:56.214303 IP6 (hlim 255, next-header ICMPv6 (58) payload length: 32) 2a01:4f8:190:1183::103:2 > ff02::1:ff03:1: [icmp6 sum ok] ICMP6, neighbor solicitation, length 32, who has 2a01:4f8:190:1183::103:1
          source link-address option (1), length 8 (1): 00:d3:4d:be:3f:ab
            0x0000:  00d3 4dbe 3fab


It is plain to see that the bridge device has no idea how to answer: it simply does not know who 2a01:4f8:190:1183::103:2 is, despite that address sitting behind a bridge member.
Comment 4 Paul Webster 2022-02-26 19:19:49 UTC
Just to further add to the previous post: IPv4 and all other bridging are operational. I have held back information I felt was irrelevant:

root@sitehost:/var # ping -c 3 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: icmp_seq=0 ttl=118 time=5.425 ms
64 bytes from 8.8.8.8: icmp_seq=1 ttl=118 time=5.222 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=118 time=5.242 ms


the rc.conf for the host is as such:

# Base services config
zfs_enable="YES"
sshd_enable="YES"
sendmail_enable="NONE"
syslogd_flags="-ss"

ntpd_enable="YES"
ntpd_sync_on_start="YES"
ntpd_config="/etc/ntp.conf"

# Jails
jail_enable="YES"
jail_list="poudriere webserver"

# Network related
hostname="de1.paulwebster.org"
gateway_enable="YES"
ipv6_gateway_enable="YES"

## IPv4, hetzner classic config
ifconfig_em0="inet 5.9.137.144/32"
ifconfig_em0_ipv6="inet6 2a01:4f8:190:1183::1:1/64 auto_linklocal"
gateway_if="em0"
gateway_ip="5.9.137.129"
static_routes="gateway default"
route_gateway="-host $gateway_ip -interface $gateway_if"
route_default="default $gateway_ip"

## IPv6, Meena attributed
ipv6_cpe_wanif="em0"
ipv6_defaultrouter="fe80::1%em0"
ipv6_activate_all_interfaces="YES"

## Virtual interfaces
cloned_interfaces="bridge102 bridge103 bridge104"
ifconfig_bridge102="inet 192.168.102.1/24"
ifconfig_bridge102_ipv6="inet6 2a01:4f8:190:1183::102:1/64 auto_linklocal accept_rtadv"
ifconfig_bridge103="inet 192.168.103.1/24"
ifconfig_bridge103_ipv6="inet6 2a01:4f8:190:1183::103:1/64 auto_linklocal accept_rtadv"
ifconfig_bridge104="inet 192.168.104.1/24"
ifconfig_bridge104_ipv6="inet6 2a01:4f8:190:1183::104:1/64 auto_linklocal accept_rtadv"


the rc.conf for the VM is as such:

hostname="sitehost"
ifconfig_vtnet0="DHCP"
local_unbound_enable="YES"
sshd_enable="YES"
ntpdate_enable="YES"
ntpd_enable="YES"
# Set dumpdev to "AUTO" to enable crash dumps, "NO" to disable
dumpdev="AUTO"
sendmail_enable="NONE"
nginx_enable="YES"

## IPv6, Meena attributed
ipv6_cpe_wanif="vtnet0"
ipv6_defaultrouter="fe80::1%vtnet0"
ipv6_activate_all_interfaces="YES"


the ifconfig of the host is as such:

root@de1:~ # ifconfig
em0: flags=8863<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=481249b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,LRO,WOL_MAGIC,VLAN_HWFILTER,NOMAP>
        ether 90:1b:0e:ab:a5:85
        inet 5.9.137.144 netmask 0xffffffff broadcast 5.9.137.144
        inet6 2a01:4f8:190:1183::1:1 prefixlen 64
        media: Ethernet autoselect (1000baseT <full-duplex>)
        status: active
        nd6 options=23<PERFORMNUD,ACCEPT_RTADV,AUTO_LINKLOCAL>
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
        options=680003<RXCSUM,TXCSUM,LINKSTATE,RXCSUM_IPV6,TXCSUM_IPV6>
        inet6 ::1 prefixlen 128
        inet6 fe80::1%lo0 prefixlen 64 scopeid 0x2
        inet 127.0.0.1 netmask 0xff000000
        groups: lo
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
bridge102: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        ether 58:9c:fc:10:ff:e2
        inet 192.168.102.1 netmask 0xffffff00 broadcast 192.168.102.255
        inet6 fe80::5a9c:fcff:fe10:ffe2%bridge102 prefixlen 64 scopeid 0x3
        inet6 2a01:4f8:190:1183::102:1 prefixlen 64
        id 00:00:00:00:00:00 priority 32768 hellotime 2 fwddelay 15
        maxage 20 holdcnt 6 proto rstp maxaddr 2000 timeout 1200
        root id 00:00:00:00:00:00 priority 32768 ifcost 0 port 0
        groups: bridge
        nd6 options=63<PERFORMNUD,ACCEPT_RTADV,AUTO_LINKLOCAL,NO_RADR>
bridge103: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        ether 58:9c:fc:10:ff:f5
        inet 192.168.103.1 netmask 0xffffff00 broadcast 192.168.103.255
        inet6 fe80::5a9c:fcff:fe10:fff5%bridge103 prefixlen 64 scopeid 0x4
        inet6 2a01:4f8:190:1183::103:1 prefixlen 64
        id 00:00:00:00:00:00 priority 32768 hellotime 2 fwddelay 15
        maxage 20 holdcnt 6 proto rstp maxaddr 2000 timeout 1200
        root id 00:00:00:00:00:00 priority 32768 ifcost 0 port 0
        member: tap1033 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 9 priority 128 path cost 2000000
        member: tap1032 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 8 priority 128 path cost 2000000
        member: tap1031 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 11 priority 128 path cost 2000000
        member: tap1030 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 10 priority 128 path cost 2000000
        groups: bridge
        nd6 options=63<PERFORMNUD,ACCEPT_RTADV,AUTO_LINKLOCAL,NO_RADR>
bridge104: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        ether 58:9c:fc:00:05:61
        inet 192.168.104.1 netmask 0xffffff00 broadcast 192.168.104.255
        inet6 fe80::5a9c:fcff:fe00:561%bridge104 prefixlen 64 scopeid 0x5
        inet6 2a01:4f8:190:1183::104:1 prefixlen 64
        id 00:00:00:00:00:00 priority 32768 hellotime 2 fwddelay 15
        maxage 20 holdcnt 6 proto rstp maxaddr 2000 timeout 1200
        root id 00:00:00:00:00:00 priority 32768 ifcost 0 port 0
        groups: bridge
        nd6 options=63<PERFORMNUD,ACCEPT_RTADV,AUTO_LINKLOCAL,NO_RADR>
Comment 5 Paul Webster 2022-02-26 19:24:26 UTC
I forgot sysctl.conf for the host:

root@de1:~ # cat /etc/sysctl.conf
# $FreeBSD$
#
#  This file is read when going to multi-user and its contents piped thru
#  ``sysctl'' to adjust kernel values.  ``man 5 sysctl.conf'' for details.
#

# Uncomment this to prevent users from seeing information about processes that
# are being run under another UID.
#security.bsd.see_other_uids=0
net.inet.ip.forwarding=1                   # Enable IP forwarding between interfaces
net.link.bridge.pfil_onlyip=0              # Only pass IP packets when pfil is enabled
net.link.bridge.pfil_bridge=0              # Packet filter on the bridge interface
net.link.bridge.pfil_member=0              # Packet filter on the member interface
security.bsd.unprivileged_read_msgbuf=0

# Random ip id's
net.inet.ip.random_id=1

# No idea something to do with ipv6
net.inet6.ip6.rfc6204w3=1
Comment 6 Paul Webster 2022-02-26 19:38:24 UTC
Also: FreeBSD de1.paulwebster.org 13.0-STABLE FreeBSD 13.0-STABLE #2 stable/13-n249446-6488ea00aba4: Sun Feb 13 07:54:21 CET 2022     root@de1.paulwebster.org:/usr/obj/usr/src/amd64.amd64/sys/GENERIC  amd64
Comment 7 Bjoern A. Zeeb freebsd_committer freebsd_triage 2022-02-27 00:21:17 UTC
(In reply to Paul Webster from comment #6)

What does

ping6 -n ff02::1%<ifname>
and
ping6 -n ff02::2%<ifname>

say?  Can you try this on each interface of every FreeBSD host and guest?

If any of them return anything (beyond the reply from the host itself, for the first one), please let us know which type of interface it is and where it sits in the hierarchy.

Also on FreeBSD, what do

ndp -rn
and
ndp -pn

say?  They may have historical information about expired entries.


That all said, bridges used to be special when it came to IPv6; I don't know if this was changed in more recent times.  kp@ might know.


That said, I also do not understand how your em0 and the three bridges (of which only one seems to have members) are connected to each other, given that you put them all on the same (external) subnet.
Comment 8 Paul Webster 2022-02-27 02:31:52 UTC
(In reply to Bjoern A. Zeeb from comment #7)
HOST:

# ifconfig | grep inet6
        inet6 2a01:4f8:190:1183::1:1 prefixlen 64
        inet6 ::1 prefixlen 128
        inet6 fe80::1%lo0 prefixlen 64 scopeid 0x2
        inet6 fe80::5a9c:fcff:fe10:ffe2%bridge102 prefixlen 64 scopeid 0x3
        inet6 2a01:4f8:190:1183::102:1 prefixlen 64
        inet6 fe80::5a9c:fcff:fe10:fff5%bridge103 prefixlen 64 scopeid 0x4
        inet6 2a01:4f8:190:1183::103:1 prefixlen 64
        inet6 fe80::5a9c:fcff:fe00:561%bridge104 prefixlen 64 scopeid 0x5
        inet6 2a01:4f8:190:1183::104:1 prefixlen 64
        inet6 fe80::5a9c:fcff:fe10:ffea%tap1031 prefixlen 64 scopeid 0xb
        inet6 fe80::921b:eff:feab:a585%tun50 prefixlen 64 scopeid 0x7

PRIMARY INTERFACE:
    
    # ping6 -c3 -n ff02::1%em0
    PING6(56=40+8+8 bytes) 2a01:4f8:190:1183::1:1 --> ff02::1%em0
    16 bytes from 2a01:4f8:190:1183::1:1, icmp_seq=0 hlim=64 time=0.089 ms
    16 bytes from fe80::1%em0, icmp_seq=0 hlim=64 time=2.475 ms(DUP!)
    16 bytes from 2a01:4f8:190:1183::1:1, icmp_seq=1 hlim=64 time=0.095 ms
    16 bytes from fe80::1%em0, icmp_seq=1 hlim=64 time=0.643 ms(DUP!)
    16 bytes from 2a01:4f8:190:1183::1:1, icmp_seq=2 hlim=64 time=0.029 ms


    # ping6 -c3 -n ff02::2%em0
    PING6(56=40+8+8 bytes) 2a01:4f8:190:1183::1:1 --> ff02::2%em0
    16 bytes from fe80::1%em0, icmp_seq=0 hlim=64 time=0.667 ms
    16 bytes from fe80::1%em0, icmp_seq=1 hlim=64 time=0.571 ms
    16 bytes from fe80::1%em0, icmp_seq=2 hlim=64 time=0.653 ms

    --- ff02::2%em0 ping6 statistics ---
    3 packets transmitted, 3 packets received, 0.0% packet loss
    round-trip min/avg/max/std-dev = 0.571/0.630/0.667/0.042 ms

Bridge sourced interfaces:

# ping6 -S2a01:4f8:190:1183::102:1 -c3 -n ff02::2%em0
    PING6(56=40+8+8 bytes) 2a01:4f8:190:1183::102:1 --> ff02::2%em0
    16 bytes from fe80::1%em0, icmp_seq=0 hlim=64 time=0.614 ms
    16 bytes from fe80::1%em0, icmp_seq=1 hlim=64 time=0.487 ms
    16 bytes from fe80::1%em0, icmp_seq=2 hlim=64 time=0.544 ms

    --- ff02::2%em0 ping6 statistics ---
    3 packets transmitted, 3 packets received, 0.0% packet loss
    round-trip min/avg/max/std-dev = 0.487/0.549/0.614/0.052 ms


# ping6 -S2a01:4f8:190:1183::103:1 -c3 -n ff02::2%em0
    PING6(56=40+8+8 bytes) 2a01:4f8:190:1183::103:1 --> ff02::2%em0
    16 bytes from fe80::1%em0, icmp_seq=0 hlim=64 time=0.644 ms
    16 bytes from fe80::1%em0, icmp_seq=1 hlim=64 time=1.802 ms
    16 bytes from fe80::1%em0, icmp_seq=2 hlim=64 time=0.565 ms

    --- ff02::2%em0 ping6 statistics ---
    3 packets transmitted, 3 packets received, 0.0% packet loss
    round-trip min/avg/max/std-dev = 0.565/1.004/1.802/0.565 ms



# ping6 -S2a01:4f8:190:1183::103:1 -c3 -n ff02::2%em0
    PING6(56=40+8+8 bytes) 2a01:4f8:190:1183::103:1 --> ff02::2%em0
    16 bytes from fe80::1%em0, icmp_seq=0 hlim=64 time=0.714 ms
    16 bytes from fe80::1%em0, icmp_seq=1 hlim=64 time=0.674 ms
    16 bytes from fe80::1%em0, icmp_seq=2 hlim=64 time=0.578 ms

    --- ff02::2%em0 ping6 statistics ---
    3 packets transmitted, 3 packets received, 0.0% packet loss
    round-trip min/avg/max/std-dev = 0.578/0.655/0.714/0.057 ms


# VM - only one shown; there are 4, but they all act the same and they are a mixture of Windows and BSD, so let's stick to BSD for now.


root@sitehost:/var # netstat -rn6
    Routing tables

    Internet6:
    Destination                       Gateway                       Flags     Netif Expire
    ::/96                             ::1                           UGRS        lo0
    default                           fe80::1%vtnet0                UGS      vtnet0
    ::1                               link#2                        UHS         lo0
    ::ffff:0.0.0.0/96                 ::1                           UGRS        lo0
    2a01:4f8:190:1183::/64            link#1                        U        vtnet0
    2a01:4f8:190:1183::103:2          link#1                        UHS         lo0
    fe80::/10                         ::1                           UGRS        lo0
    fe80::%vtnet0/64                  link#1                        U        vtnet0
    fe80::2d3:4dff:febe:3fab%vtnet0   link#1                        UHS         lo0
    fe80::%lo0/64                     link#2                        U           lo0
    fe80::1%lo0                       link#2                        UHS         lo0
    ff02::/16                         ::1                           UGRS        lo0
    root@sitehost:/var #


root@sitehost:/var # ping6 -c3 -n ff02::1%vtnet0
    PING6(56=40+8+8 bytes) fe80::2d3:4dff:febe:3fab%vtnet0 --> ff02::1%vtnet0
    16 bytes from fe80::2d3:4dff:febe:3fab%vtnet0, icmp_seq=0 hlim=64 time=0.306 ms
    16 bytes from fe80::5a9c:fcff:fe10:fff5%vtnet0, icmp_seq=0 hlim=64 time=3.772 ms(DUP!)
    16 bytes from fe80::5a9c:fcff:fe10:ffea%vtnet0, icmp_seq=0 hlim=64 time=6.453 ms(DUP!)
    16 bytes from fe80::2d3:4dff:febe:3faa%vtnet0, icmp_seq=0 hlim=64 time=7.261 ms(DUP!)
    16 bytes from fe80::2d3:4dff:febe:3faf%vtnet0, icmp_seq=0 hlim=64 time=8.102 ms(DUP!)
    16 bytes from fe80::2d3:4dff:febe:3fab%vtnet0, icmp_seq=1 hlim=64 time=0.259 ms
    16 bytes from fe80::5a9c:fcff:fe10:fff5%vtnet0, icmp_seq=1 hlim=64 time=1.167 ms(DUP!)
    16 bytes from fe80::5a9c:fcff:fe10:ffea%vtnet0, icmp_seq=1 hlim=64 time=4.865 ms(DUP!)
    16 bytes from fe80::2d3:4dff:febe:3faf%vtnet0, icmp_seq=1 hlim=64 time=8.328 ms(DUP!)
    16 bytes from fe80::2d3:4dff:febe:3faa%vtnet0, icmp_seq=1 hlim=64 time=10.908 ms(DUP!)
    16 bytes from fe80::2d3:4dff:febe:3fab%vtnet0, icmp_seq=2 hlim=64 time=0.449 ms

    --- ff02::1%vtnet0 ping6 statistics ---
    3 packets transmitted, 3 packets received, +8 duplicates, 0.0% packet loss
    round-trip min/avg/max/std-dev = 0.259/4.715/10.908/3.610 ms
    root@sitehost:/var #


root@sitehost:~ # ping6 -c3 -n ff02::2%vtnet0
    PING6(56=40+8+8 bytes) fe80::2d3:4dff:febe:3fab%vtnet0 --> ff02::2%vtnet0

    --- ff02::2%vtnet0 ping6 statistics ---
    3 packets transmitted, 0 packets received, 100.0% packet loss
Comment 9 Paul Webster 2022-02-27 19:04:53 UTC
(In reply to Bjoern A. Zeeb from comment #7)
paul.webster@de1:~ $ sudo su
root@de1:/usr/home/paul.webster # ndp -rn
root@de1:/usr/home/paul.webster # ndp -pn
fe80::%tun50/64 if=tun50
flags=LAO vltime=infinity, pltime=infinity, expire=Never, ref=1
  No advertising router
fe80::%tap1031/64 if=tap1031
flags=LAO vltime=infinity, pltime=infinity, expire=Never, ref=1
  No advertising router
2a01:4f8:190:1183::/64 if=em0
flags=L vltime=infinity, pltime=infinity, expire=Never, ref=1
  No advertising router
2a01:4f8:190:1183::/64 if=bridge104
flags=LO vltime=infinity, pltime=infinity, expire=Never, ref=1
  No advertising router
fe80::%bridge104/64 if=bridge104
flags=LAO vltime=infinity, pltime=infinity, expire=Never, ref=1
  No advertising router
2a01:4f8:190:1183::/64 if=bridge103
flags=L vltime=infinity, pltime=infinity, expire=Never, ref=1
  No advertising router
fe80::%bridge103/64 if=bridge103
flags=LAO vltime=infinity, pltime=infinity, expire=Never, ref=1
  No advertising router
2a01:4f8:190:1183::/64 if=bridge102
flags=L vltime=43200, pltime=43200, expired, ref=1
  No advertising router
fe80::%bridge102/64 if=bridge102
flags=LAO vltime=infinity, pltime=infinity, expire=Never, ref=1
  No advertising router
fe80::%lo0/64 if=lo0
flags=LAO vltime=infinity, pltime=infinity, expire=Never, ref=1
  No advertising router
root@de1:/usr/home/paul.webster #
Comment 10 Kristof Provost freebsd_committer freebsd_triage 2022-02-28 10:23:48 UTC
(In reply to Bjoern A. Zeeb from comment #7)
> That all said, bridges used to be special when it came to IPv6;  I don't know if this was changed in more recent times.  kp@ might know.

They're not really, but IPv6 depends heavily on multicast working, and that sometimes catches people out with if_bridge. If addresses are assigned to bridge members, multicast will not work correctly for those addresses. IP(v6) addresses must always be assigned to the bridge itself, never to member interfaces.
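As a sketch of the rule described above (all interface names and addresses here are illustrative, not taken from this setup):

```shell
# Wrong: assigning the address to a bridge member breaks multicast,
# and therefore IPv6 neighbor discovery, for that address:
#   ifconfig tap0 inet6 2001:db8::1/64

# Right: members stay address-less; the bridge itself carries the address.
ifconfig bridge0 create
ifconfig bridge0 addm tap0 up
ifconfig bridge0 inet6 2001:db8::1/64
```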
Comment 11 Paul Webster 2022-03-01 18:05:44 UTC
(In reply to Kristof Provost from comment #10)
Hold on a moment though: I do have actual inet6 addresses on my bridges and am still at a complete standstill with this particular issue.

Do you believe ng_bridge would help here or could this be a if_tap issue?
Comment 12 Kristof Provost freebsd_committer freebsd_triage 2023-07-19 11:53:17 UTC
(In reply to Paul Webster from comment #11)
This problem is not related to the bridge itself, and switching to ng_bridge will not affect it. Nor is it related to if_tap.
Comment 13 Kristof Provost freebsd_committer freebsd_triage 2023-07-19 11:54:23 UTC
Re-assigning an IPv6 address that’s already there seems to trigger this.

I’m still investigating, but I believe the problem is that ifconfig deletes the IP address before setting it again. The delete ends up calling in6_purgeaddr(), which marks the relevant multicast groups as no longer needed, but leaves their actual removal to mld_fasttimo() (i.e. asynchronously).
The IP address gets added again, and either the groups don’t get re-added or they don’t get marked as needed. This bit I’m still unclear on, but as I said: I’m still digging.
Comment 14 Franco Fichtner 2023-07-19 12:33:32 UTC
The behaviour of ifconfig was discussed here and deemed a "necessary evil", while clearly being a bug, as it also strips routes when removing and re-adding the same address. https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=128954#c5

The ironic part is that you may have stumbled on the actual bug that made the radvd port stop working after an indefinite amount of time for some people, but which was never reliably reproducible, leading to multiple efforts by yourself and by us.


Cheers,
Franco
Comment 15 Andrey V. Elsukov freebsd_committer freebsd_triage 2023-07-19 14:00:08 UTC
I suspect the problem could be related to a multicast filtering bug in the NIC driver. E.g., there was recently this patch: https://reviews.freebsd.org/D40860

It looks like most Intel NICs have this limitation, and when you have many IPv6 addresses on the host it is pretty easy to overflow the 128 entries. Usually when such a problem happens, enabling PPROMISC mode should help, since the kernel will then receive all messages and filter them itself.

Also, some drivers can enable ALLMULTI mode, where the NIC accepts all multicast packets. Using this patch https://people.freebsd.org/~ae/allmulti.diff you can enable that mode as a workaround.
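For anyone wanting to try the promiscuous-mode suggestion above before patching, a minimal sketch (interface name illustrative):

```shell
# Bypass the hardware multicast filter: in promiscuous mode the NIC
# delivers all frames and the kernel does the filtering itself.
ifconfig em0 promisc

# ...retest IPv6 ND, then restore normal filtering:
ifconfig em0 -promisc
```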
Comment 16 Kristof Provost freebsd_committer freebsd_triage 2023-07-19 14:07:16 UTC
(In reply to Andrey V. Elsukov from comment #15)
It's not hardware related; I can trivially reproduce the issue on if_epair.
There's a long debugging saga in https://redmine.pfsense.org/issues/13423, but here's the short version:

> [kp@nut ~]$ sudo ifconfig epair create epair0a
> [kp@nut ~]$ sudo ifconfig epair0a inet6 2001:db8::1/64 up
> [kp@nut ~]$ ifmcstat -i epair0a
> epair0a:
> 	inet6 fe80::9e:e8ff:fe02:780a%epair0a scopeid 0xc
> 	mldv2 flags=2<USEALLOW> rv 2 qi 125 qri 10 uri 3
> 		group ff02::1:ff00:1%epair0a scopeid 0xc mode exclude
> 			mcast-macaddr 33:33:ff:00:00:01
> 		group ff01::1%epair0a scopeid 0xc mode exclude
> 			mcast-macaddr 33:33:00:00:00:01
> 		group ff02::2:6c69:386f%epair0a scopeid 0xc mode exclude
> 			mcast-macaddr 33:33:6c:69:38:6f
> 		group ff02::2:ff6c:6938%epair0a scopeid 0xc mode exclude
> 			mcast-macaddr 33:33:ff:6c:69:38
> 		group ff02::1%epair0a scopeid 0xc mode exclude
> 			mcast-macaddr 33:33:00:00:00:01
> 		group ff02::1:ff02:780a%epair0a scopeid 0xc mode exclude
> 			mcast-macaddr 33:33:ff:02:78:0a
> [kp@nut ~]$ sudo ifconfig epair0a inet6 2001:db8::1/64
> [kp@nut ~]$ ifmcstat -i epair0a
> epair0a:
> 	inet6 fe80::9e:e8ff:fe02:780a%epair0a scopeid 0xc
> 	mldv2 flags=2<USEALLOW> rv 2 qi 125 qri 10 uri 3
> 		group ff01::1%epair0a scopeid 0xc mode exclude
> 			mcast-macaddr 33:33:00:00:00:01
> 		group ff02::2:6c69:386f%epair0a scopeid 0xc mode exclude
> 			mcast-macaddr 33:33:6c:69:38:6f
> 		group ff02::2:ff6c:6938%epair0a scopeid 0xc mode exclude
> 			mcast-macaddr 33:33:ff:6c:69:38
> 		group ff02::1%epair0a scopeid 0xc mode exclude
> 			mcast-macaddr 33:33:00:00:00:01
> 		group ff02::1:ff02:780a%epair0a scopeid 0xc mode exclude
> 			mcast-macaddr 33:33:ff:02:78:0a

Note the lost subscription to the ff02::1:ff00:1%epair0a group.
Comment 17 Kristof Provost freebsd_committer freebsd_triage 2023-07-19 14:07:59 UTC
Created attachment 243486 [details]
Possible fix

The above appears to resolve it, but I'm not yet ready to say this is actually a fully correct fix.
Comment 18 Kristof Provost freebsd_committer freebsd_triage 2023-07-19 15:10:20 UTC
Created attachment 243488 [details]
Test case

Quick test case.
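The attachment is the authoritative test; as a rough sketch, the core check amounts to something like the following (address illustrative; must run as root on FreeBSD):

```shell
# Reproduce the trigger: assign the same IPv6 address twice, then check
# whether the solicited-node multicast group survived the re-add.
epair=$(ifconfig epair create)              # e.g. prints "epair0a"
ifconfig "${epair}" inet6 2001:db8::1/64 up
ifconfig "${epair}" inet6 2001:db8::1/64    # re-add the same address

# The solicited-node group for 2001:db8::1 is ff02::1:ff00:1.
if ifmcstat -i "${epair}" | grep -q 'ff02::1:ff00:1'; then
    echo "PASS: solicited-node group still present"
else
    echo "FAIL: solicited-node group lost"
fi

ifconfig "${epair}" destroy                 # destroys both epair halves
```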
Comment 19 commit-hook freebsd_committer freebsd_triage 2023-07-24 15:46:11 UTC
A commit in branch main references this bug:

URL: https://cgit.FreeBSD.org/src/commit/?id=9c9a76dc6873427b14f6c84397dd60ea8e529d8d

commit 9c9a76dc6873427b14f6c84397dd60ea8e529d8d
Author:     Kristof Provost <kp@FreeBSD.org>
AuthorDate: 2023-07-20 07:41:45 +0000
Commit:     Kristof Provost <kp@FreeBSD.org>
CommitDate: 2023-07-24 14:47:34 +0000

    mld: always commit state changes on leaving

    Resolve a race condition where we'd lose the Solicited-node multicast
    group subscription if we assigned the same IPv6 address twice.

    PR:             233683
    Reviewed by:    ae
    MFC after:      1 week
    Sponsored by:   Rubicon Communications, LLC ("Netgate")
    Differential Revision: https://reviews.freebsd.org/D41124

 sys/netinet6/mld6.c | 20 +++++++-------------
 1 file changed, 7 insertions(+), 13 deletions(-)
Comment 20 commit-hook freebsd_committer freebsd_triage 2023-07-24 15:46:14 UTC
A commit in branch main references this bug:

URL: https://cgit.FreeBSD.org/src/commit/?id=b03012d0b600793d7501b4cc56757ec6150ec87f

commit b03012d0b600793d7501b4cc56757ec6150ec87f
Author:     Kristof Provost <kp@FreeBSD.org>
AuthorDate: 2023-07-19 14:37:28 +0000
Commit:     Kristof Provost <kp@FreeBSD.org>
CommitDate: 2023-07-24 14:47:50 +0000

    netinet6 tests: test for loss of Solicited-node multicast groups

    The multicast code has an issue where it can lose the Solicited-node
    multicast group subscription if the same address is added twice.

    Test for this.

    PR:             233683
    MFC after:      1 week
    Sponsored by:   Rubicon Communications, LLC ("Netgate")
    Differential Revision:  https://reviews.freebsd.org/D41123

 tests/sys/netinet6/mld.sh | 40 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 40 insertions(+)
Comment 21 commit-hook freebsd_committer freebsd_triage 2023-07-31 12:46:26 UTC
A commit in branch stable/13 references this bug:

URL: https://cgit.FreeBSD.org/src/commit/?id=8763579a44e31713a6d1b0d9618eeab3eac9d868

commit 8763579a44e31713a6d1b0d9618eeab3eac9d868
Author:     Kristof Provost <kp@FreeBSD.org>
AuthorDate: 2023-07-19 14:37:28 +0000
Commit:     Kristof Provost <kp@FreeBSD.org>
CommitDate: 2023-07-31 12:43:14 +0000

    netinet6 tests: test for loss of Solicited-node multicast groups

    The multicast code has an issue where it can lose the Solicited-node
    multicast group subscription if the same address is added twice.

    Test for this.

    PR:             233683
    MFC after:      1 week
    Sponsored by:   Rubicon Communications, LLC ("Netgate")
    Differential Revision:  https://reviews.freebsd.org/D41123

    (cherry picked from commit b03012d0b600793d7501b4cc56757ec6150ec87f)

 tests/sys/netinet6/mld.sh | 40 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 40 insertions(+)
Comment 22 commit-hook freebsd_committer freebsd_triage 2023-07-31 12:46:28 UTC
A commit in branch stable/13 references this bug:

URL: https://cgit.FreeBSD.org/src/commit/?id=abab84cfac60e7e5a78d43cc4b0cfdd49f361989

commit abab84cfac60e7e5a78d43cc4b0cfdd49f361989
Author:     Kristof Provost <kp@FreeBSD.org>
AuthorDate: 2023-07-20 07:41:45 +0000
Commit:     Kristof Provost <kp@FreeBSD.org>
CommitDate: 2023-07-31 12:43:14 +0000

    mld: always commit state changes on leaving

    Resolve a race condition where we'd lose the Solicited-node multicast
    group subscription if we assigned the same IPv6 address twice.

    PR:             233683
    Reviewed by:    ae
    MFC after:      1 week
    Sponsored by:   Rubicon Communications, LLC ("Netgate")
    Differential Revision: https://reviews.freebsd.org/D41124

    (cherry picked from commit 9c9a76dc6873427b14f6c84397dd60ea8e529d8d)

 sys/netinet6/mld6.c | 20 +++++++-------------
 1 file changed, 7 insertions(+), 13 deletions(-)
Comment 23 Mark Johnston freebsd_committer freebsd_triage 2024-08-30 14:37:58 UTC
Is there anything left to do for this bug?
Comment 24 Kristof Provost freebsd_committer freebsd_triage 2024-08-30 14:46:16 UTC
(In reply to Mark Johnston from comment #23)
Not as far as I know.