Since upgrading to 10.2-RELEASE I am experiencing regular (every few hours) kernel panics. From reading bug reports 128246 and 131038 I think this is not the same bug, especially since those two should already have been patched. Please let me know what information, if any, is needed, e.g. from /var/crash/core.txt.0. This is not a mission-critical system, so I am happy to try out patches or perform other tests.

Some notes:
- System is a Xeon E3-1220v3, Supermicro X10-something motherboard, 16GB ECC RAM.
- I use a custom kernel whose only difference is ROUTETABLES=6.
- The main network interface is igb0.
- bridge0 exists for tap0+igb0 for bhyve (set up using iohyve) with a FreeBSD guest (which otherwise works fine).
- I don't actually (explicitly) use IPv6 for anything, nor do I have any IPv6-specific rules in pf.conf.

Highlights:

Fatal trap 12: page fault while in kernel mode
cpuid = 3; apic id = 06
fault virtual address  = 0x28
fault code             = supervisor read data, page not present
instruction pointer    = 0x20:0xffffffff809ff8db
stack pointer          = 0x28:0xfffffe0000270f90
frame pointer          = 0x28:0xfffffe0000270fa0
code segment           = base rx0, limit 0xfffff, type 0x1b
                       = DPL 0, pres 1, long 1, def32 0, gran 1
processor eflags       = interrupt enabled, resume, IOPL = 0
current process        = 12 (irq269: igb0:que 3)
trap number            = 12
panic: page fault
cpuid = 3
KDB: stack backtrace:
#0 0xffffffff80984e30 at kdb_backtrace+0x60
#1 0xffffffff809489e6 at vpanic+0x126
#2 0xffffffff809488b3 at panic+0x43
#3 0xffffffff80d4aadb at trap_fatal+0x36b
#4 0xffffffff80d4addd at trap_pfault+0x2ed
#5 0xffffffff80d4a47a at trap+0x47a
#6 0xffffffff80d307f2 at calltrap+0x8
#7 0xffffffff80989cbc at kvprintf+0xf9c
#8 0xffffffff8098a71d at _vprintf+0x8d
#9 0xffffffff80988a1c at log+0x5c
#10 0xffffffff80b15f97 at ip6_forward+0x107
#11 0xffffffff8203460e at pf_refragment6+0x16e
#12 0xffffffff820263b4 at pf_test6+0x1044
#13 0xffffffff8202e2cd at pf_check6_out+0x4d
#14 0xffffffff80a18634 at pfil_run_hooks+0x84
#15 0xffffffff81d2e798 at bridge_pfil+0x218
#16 0xffffffff81d2f5be at bridge_broadcast+0xde
#17 0xffffffff81d2f3ef at bridge_forward+0x20f

With line numbers:

#12 0xffffffff80b15f97 in ip6_forward (m=0xfffff8000bff8c00,
    srcrt=<value optimized out>) at /usr/src/sys/netinet6/ip6_forward.c:142
#13 0xffffffff8203460e in pf_refragment6 (ifp=<value optimized out>,
    m0=<value optimized out>, mtag=<value optimized out>)
    at /usr/src/sys/modules/pf/../../netpfil/pf/pf_norm.c:1158
#14 0xffffffff820263b4 in pf_test6 (dir=<value optimized out>,
    ifp=0xfffff8000f299000, m0=0xfffffe0000271608, inp=<value optimized out>)
    at /usr/src/sys/modules/pf/../../netpfil/pf/pf.c:6453
#15 0xffffffff8202e2cd in pf_check6_out (arg=<value optimized out>,
    m=0xfffffe0000271608, ifp=0xfffff8000f299000, dir=<value optimized out>,
    inp=0x0) at /usr/src/sys/modules/pf/../../netpfil/pf/pf_ioctl.c:3616
#16 0xffffffff80a18634 in pfil_run_hooks (ph=0xffffffff8168e6d0,
    mp=0xfffffe0000271720, ifp=0xfffff8000f299000, dir=2, inp=0x0)
    at /usr/src/sys/net/pfil.c:82
#17 0xffffffff81d2e798 in bridge_pfil (mp=0xfffffe0000271720,
    bifp=0xfffff8000f299000, ifp=0x0, dir=2)
    at /usr/src/sys/modules/if_bridge/../../net/if_bridge.c:3210
#18 0xffffffff81d2f5be in bridge_broadcast (sc=0xfffff80011b58800,
    src_if=0xfffff80007656000, m=0xfffff80020c52300, runfilt=1)
    at /usr/src/sys/modules/if_bridge/../../net/if_bridge.c:2456
#19 0xffffffff81d2f3ef in bridge_forward (sc=0xfffff80011b58800,
    sbif=<value optimized out>, m=0xfffff80020c52300)
    at /usr/src/sys/modules/if_bridge/../../net/if_bridge.c:2178
#20 0xffffffff81d2d93c in bridge_input (ifp=<value optimized out>,
    m=0xfffff8004a00d100)
    at /usr/src/sys/modules/if_bridge/../../net/if_bridge.c:2298
#21 0xffffffff80a0f77a in ether_nh_input (m=<value optimized out>)
    at /usr/src/sys/net/if_ethersubr.c:607
#22 0xffffffff80a177d2 in netisr_dispatch_src (proto=<value optimized out>,
    source=<value optimized out>, m=0x28) at /usr/src/sys/net/netisr.c:976
#23 0xffffffff804f715c in igb_rxeof (count=98)
    at /usr/src/sys/dev/e1000/if_igb.c:4808
#24 0xffffffff804f7801 in igb_msix_que (arg=0xfffff80007645b38)
    at /usr/src/sys/dev/e1000/if_igb.c:1621
#25 0xffffffff8091482b in intr_event_execute_handlers (
    p=<value optimized out>, ie=0xfffff80007672400)
    at /usr/src/sys/kern/kern_intr.c:1264
#26 0xffffffff80914c76 in ithread_loop (arg=0xfffff8000767cac0)
    at /usr/src/sys/kern/kern_intr.c:1277
#27 0xffffffff8091244a in fork_exit (
    callout=0xffffffff80914be0 <ithread_loop>, arg=0xfffff8000767cac0,
    frame=0xfffffe0000271ac0) at /usr/src/sys/kern/kern_fork.c:1018
#28 0xffffffff80d30d2e in fork_trampoline ()
    at /usr/src/sys/amd64/amd64/exception.S:611
#29 0x0000000000000000 in ?? ()
It appears that m->m_pkthdr.rcvif is NULL, and the kernel panics in the if_name(m->m_pkthdr.rcvif) part of:

log(LOG_DEBUG, "cannot forward "
    "from %s to %s nxt %d received on %s\n",
    ip6_sprintf(ip6bufs, &ip6->ip6_src),
    ip6_sprintf(ip6bufd, &ip6->ip6_dst),
    ip6->ip6_nxt,
    if_name(m->m_pkthdr.rcvif));

(kgdb) print m
$2 = (struct mbuf *) 0xfffff80293e80900
(kgdb) print *m
$3 = {m_hdr = {mh_next = 0xfffff80293e7d700, mh_nextpkt = 0x0,
    mh_data = 0xfffff80293e80968 "`", mh_len = 48, mh_type = 1,
    mh_flags = 16674}, M_dat = {MH = {MH_pkthdr = {rcvif = 0x0,
      tags = {slh_first = 0x0}, len = 1280, flowid = 0, csum_flags = 0,
      fibnum = 0, cosqos = 0 '\0', rsstype = 0 '\0', l2hlen = 0 '\0',
      l3hlen = 0 '\0', l4hlen = 0 '\0', l5hlen = 0 '\0',
      PH_per = {eigth = "\000\000\000\000\000\000\000", sixteen = {0, 0, 0, 0},
        thirtytwo = {0, 0}, sixtyfour = {0}, unintptr = {0}, ptr = 0x0},
      PH_loc = {eigth = "\000\000\000\000\000\000\000", sixteen = {0, 0, 0, 0},
        thirtytwo = {0, 0}, sixtyfour = {0}, unintptr = {0}, ptr = 0x0}},
    MH_dat = {MH_ext = {ref_cnt = 0x73cfb0000003333,
        ext_buf = 0x60dd861f776354 <Address 0x60dd861f776354 out of bounds>,
        ext_size = 96, ext_type = 4, ext_flags = 16723160, ext_free = 0x80fe,
        ext_arg1 = 0x1f7763feff54073e, ext_arg2 = 0x2ff},

Replacing the above log message with something safe shows that it does get called at roughly the same frequency as the previous panics, but without panicking. Of course I don't know whether it is valid for rcvif to be NULL, or whether some corruption occurs elsewhere. If anyone wants to debug this further I am happy to assist.
I've been seeing the same panic since upgrading from 10.1 to 10.2. The problem seems to be that the pf IPv6 reassembly/refragmentation code cannot properly cope with multicast packets, while if_bridge needs to broadcast that sort of traffic while still passing it through the firewall.

Here's what I think happens in detail:
1. One of the bridge members gets an IPv6 multicast packet which was reassembled by pf.
2. if_bridge broadcasts it to the other member(s).
3. if_bridge applies outbound filtering, which refragments the packet (forwarding the reassembled packet could cause MTU issues on IPv6, so refragmentation is required).
4. Because multiple packets may result from this, pf injects all of them into the IPv6 stack using ip6_forward() instead of passing a single packet back to if_bridge for further processing.
5. ip6_forward() refuses to handle multicast packets, because it was written for unicast traffic.
6. Because somewhere along the road (my guess: in the pf reassembly/refragment code) the mbuf's m_pkthdr.rcvif was lost, we see this panic when ip6_forward() tries to log that it will not forward this multicast packet.

Even if we fix the call to log() and/or make sure that the rcvif is always set, which will make the panic go away, IPv6 multicast will still not work together with pf scrubbing and if_bridge. My current workaround is to disable scrubbing on bridge members, because I don't really need it there. Another approach might be to disable it (or at least reassembly) just for IPv6 multicast traffic.

Short-term solution: I think pf should be fixed to not reassemble IPv6 multicast traffic, as long as it's unable to reinject that kind of traffic properly after refragmentation. While calling ip6_forward() in the pf refragmentation code seems OK for normal (forwarding) traffic, in the bridge case it does not make sense to me even for unicast traffic.
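For reference, the workaround mentioned above can be expressed in pf.conf by limiting scrub to non-bridge interfaces. The interface names here (em0 as the outside interface, em1 and em2 as bridge members) are made-up examples; adjust to your own setup:

```
# Hypothetical pf.conf fragment: scrub (and thus reassemble) only on the
# outside interface, leaving bridge members em1/em2 and bridge0 alone.
scrub in on em0 all fragment reassemble

# A stronger alternative, if no filtering at all is needed on the
# members (note: this disables pf entirely on those interfaces):
# set skip on { em1 em2 }
```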
IMO bridged traffic should never be passed into the IPv6 stack, where it could be routed and even end up on interfaces which are not part of the bridge. Not knowing the code very well, I'm not sure whether this scenario is really possible, but I think that using ip6_forward() for bridged traffic is asking for trouble. So in my opinion, as long as pf refragmentation for IPv6 works the way it does right now, pf should not reassemble packets that might somehow end up on a bridged interface. But since reassembly happens on the inbound interface, it seems hard to know whether a packet will ever end up on a bridged interface, except of course in the simplest case, where the inbound interface itself is part of a bridge. Ultimately it might be easier to just inform the user that in the case of IPv6, pf scrubbing is OK for forwarding traffic but might cause trouble (and lead to unintended routing?) on bridges.

Of course the best solution seems to be for the firewall code to be able to pass a list of packets back to the caller and let it decide whether to call into the IP stack (forwarding case) or into the interface transmit code (bridging case), thereby eliminating the need to directly use ip6_forward() in the first place. But the usage of ip6_forward() suggests that this is currently not possible and probably not easily changed. Again, I spent only a limited amount of time finding out how all these parts interact, but I also suspect that calling ip6_forward() could prevent a subsequent firewall from denying these packets when multiple firewalls are in use.
Can you test https://reviews.freebsd.org/D3534 ? I expect that things still won't be entirely perfect, but it should at least fix the panic. The issue is that when we use pf to filter on a bridge (i.e. net.link.bridge.pfil_bridge is set) we mistakenly think that we're routing the packet, because the rcvif and the output interface (ifp) are different.
A commit references this bug:

Author: kp
Date: Tue Sep 1 19:04:05 UTC 2015
New revision: 287376
URL: https://svnweb.freebsd.org/changeset/base/287376

Log:
  pf: Fix misdetection of forwarding when net.link.bridge.pfil_bridge is set

  If net.link.bridge.pfil_bridge is set we can end up thinking we're
  forwarding in pf_test6() because the rcvif and the ifp (output interface)
  are different. In that case we're bridging though, and the rcvif is the
  bridge member on which the packet was received and ifp is the bridge
  itself.

  If we'd set dir to PF_FWD we'd end up calling ip6_forward(), which is
  incorrect. Instead check if the rcvif is a member of the ifp bridge. (In
  other words, the if_bridge is the ifp's softc.) If that's the case we're
  not forwarding but bridging.

  PR:			202351
  Reviewed by:		eri
  Differential Revision:	https://reviews.freebsd.org/D3534

Changes:
  head/sys/netpfil/pf/pf.c
The panic should be fixed as of r287376.
*** Bug 202960 has been marked as a duplicate of this bug. ***
Is this a candidate for MFC?
(In reply to Brad Davis from comment #7) It might be, but I've had a report that the patch doesn't (fully?) fix the problem. I'd like to investigate that first.
Just as an FYI, I have been running with a manually ported patch from comment #4 for about a week now, and the log message no longer shows up in debug.log, i.e. the code path now seems to be avoided. If I can test anything else (re comment #8), just let me know.
(In reply to Kristof Provost from comment #8) Thanks Kristof
A commit references this bug:

Author: kp
Date: Fri Sep 11 17:19:25 UTC 2015
New revision: 287680
URL: https://svnweb.freebsd.org/changeset/base/287680

Log:
  MFC r287376

  pf: Fix misdetection of forwarding when net.link.bridge.pfil_bridge is set

  If net.link.bridge.pfil_bridge is set we can end up thinking we're
  forwarding in pf_test6() because the rcvif and the ifp (output interface)
  are different. In that case we're bridging though, and the rcvif is the
  bridge member on which the packet was received and ifp is the bridge
  itself.

  If we'd set dir to PF_FWD we'd end up calling ip6_forward(), which is
  incorrect. Instead check if the rcvif is a member of the ifp bridge. (In
  other words, the if_bridge is the ifp's softc.) If that's the case we're
  not forwarding but bridging.

  PR:		202351

Changes:
  _U  stable/10/
  stable/10/sys/netpfil/pf/pf.c
I've MFCed the fix (in r287680) on the basis that it fixed things in my test setup, and also fixed things for Dennis (as per comment #9). I'll keep in touch with Niels (who seems to still have problems) to figure out that problem.
I ran into this when upgrading to 10.2: I got a panic every few minutes, whereas my system had been running without problems for months prior to the upgrade. I tried disabling net.link.bridge.pfil_bridge (which seems to be the situation the patch addresses) but it did not help. Unfortunately I don't have time to test this further, as I need a working system and have therefore reverted to 10.1, but I can provide some cores and configuration info if that's helpful for debugging. (It's a pretty simple router configuration, bridging Ethernet and Wi-Fi interfaces.) Thanks.
(In reply to Nicholas Riley from comment #13) It would be useful to have more details on your configuration, yes. I know what the problem is here, but the full fix is non-trivial and will take a while to put in place (as in, it likely won't be there for 11.0). Perhaps I can offer a workaround, or improve the situation for your case though.
Thanks, it would be nice to be able to upgrade before 10.1 goes out of support :-) Here you go: http://sabi.net/temp/202351/ That's a couple of core.txts and my pf.conf/rc.conf.
Created attachment 165038: bridge forwarding detection fix

Can you give this patch a try?
Thanks for the patch! I'll see if I can clone my setup into a VM, but if I can't reproduce it there, it won't be until March that I am able to physically access the system again.
(In reply to Kristof Provost from comment #16) I had recurring (between every two minutes and every two hours) kernel panics after upgrading to FreeBSD 10.2-RELEASE-p9 whenever I plugged my Apple AirPort base stations into one of the bridged interfaces on my bridge/firewall. I applied this patch and have now been running for over 30 hours without any problems. I even got myself an IPv6 tunnel, and the machine has been tunneling traffic between the local network and the IPv6 tunnel without any problems. So the patch works very well, at least for my setup.
Awesome, thank you for sharing this success! I will finally be physically back at this system and will retry an upgrade to 10.2 with this patch around 25-26 March. I also have AirPort base stations on this network and wonder if that may be related.
A commit references this bug:

Author: kp
Date: Wed Mar 16 06:42:15 UTC 2016
New revision: 296932
URL: https://svnweb.freebsd.org/changeset/base/296932

Log:
  pf: Improve forwarding detection

  When we guess the nature of the outbound packet (output vs. forwarding)
  we need to take bridges into account. When bridging the input interface
  does not match the output interface, but we're not forwarding. Similarly,
  it's possible for the interface to actually be the bridge interface
  itself (and not a member interface).

  PR:		202351
  MFC after:	2 weeks

Changes:
  head/sys/netpfil/pf/pf.c