I have a problem with new boards from Supermicro. They have two Intel PRO NICs, which support multiqueue. I boot all FreeBSD boxes diskless from the same image, and all boxes are NATs running pf. Every box works perfectly except the two new boxes with this board, and I cannot say why.
The situation is as follows: after startup everything is OK, but after a few hours, larger streams (meaning streams of a few megabytes) start disconnecting. Web pages mostly still work.
Connections from either side work on their own; the problem affects only traffic passing through pf, or possibly the whole FreeBSD box.
I tried to capture the traffic. I don't know if this is THE problem, but it looks like communication ends with this:
743 3.301279 184.108.40.206 10.3.59.222 TCP 54 [TCP ACKed unseen segment] 80→55859 [RST, ACK] Seq=699393 Ack=780 Win=17520 Len=0
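For the record, resets like the one above can be watched for on the FreeBSD box itself with a filter along these lines (the interface name and port 80 are assumptions based on the capture above):

```shell
# Print TCP packets with the RST flag set, seen on igb0 (run as root);
# "port 80" matches the HTTP stream shown in the capture above.
tcpdump -ni igb0 'tcp[tcpflags] & tcp-rst != 0 and port 80'
```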
There is nothing in the logs. The system looks fine, and this is the same configuration that works fine elsewhere. I tried swapping the board for another one (same series number), and the problem is the same.
Part of dmesg:
igb0: <Intel(R) PRO/1000 Network Connection version - 2.4.0> port 0xd000-0xd01f mem 0xf7200000-0xf727ffff,0xf7280000-0xf7283fff irq 18 at device 0.0 on pci5
igb0: Using MSIX interrupts with 5 vectors
igb0: Ethernet address: 00:25:90:f4:db:38
igb0: Bound queue 0 to cpu 0
igb0: Bound queue 1 to cpu 1
igb0: Bound queue 2 to cpu 2
igb0: Bound queue 3 to cpu 3
pcib6: <ACPI PCI-PCI bridge> irq 19 at device 28.3 on pci0
pci6: <ACPI PCI bus> on pcib6
igb1: <Intel(R) PRO/1000 Network Connection version - 2.4.0> port 0xc000-0xc01f mem 0xf7100000-0xf717ffff,0xf7180000-0xf7183fff irq 19 at device 0.0 on pci6
igb1: Using MSIX interrupts with 5 vectors
igb1: Ethernet address: 00:25:90:f4:db:39
igb1: Bound queue 0 to cpu 4
igb1: Bound queue 1 to cpu 5
igb1: Bound queue 2 to cpu 6
igb1: Bound queue 3 to cpu 7
FreeBSD test.starnet.cz 10.0-RELEASE-p7 FreeBSD 10.0-RELEASE-p7 #1 r271046: Wed Sep 3 23:50:39 CEST 2014 email@example.com:/usr/obj/usr/src/sys/NAT-10.0 amd64
I don't know what I should check and test. Everything looks fine, but it doesn't work...
After disabling multiqueue, everything works fine. I used hw.igb.enable_msix=0.
hw.igb.enable_msix=0 disables MSI-X, which *does* disable multiqueue. You can, however, set hw.igb.num_queues=1 to keep MSI-X and use only one queue.
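For anyone hitting the same issue: both tunables mentioned above are boot-time loader tunables, so they would go in /boot/loader.conf (pick one, then reboot). A sketch of the two options:

```shell
# /boot/loader.conf
# Option 1: disable MSI-X entirely (this also disables multiqueue):
hw.igb.enable_msix=0

# Option 2 (alternative): keep MSI-X but limit igb to a single queue:
#hw.igb.num_queues=1
```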
Can you test this with the 10.1 release, and with the 10.2 release betas when they become available?
I'm unable to reproduce this and the submitter has timed out.