We've got a strange problem with lagg(4) interfaces built on Intel's
82580 chipset igb ports (dual-port 1Gb card).
We cannot send more than ~1Gb/s over the lagg interface, yet we
receive ~2Gb/s over it.
Something indicating the problem can be seen in the interface counters:
anri@host:[8:13]~# ifstat -i lagg0 -i igb1 -i igb3 1
       lagg0                igb1                igb3
 KB/s in  KB/s out    KB/s in  KB/s out    KB/s in  KB/s out
 9116.50  26515.16    4147.70  28871.43    5004.86  23683.31
 8423.08  26544.62    3853.22  28980.60    4594.21  23520.97
 8796.48  26395.28    4248.46  28344.00    4567.42  23978.17
Note the (IN) traffic - everything is OK: about 50% on each igb
interface, with the correct total on lagg0.
But the (OUT) traffic looks weird - there is more traffic on the
single igb1 than on the whole lagg0!
Tried with the default driver that came with the system, and also with
the new igb-2.3.10 from Intel's site - no luck.
Setting net.link.lagg.0.use_flowid=0 did not solve the problem.
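For reference, this is roughly how the knob above is checked and toggled (a sketch; the per-interface sysctl name assumes lagg0 is the first lagg instance on this system):

```shell
# Inspect the current transmit-hash behavior for lagg0
sysctl net.link.lagg.0.use_flowid

# Disable use of the mbuf flowid for outbound port selection,
# forcing lagg(4) to hash the packet headers itself
sysctl net.link.lagg.0.use_flowid=0

# To make it persistent across reboots, add to /etc/sysctl.conf:
# net.link.lagg.0.use_flowid=0
```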
Testing a lagg interface built on other cards (em, bce) in the same
machine shows the expected behavior in both directions.
My system is FreeBSD 8.4; with the e1000 driver from FreeBSD 8.3, the result is OK.
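For comparison, a minimal LACP lagg setup of the kind being tested might look like this in /etc/rc.conf (the member interface names match the report, but the protocol and address are assumptions, not taken from it):

```shell
ifconfig_igb1="up"
ifconfig_igb3="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport igb1 laggport igb3 192.0.2.10/24"
```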
With outbound bytes double the expected value on an igb interface,
you're probably running into a known driver issue.
Over to the maintainer(s).
This seems related to the other issues where folks disable LRO. Does disabling LRO help in your test?
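If it helps, LRO (and, while testing, TSO as well) can be turned off per port with ifconfig; a sketch, assuming the same igb member ports as above:

```shell
# Disable large receive offload and TCP segmentation offload
# on both lagg member ports for the duration of the test
ifconfig igb1 -lro -tso
ifconfig igb3 -lro -tso

# Verify: the options line should no longer list LRO or TSO4
ifconfig igb1 | grep options
```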
For bugs that match the following criteria:
- Status is In Progress
- Untouched since 2018-01-01
- Affects Base System OR Documentation

Reset to Open status.
I did a quick pass, but if you are getting this email it may be worthwhile to double-check whether this bug ought to be closed.
Feedback timeout (over 2 years). Also, this is believed to work fine on all supported branches.