Bug 156667 - [em] em0 fails to init on CURRENT after March 17
Summary: [em] em0 fails to init on CURRENT after March 17
Status: Closed FIXED
Alias: None
Product: Base System
Classification: Unclassified
Component: kern
Version: 9.0-CURRENT
Hardware: Any Any
Importance: Normal Affects Only Me
Assignee: freebsd-net (Nobody)
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2011-04-27 05:00 UTC by Marcus Reid
Modified: 2015-05-04 03:38 UTC

See Also:


Description Marcus Reid 2011-04-27 05:00:18 UTC
em0 fails to initialize on a kernel csupped and built on Apr 26, 2011, but works on a kernel built on Mar 17, 2011.

Messages before:

Mar 17 23:08:09 austin kernel: em0: <Intel(R) PRO/1000 Network Connection 7.1.9> port 0xecc0-0xecdf mem 0xfebe0000-0xfebfffff,0xfebdb000-0xfebdbfff irq 21 at device 25.0 on pci0
Mar 17 23:08:09 austin kernel: em0: Using an MSI interrupt
Mar 17 23:08:09 austin kernel: em0: Ethernet address: 00:1e:4f:e7:f6:51

Messages after:

Apr 26 19:35:59 austin kernel: em0: <Intel(R) PRO/1000 Network Connection 7.2.3> port 0xecc0-0xecdf mem 0xfebe0000-0xfebfffff,0xfebdb000-0xfebdbfff irq 21 at device 25.0 on pci0
Apr 26 19:35:59 austin kernel: em0: Using an MSI interrupt
Apr 26 19:35:59 austin kernel: em0: Ethernet address: 00:1e:4f:e7:f6:51
Apr 26 19:35:59 austin kernel: em0: Could not setup receive structures
Apr 26 19:35:59 austin kernel: em0: Could not setup receive structures
Apr 26 19:36:04 austin kernel: em0: link state changed to UP
Apr 26 19:36:04 austin kernel: em0: Could not setup receive structures

pciconf -vl:

em0@pci0:0:25:0:        class=0x020000 card=0x02111028 chip=0x10bd8086 rev=0x02 hdr=0x00
    vendor     = 'Intel Corporation'
    device     = 'Intel 82566DM Gigabit Ethernet Adapter (82566DM)'
    class      = network
    subclass   = ethernet

System is a Dell Optiplex 755 using onboard ethernet.

There were a number of changes to the tree, some major, between Mar 17 and Apr 26, including an em(4) driver update (7.1.9 -> 7.2.3 in the probe messages above).

How-To-Repeat: Boot; the network interface does not appear to work. No ARP replies from the default gateway, etc.
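
A quick way to confirm the symptom from the console, where em0 is this machine's interface and 192.168.1.1 stands in for a hypothetical default gateway:

  # ifconfig em0            # link may report active, but no traffic passes
  # arp -an                 # the gateway's ARP entry never resolves
  # ping -c 3 192.168.1.1   # no replies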
Comment 1 Mark Linimon freebsd_committer freebsd_triage 2011-05-03 07:12:14 UTC
Responsible Changed
From-To: freebsd-bugs->freebsd-net

Over to maintainer(s).
Comment 2 Arnaud 2011-05-03 17:22:47 UTC
Could you include the output of `netstat -m'? This is most likely
yet another "`nmbclusters' too low" issue, in which case bumping
`kern.ipc.nmbclusters' will do the job.
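
For reference, a typical way to inspect and raise that limit; the value 100000 here is only an example, and on kernels where nmbclusters is a boot-time tunable it must instead be set in /boot/loader.conf and takes effect after a reboot:

  # sysctl kern.ipc.nmbclusters             # show the current limit
  kern.ipc.nmbclusters: 25600
  # sysctl kern.ipc.nmbclusters=100000      # raise it at runtime
  kern.ipc.nmbclusters: 25600 -> 100000
  # echo 'kern.ipc.nmbclusters=100000' >> /boot/loader.conf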
Comment 3 Marcus Reid 2011-05-06 03:53:27 UTC
On Tue, May 03, 2011 at 12:22:47PM -0400, Arnaud Lacombe wrote:
> Could you include the output of `netstat -m'? This is most likely
> yet another "`nmbclusters' too low" issue, in which case bumping
> `kern.ipc.nmbclusters' will do the job.

Here's the default, with the problem:

  1025/640/1665 mbufs in use (current/cache/total)
  1023/297/1320/25600 mbuf clusters in use (current/cache/total/max)
  1023/257 mbuf+clusters out of packet secondary zone in use (current/cache)
  0/35/35/12800 4k (page size) jumbo clusters in use (current/cache/total/max)
  0/0/0/6400 9k jumbo clusters in use (current/cache/total/max)
  0/0/0/3200 16k jumbo clusters in use (current/cache/total/max)
  2302K/894K/3196K bytes allocated to network (current/cache/total)
  0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
  0/0/0 requests for jumbo clusters denied (4k/9k/16k)
  0/0/0 sfbufs in use (current/peak/max)
  0 requests for sfbufs denied
  0 requests for sfbufs delayed
  0 requests for I/O initiated by sendfile
  0 calls to protocol drain routines

Here's with kern.ipc.nmbclusters bumped to 100000, same problem (note that no mbuf or cluster requests were denied in either case, which argues against an mbuf exhaustion issue):

  1025/640/1665 mbufs in use (current/cache/total)
  1023/305/1328/100000 mbuf clusters in use (current/cache/total/max)
  1023/257 mbuf+clusters out of packet secondary zone in use (current/cache)
  0/35/35/50000 4k (page size) jumbo clusters in use (current/cache/total/max)
  0/0/0/25000 9k jumbo clusters in use (current/cache/total/max)
  0/0/0/12500 16k jumbo clusters in use (current/cache/total/max)
  2302K/910K/3212K bytes allocated to network (current/cache/total)
  0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
  0/0/0 requests for jumbo clusters denied (4k/9k/16k)
  0/0/0 sfbufs in use (current/peak/max)
  0 requests for sfbufs denied
  0 requests for sfbufs delayed
  0 requests for I/O initiated by sendfile
  0 calls to protocol drain routines
Comment 4 Marcus Reid 2011-05-06 04:10:24 UTC
Looks like a fix for this problem was just checked in 9 hours ago.
This ticket can be closed.
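
For anyone who hit this, a minimal resync-and-rebuild to pick up the fix might look like the following; it assumes the stock standard-supfile and the GENERIC kernel configuration, so adjust for your setup:

  # csup -L 2 /usr/share/examples/cvsup/standard-supfile
  # cd /usr/src
  # make buildkernel KERNCONF=GENERIC
  # make installkernel KERNCONF=GENERIC
  # shutdown -r now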