The box will run fine for about 36 hours, after which it panics when it fails to allocate kernel memory:

Panic String: kmem_malloc(4096): kmem_map too small: 335544320 total allocated

When you take a look at netstat -m, you'll see something like this:

68264/226/68490 mbufs in use (current/cache/total)
256/134/390/25600 mbuf clusters in use (current/cache/total/max)
256/128 mbuf+clusters out of packet secondary zone in use (current/cache)
0/39/39/12800 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/6400 9k jumbo clusters in use (current/cache/total/max)
0/0/0/3200 16k jumbo clusters in use (current/cache/total/max)
17578K/480K/18058K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0/35/6656 sfbufs in use (current/peak/max)
0 requests for sfbufs denied
0 requests for sfbufs delayed
143 requests for I/O initiated by sendfile
0 calls to protocol drain routines

The current/total mbuf numbers grow continually until they reach over a million. Bytes allocated to network (current/total) also grow until they reach over 400MB (I increased kmem_size to stave off the inevitable a little longer).

Code like the following:

while true; do
    OLDMBUFS="`vmstat -z | grep mbuf: | awk -F, '{print $3}'`"
    sleep 1800
    NEWMBUFS="`vmstat -z | grep mbuf: | awk -F, '{print $3}'`"
    DIFF=`expr ${NEWMBUFS} - ${OLDMBUFS}`
    date >> ${LOG}
    echo "New mbufs since last check: ${DIFF}" >> ${LOG}
done

produces:

Tue Dec 23 18:46:24 EST 2008
New mbufs since last check: 24064
Tue Dec 23 19:16:24 EST 2008
New mbufs since last check: 23775
Tue Dec 23 19:46:25 EST 2008
New mbufs since last check: 23567
Tue Dec 23 20:16:25 EST 2008
New mbufs since last check: 23322

I've tried both fxp0 and em0 with the same result, and I've swapped everything but the hard drive. This problem was occurring in both 7.0-RELEASE-p6 and 7.1-RC1-2.

Fix: No idea.
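For reference (not part of the original report): vmstat -z prints comma-separated columns for each zone (ITEM/SIZE, LIMIT, USED, FREE, REQUESTS, FAILURES), so the `awk -F, '{print $3}'` in the loop above selects the USED count for the mbuf zone. A self-contained check of that pipeline against a sample line taken from the vmstat -z output later in this PR:

```shell
# The "mbuf:" line below is copied from the vmstat -z output further
# down in this PR; the grep/awk pipeline is the same one the
# monitoring loop uses.  tr strips the column padding for display.
SAMPLE='mbuf:                     256,        0,   960618,       33,116817015,        0'
USED=$(echo "$SAMPLE" | grep 'mbuf:' | awk -F, '{print $3}' | tr -d ' ')
echo "USED=$USED"
```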
But I'd love it if there were a way to remotely dig into these mbufs and find out what traffic is triggering the problem. I have vmcore files lying around that I could use.

How-To-Repeat: There's a lot going on on this box as far as networking features go: I'm using IPv4, IPv6, if_gif, if_tap, if_tun, etc., so it's hard to narrow it down to one thing. The latest thing I had done was to add an AAAA record for our primary domain; however, the IPv6 traffic coming in is fairly light.
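Those vmcore files can in fact be inspected offline: netstat(1) and vmstat(8) both accept a core file via -M and a matching kernel image via -N. A sketch (the dump and kernel paths below are examples, not from this PR):

```shell
# Read mbuf statistics out of a saved FreeBSD crash dump instead of
# the live kernel.  Paths are examples; adjust to the actual dump.
CORE=/var/crash/vmcore.0
KERNEL=/boot/kernel/kernel

if [ -r "$CORE" ] && [ -r "$KERNEL" ]; then
    netstat -m -M "$CORE" -N "$KERNEL"             # mbuf usage at panic time
    vmstat -z -M "$CORE" -N "$KERNEL" | grep mbuf  # per-zone counters
    STATUS=read-dump
else
    echo "no crash dump at $CORE"
    STATUS=no-dump
fi
```

This only shows counters, not packet contents; walking individual mbufs would take kgdb against the same core.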
State Changed From-To: open->feedback

Jason, your network setup looks very complex. Are all these aliases on lo0 really needed? If they are required for IPSec, can you give us an idea of the IPSec SPD entries being used, without leaking sensitive information to the public? I think we also need a look at the routing tables, and the output of `sysctl net.inet kern.ipc` might be useful. Also give us the list of loaded modules (kldstat).
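For what it's worth (my own sketch, not part of the original exchange), everything requested in this feedback could be collected in one pass on the affected box; the output path and grouping here are examples:

```shell
# Gather the diagnostics requested in the feedback into one file.
# /tmp/pr-netinfo.txt is an example path.
LOG=/tmp/pr-netinfo.txt
{
    echo "== netstat -rn =="; netstat -rn
    echo "== sysctl ==";      sysctl net.inet kern.ipc
    echo "== kldstat ==";     kldstat
    echo "== netstat -m ==";  netstat -m
} > "$LOG" 2>&1
echo "wrote $LOG"
```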
Responsible Changed From-To: freebsd-bugs->freebsd-net

Over to maintainer(s).
(oops.. replying to list(s) as well this time)

Hello,

On Wed, Dec 31, 2008 at 05:53, <vwe@freebsd.org> wrote:
> Synopsis: [panic] Leaking 50k mbufs/hour
>
> State-Changed-From-To: open->feedback
> State-Changed-By: vwe
> State-Changed-When: Wed Dec 31 13:44:37 UTC 2008
> State-Changed-Why:
> Jason,
> your network setup is looking very complex.
> Are all these aliases to lo0 really needed?

Probably not all of them anymore. The 66.29.58 addresses, while still allocated to us, aren't really used anymore, so they could probably go. The rest are needed.

> If these are required for IPSec, can you give us an idea
> of the IPSec SPD entries being used without leaking sensitive
> information to the public?

You know.. IPSec is probably the only thing I'm *not* using, though you probably noticed I was at least thinking about it at one point :).. It might not be a bad idea for me to remove it from the kernel, though, and see what happens.

> I think we need to also have a look at the routing tables and
> output of `sysctl net.inet kern.ipc` might be useful.
> Also give us the list of loaded modules (kldstat).

Sure. The routing table (outside of the default route) is managed mainly by quagga's bgpd.
Routing tables

Internet:
Destination        Gateway            Flags    Refs      Use  Netif Expire
default            66.246.72.1        UGS         1  2724330    em0
10.8.0.0/25        10.8.0.2           UGS         0        0  tun10
10.8.0.2           10.8.0.1           UH          1        0  tun10
10.8.8.0/24        link#5             UC          0        0   tap0
10.8.8.6           10.8.8.5           UH          0      467   gre0
10.8.8.32          00:ff:9b:11:23:64  UHLW        3      328   tap0   1198
10.8.8.33          00:bd:7f:46:00:00  UHLW        2       13   tap0    845
10.8.8.36          00:ff:c4:72:96:17  UHLW        2       14   tap0    803
10.8.10.0/24       10.8.8.33          UG1         0        0   tap0
64.247.11.248      64.247.11.248      UH          0        0    lo0
64.247.11.249      64.247.11.249      UH          0        0    lo0
64.247.11.250      64.247.11.250      UH          0        0    lo0
64.247.11.251      64.247.11.251      UH          0        0    lo0
64.247.11.252      64.247.11.252      UH          0        0    lo0
64.247.11.253      64.247.11.253      UH          0        0    lo0
64.247.11.254      64.247.11.254      UH          0        0    lo0
64.247.11.255      64.247.11.255      UH          0        0    lo0
66.29.58.64        66.29.58.64        UH          0        0    lo0
66.29.58.65        66.29.58.65        UH          0        0    lo0
66.29.58.66        66.29.58.66        UH          0        0    lo0
66.29.58.67        66.29.58.67        UH          0        0    lo0
66.29.58.68        66.29.58.68        UH          0        0    lo0
66.29.58.69        66.29.58.69        UH          0        0    lo0
66.29.58.70        66.29.58.70        UH          0        0    lo0
66.246.72.0/24     link#1             UC          0        0    em0
66.246.72.1        00:00:5e:00:01:01  UHLW        2        0    em0    671
66.246.72.2        00:90:69:9d:24:00  UHLW        1        0    em0   1200
66.246.72.3        00:90:69:9e:74:00  UHLW        1        0    em0   1199
66.246.72.188      00:1b:21:26:13:f2  UHLW        1     3078    lo0
127.0.0.1          127.0.0.1          UH          0    54997    lo0
172.16.0.0/24      10.8.8.32          UG1         0        0   tap0
192.168.1.0/24     10.8.8.33          UG1         0      210   tap0
192.168.2.0/24     10.8.8.33          UG1         0        0   tap0
192.168.5.0/24     10.8.8.32          UG1         0     3357   tap0
192.168.15.1       192.168.15.1       UH          0    57808    lo0
192.168.21.0/24    10.8.8.33          UG1         0        0   tap0
192.168.25.0/24    10.8.8.36          UG1         0      102   tap0
192.168.30.0/24    10.8.8.33          UG1         0        0   tap0

Internet6:
Destination                       Gateway                 Flags  Netif Expire
::/96                             ::1                     UGRS   lo0 =>
default                           2001:470:1f06:208::1    UGS    gif0
::1                               ::1                     UHL    lo0
::ffff:0.0.0.0/96                 ::1                     UGRS   lo0
2001:470:1f06:208::1              link#4                  UHL    gif0
2001:470:1f06:208::2              link#4                  UHL    lo0
2001:470:1f07:208::/64            fe80::1%lo0             U      lo0
2001:470:1f07:208::beef:cafe      link#2                  UHL    lo0
2001:470:89e1::/112               link#5                  UC     tap0
2001:470:89e1::1                  00:bd:89:02:01:00       UHL    lo0
fe80::/10                         ::1                     UGRS   lo0
fe80::%em0/64                     link#1                  UC     em0
fe80::21b:21ff:fe26:13f2%em0      00:1b:21:26:13:f2       UHL    lo0
fe80::%lo0/64                     fe80::1%lo0             U      lo0
fe80::1%lo0                       link#2                  UHL    lo0
fe80::%gre0/64                    link#3                  UC     gre0
fe80::21b:21ff:fe26:13f2%gre0     link#3                  UHL    lo0
fe80::%gif0/64                    link#4                  UC     gif0
fe80::21b:21ff:fe26:13f2%gif0     link#4                  UHL    lo0
fe80::%tap0/64                    link#5                  UC     tap0
fe80::2bd:89ff:fe02:100%tap0      00:bd:89:02:01:00       UHL    lo0
fe80::%tun10/64                   link#6                  UC     tun10
fe80::21b:21ff:fe26:13f2%tun10    link#6                  UHL    lo0
ff01:1::/32                       link#1                  UC     em0
ff01:2::/32                       ::1                     UC     lo0
ff01:3::/32                       link#3                  UC     gre0
ff01:4::/32                       link#4                  UC     gif0
ff01:5::/32                       link#5                  UC     tap0
ff01:6::/32                       link#6                  UC     tun10
ff02::/16                         ::1                     UGRS   lo0
ff02::%em0/32                     link#1                  UC     em0
ff02::%lo0/32                     ::1                     UC     lo0
ff02::%gre0/32                    link#3                  UC     gre0
ff02::%gif0/32                    link#4                  UC     gif0
ff02::%tap0/32                    link#5                  UC     tap0
ff02::%tun10/32                   link#6                  UC     tun10

-- sysctl net.inet kern.ipc --
net.inet.ip.portrange.randomtime: 45
net.inet.ip.portrange.randomcps: 10
net.inet.ip.portrange.randomized: 1
net.inet.ip.portrange.reservedlow: 0
net.inet.ip.portrange.reservedhigh: 1023
net.inet.ip.portrange.hilast: 65535
net.inet.ip.portrange.hifirst: 49152
net.inet.ip.portrange.last: 65535
net.inet.ip.portrange.first: 49152
net.inet.ip.portrange.lowlast: 600
net.inet.ip.portrange.lowfirst: 1023
net.inet.ip.forwarding: 1
net.inet.ip.redirect: 1
net.inet.ip.ttl: 64
net.inet.ip.rtexpire: 3600
net.inet.ip.rtminexpire: 10
net.inet.ip.rtmaxcache: 128
net.inet.ip.sourceroute: 0
net.inet.ip.intr_queue_maxlen: 50
net.inet.ip.intr_queue_drops: 0
net.inet.ip.accept_sourceroute: 0
net.inet.ip.keepfaith: 0
net.inet.ip.gifttl: 30
net.inet.ip.same_prefix_carp_only: 0
net.inet.ip.subnets_are_local: 0
net.inet.ip.fastforwarding: 0
net.inet.ip.maxfragpackets: 800
net.inet.ip.maxfragsperpacket: 16
net.inet.ip.fragpackets: 0
net.inet.ip.check_interface: 0
net.inet.ip.random_id: 0
net.inet.ip.sendsourcequench: 0
net.inet.ip.process_options: 1
net.inet.icmp.maskrepl: 0
net.inet.icmp.icmplim: 200
net.inet.icmp.bmcastecho: 0
net.inet.icmp.quotelen: 8
net.inet.icmp.reply_from_interface: 0
net.inet.icmp.reply_src:
net.inet.icmp.icmplim_output: 1
net.inet.icmp.log_redirect: 0
net.inet.icmp.drop_redirect: 0
net.inet.icmp.maskfake: 0
net.inet.ipip.ipip_allow: 0
net.inet.tcp.rfc1323: 1
net.inet.tcp.mssdflt: 512
net.inet.tcp.keepidle: 7200000
net.inet.tcp.keepintvl: 75000
net.inet.tcp.sendspace: 32768
net.inet.tcp.recvspace: 65536
net.inet.tcp.keepinit: 75000
net.inet.tcp.delacktime: 100
net.inet.tcp.v6mssdflt: 1024
net.inet.tcp.hostcache.purge: 0
net.inet.tcp.hostcache.prune: 300
net.inet.tcp.hostcache.expire: 3600
net.inet.tcp.hostcache.count: 472
net.inet.tcp.hostcache.bucketlimit: 30
net.inet.tcp.hostcache.hashsize: 512
net.inet.tcp.hostcache.cachelimit: 15360
net.inet.tcp.recvbuf_max: 262144
net.inet.tcp.recvbuf_inc: 16384
net.inet.tcp.recvbuf_auto: 1
net.inet.tcp.insecure_rst: 0
net.inet.tcp.rfc3390: 1
net.inet.tcp.rfc3042: 1
net.inet.tcp.drop_synfin: 0
net.inet.tcp.delayed_ack: 1
net.inet.tcp.blackhole: 0
net.inet.tcp.log_in_vain: 0
net.inet.tcp.sendbuf_max: 262144
net.inet.tcp.sendbuf_inc: 8192
net.inet.tcp.sendbuf_auto: 1
net.inet.tcp.tso: 1
net.inet.tcp.newreno: 1
net.inet.tcp.local_slowstart_flightsize: 4
net.inet.tcp.slowstart_flightsize: 1
net.inet.tcp.path_mtu_discovery: 1
net.inet.tcp.reass.overflows: 0
net.inet.tcp.reass.maxqlen: 48
net.inet.tcp.reass.cursegments: 0
net.inet.tcp.reass.maxsegments: 1600
net.inet.tcp.sack.globalholes: 0
net.inet.tcp.sack.globalmaxholes: 65536
net.inet.tcp.sack.maxholes: 128
net.inet.tcp.sack.enable: 1
net.inet.tcp.inflight.stab: 20
net.inet.tcp.inflight.max: 1073725440
net.inet.tcp.inflight.min: 6144
net.inet.tcp.inflight.rttthresh: 10
net.inet.tcp.inflight.debug: 0
net.inet.tcp.inflight.enable: 1
net.inet.tcp.isn_reseed_interval: 0
net.inet.tcp.icmp_may_rst: 1
net.inet.tcp.pcbcount: 134
net.inet.tcp.do_tcpdrain: 1
net.inet.tcp.tcbhashsize: 512
net.inet.tcp.log_debug: 0
net.inet.tcp.minmss: 216
net.inet.tcp.syncache.rst_on_sock_fail: 1
net.inet.tcp.syncache.rexmtlimit: 3
net.inet.tcp.syncache.hashsize: 512
net.inet.tcp.syncache.count: 13
net.inet.tcp.syncache.cachelimit: 15360
net.inet.tcp.syncache.bucketlimit: 30
net.inet.tcp.syncookies_only: 0
net.inet.tcp.syncookies: 1
net.inet.tcp.timer_race: 0
net.inet.tcp.finwait2_timeout: 60000
net.inet.tcp.fast_finwait2_recycle: 0
net.inet.tcp.always_keepalive: 1
net.inet.tcp.rexmit_slop: 200
net.inet.tcp.rexmit_min: 30
net.inet.tcp.msl: 30000
net.inet.tcp.nolocaltimewait: 0
net.inet.tcp.maxtcptw: 5120
net.inet.udp.checksum: 1
net.inet.udp.maxdgram: 9216
net.inet.udp.recvspace: 42080
net.inet.udp.soreceive_dgram_enabled: 0
net.inet.udp.blackhole: 0
net.inet.udp.log_in_vain: 0
net.inet.esp.esp_enable: 1
net.inet.ah.ah_cleartos: 1
net.inet.ah.ah_enable: 1
net.inet.ipcomp.ipcomp_enable: 0
net.inet.sctp.enable_sack_immediately: 0
net.inet.sctp.udp_tunneling_port: 0
net.inet.sctp.udp_tunneling_for_client_enable: 0
net.inet.sctp.mobility_fasthandoff: 0
net.inet.sctp.mobility_base: 0
net.inet.sctp.default_frag_interleave: 1
net.inet.sctp.default_cc_module: 0
net.inet.sctp.log_level: 0
net.inet.sctp.max_retran_chunk: 30
net.inet.sctp.min_residual: 1452
net.inet.sctp.strict_data_order: 0
net.inet.sctp.abort_at_limit: 0
net.inet.sctp.hb_max_burst: 4
net.inet.sctp.do_sctp_drain: 1
net.inet.sctp.max_chained_mbufs: 5
net.inet.sctp.abc_l_var: 1
net.inet.sctp.nat_friendly: 1
net.inet.sctp.auth_disable: 0
net.inet.sctp.asconf_auth_nochk: 0
net.inet.sctp.early_fast_retran_msec: 250
net.inet.sctp.early_fast_retran: 0
net.inet.sctp.cwnd_maxburst: 1
net.inet.sctp.cmt_pf: 0
net.inet.sctp.cmt_use_dac: 0
net.inet.sctp.cmt_on_off: 0
net.inet.sctp.outgoing_streams: 10
net.inet.sctp.add_more_on_output: 1452
net.inet.sctp.path_rtx_max: 5
net.inet.sctp.assoc_rtx_max: 10
net.inet.sctp.init_rtx_max: 8
net.inet.sctp.valid_cookie_life: 60000
net.inet.sctp.init_rto_max: 60000
net.inet.sctp.rto_initial: 3000
net.inet.sctp.rto_min: 1000
net.inet.sctp.rto_max: 60000
net.inet.sctp.secret_lifetime: 3600
net.inet.sctp.shutdown_guard_time: 180
net.inet.sctp.pmtu_raise_time: 600
net.inet.sctp.heartbeat_interval: 30000
net.inet.sctp.asoc_resource: 10
net.inet.sctp.sys_resource: 1000
net.inet.sctp.sack_freq: 2
net.inet.sctp.delayed_sack_time: 200
net.inet.sctp.chunkscale: 10
net.inet.sctp.min_split_point: 2904
net.inet.sctp.pcbhashsize: 256
net.inet.sctp.tcbhashsize: 1024
net.inet.sctp.maxchunks: 3200
net.inet.sctp.maxburst: 4
net.inet.sctp.peer_chkoh: 256
net.inet.sctp.strict_init: 1
net.inet.sctp.loopback_nocsum: 1
net.inet.sctp.strict_sacks: 0
net.inet.sctp.ecn_nonce: 0
net.inet.sctp.ecn_enable: 1
net.inet.sctp.auto_asconf: 1
net.inet.sctp.recvspace: 233016
net.inet.sctp.sendspace: 233016
net.inet.ipsec.def_policy: 1
net.inet.ipsec.esp_trans_deflev: 1
net.inet.ipsec.esp_net_deflev: 1
net.inet.ipsec.ah_trans_deflev: 1
net.inet.ipsec.ah_net_deflev: 1
net.inet.ipsec.ah_cleartos: 1
net.inet.ipsec.ah_offsetmask: 0
net.inet.ipsec.dfbit: 0
net.inet.ipsec.ecn: 0
net.inet.ipsec.debug: 0
net.inet.ipsec.esp_randpad: -1
net.inet.ipsec.crypto_support: 50331648
net.inet.raw.recvspace: 9216
net.inet.raw.maxdgram: 9216
net.inet.accf.unloadable: 0
kern.ipc.maxsockbuf: 262144
kern.ipc.sockbuf_waste_factor: 8
kern.ipc.somaxconn: 128
kern.ipc.max_linkhdr: 16
kern.ipc.max_protohdr: 60
kern.ipc.max_hdr: 76
kern.ipc.max_datalen: 128
kern.ipc.nmbjumbo16: 3200
kern.ipc.nmbjumbo9: 6400
kern.ipc.nmbjumbop: 12800
kern.ipc.nmbclusters: 25600
kern.ipc.piperesizeallowed: 1
kern.ipc.piperesizefail: 0
kern.ipc.pipeallocfail: 0
kern.ipc.pipefragretry: 0
kern.ipc.pipekva: 1409024
kern.ipc.maxpipekva: 26841088
kern.ipc.msgseg: 2048
kern.ipc.msgssz: 8
kern.ipc.msgtql: 40
kern.ipc.msgmnb: 2048
kern.ipc.msgmni: 40
kern.ipc.msgmax: 16384
kern.ipc.semaem: 16384
kern.ipc.semvmx: 32767
kern.ipc.semusz: 812
kern.ipc.semume: 100
kern.ipc.semopm: 100
kern.ipc.semmsl: 512
kern.ipc.semmnu: 256
kern.ipc.semmns: 512
kern.ipc.semmni: 128
kern.ipc.semmap: 30
kern.ipc.shm_allow_removed: 0
kern.ipc.shm_use_phys: 0
kern.ipc.shmall: 131072
kern.ipc.shmseg: 128
kern.ipc.shmmni: 192
kern.ipc.shmmin: 1
kern.ipc.shmmax: 536870912
kern.ipc.maxsockets: 25600
kern.ipc.numopensockets: 348
kern.ipc.nsfbufsused: 0
kern.ipc.nsfbufspeak: 37
kern.ipc.nsfbufs: 6656

-- kldstat --
Id Refs Address    Size   Name
 1    7 0xc0400000 4a4314 kernel
 2    1 0xc08a5000 6a2c4  acpi.ko
 3    1 0xc3e2b000 4000   if_gre.ko
 4    1 0xc3e36000 5000   if_gif.ko
 5    1 0xc3eda000 33000  pf.ko
 6    1 0xc433f000 5000   if_tap.ko
Adding a bit more info. Here's the netstat -m and vmstat -z output after the mbufs have had a while to build up:

-- netstat -m --
959510/235/959745 mbufs in use (current/cache/total)
257/133/390/25600 mbuf clusters in use (current/cache/total/max)
257/127 mbuf+clusters out of packet secondary zone in use (current/cache)
0/117/117/12800 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/6400 9k jumbo clusters in use (current/cache/total/max)
0/0/0/3200 16k jumbo clusters in use (current/cache/total/max)
240391K/792K/241184K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0/152/6656 sfbufs in use (current/peak/max)
0 requests for sfbufs denied
0 requests for sfbufs delayed
2716 requests for I/O initiated by sendfile
0 calls to protocol drain routines

-- vmstat -z --
ITEM                 SIZE   LIMIT    USED    FREE  REQUESTS  FAILURES
UMA Kegs: 128, 0, 99, 21, 99, 0
UMA Zones: 120, 0, 99, 21, 99, 0
UMA Slabs: 64, 0, 1132, 48, 15590, 0
UMA RCntSlabs: 104, 0, 312, 21, 856, 0
UMA Hash: 128, 0, 4, 26, 7, 0
16 Bucket: 76, 0, 21, 29, 64, 0
32 Bucket: 140, 0, 32, 24, 107, 0
64 Bucket: 268, 0, 34, 8, 142, 0
128 Bucket: 524, 0, 628, 9, 45698, 5387
VM OBJECT: 124, 0, 9136, 32404, 20553589, 0
MAP: 140, 0, 7, 49, 7, 0
KMAP ENTRY: 68, 56560, 17, 487, 153898, 0
MAP ENTRY: 68, 0, 15360, 2336, 53481139, 0
DP fakepg: 72, 0, 0, 0, 0, 0
mt_zone: 72, 0, 194, 71, 194, 0
16: 16, 0, 1996, 643, 81641919, 0
32: 32, 0, 2542, 283, 9470304, 0
64: 64, 0, 4584, 7039, 61275174, 0
128: 128, 0, 1101, 549, 6280868, 0
256: 256, 0, 1299, 366, 20697494, 0
512: 512, 0, 145, 279, 79529, 0
1024: 1024, 0, 173, 71, 532899, 0
2048: 2048, 0, 104, 146, 14198, 0
4096: 4096, 0, 266, 63, 1242097, 0
Files: 76, 0, 1595, 655, 16940144, 0
TURNSTILE: 76, 0, 295, 41, 295, 0
umtx pi: 52, 0, 0, 0, 0, 0
PROC: 696, 0, 229, 46, 541535, 0
THREAD: 552, 0, 283, 11, 952, 0
UPCALL: 44, 0, 0, 0, 0, 0
SLEEPQUEUE: 32, 0, 295, 157, 295, 0
VMSPACE: 232, 0, 196, 59, 541501, 0
cpuset: 40, 0, 2, 182, 2, 0
mbuf_packet: 256, 0, 258, 126, 20757971, 0
mbuf: 256, 0, 960618, 33, 116817015, 0
mbuf_cluster: 2048, 25600, 384, 6, 384, 0
mbuf_jumbo_pagesize: 4096, 12800, 0, 117, 19593255, 0
mbuf_jumbo_9k: 9216, 6400, 0, 0, 0, 0
mbuf_jumbo_16k: 16384, 3200, 0, 0, 0, 0
mbuf_ext_refcnt: 4, 0, 0, 406, 26807, 0
ACL UMA zone: 388, 0, 0, 20, 65073710, 0
g_bio: 132, 0, 12, 423, 10043465, 0
ata_request: 192, 0, 3, 337, 3212743, 0
ata_composite: 184, 0, 0, 0, 0, 0
cryptop: 60, 0, 0, 0, 0, 0
cryptodesc: 56, 0, 0, 0, 0, 0
VNODE: 276, 0, 19330, 33618, 3235279, 0
VNODEPOLL: 64, 0, 0, 0, 0, 0
NAMEI: 1024, 0, 0, 28, 38128632, 0
S VFS Cache: 68, 0, 20134, 23490, 10825579, 0
L VFS Cache: 291, 0, 27, 207, 175327, 0
NFSMOUNT: 560, 0, 0, 0, 0, 0
NFSNODE: 452, 0, 0, 0, 0, 0
DIRHASH: 1024, 0, 1946, 170, 28110, 0
pipe: 396, 0, 89, 31, 229902, 0
ksiginfo: 80, 0, 248, 40, 248, 0
itimer: 220, 0, 0, 36, 1, 0
KNOTE: 68, 0, 175, 217, 2157098, 0
socket: 416, 25605, 404, 127, 3609347, 0
unpcb: 168, 25622, 278, 136, 1352116, 0
ipq: 32, 904, 0, 226, 42, 0
udpcb: 180, 25608, 13, 75, 1774380, 0
inpcb: 180, 25608, 155, 197, 482839, 0
tcpcb: 464, 25600, 92, 60, 482839, 0
tcptw: 52, 5184, 63, 297, 96034, 0
syncache: 104, 15392, 9, 102, 474336, 0
hostcache: 76, 15400, 626, 124, 4537, 0
tcpreass: 20, 1690, 0, 169, 92838, 0
sackhole: 20, 0, 0, 169, 7789, 0
sctp_ep: 804, 25600, 0, 0, 0, 0
sctp_asoc: 1412, 40000, 0, 0, 0, 0
sctp_laddr: 24, 80040, 0, 145, 28, 0
sctp_raddr: 400, 80000, 0, 0, 0, 0
sctp_chunk: 92, 400008, 0, 0, 0, 0
sctp_readq: 76, 400000, 0, 0, 0, 0
sctp_stream_msg_out: 60, 400050, 0, 0, 0, 0
sctp_asconf: 24, 400055, 0, 0, 0, 0
sctp_asconf_ack: 24, 400055, 0, 0, 0, 0
ripcb: 180, 25608, 1, 43, 2, 0
rtentry: 124, 0, 78, 46, 3549, 0
SWAPMETA: 276, 121576, 158, 108, 19610, 0
Mountpoints: 716, 0, 6, 4, 6, 0
FFS inode: 128, 0, 19299, 19191, 3235170, 0
FFS1 dinode: 128, 0, 0, 0, 0, 0
FFS2 dinode: 256, 0, 19299, 16011, 3235170, 0
pfsrctrpl: 124, 10013, 0, 0, 0, 0
pfrulepl: 828, 0, 12, 8, 12, 0
pfstatepl: 284, 10010, 0, 0, 0, 0
pfaltqpl: 224, 0, 0, 0, 0, 0
pfpooladdrpl: 68, 0, 6, 106, 6, 0
pfrktable: 1240, 1002, 1, 5, 2, 0
pfrkentry: 156, 200000, 3, 47, 3, 0
pfrkentry2: 156, 0, 0, 0, 0, 0
pffrent: 16, 5075, 0, 0, 0, 0
pffrag: 48, 0, 0, 0, 0, 0
pffrcache: 48, 10062, 0, 0, 0, 0
pffrcent: 12, 50141, 0, 0, 0, 0
pfstatescrub: 28, 0, 0, 0, 0, 0
pfiaddrpl: 100, 0, 0, 0, 0, 0
pfospfen: 108, 0, 696, 24, 696, 0
pfosfp: 28, 0, 407, 228, 407, 0
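Comparing the first field of the "mbufs in use" lines from the two netstat -m samples in this PR (68264 in the original report, 959510 here) quantifies the leak. A small sketch of that arithmetic:

```shell
# The two headline lines are copied from the netstat -m output quoted
# in this PR; the first field before '/' is the current mbuf count.
FIRST='68264/226/68490 mbufs in use (current/cache/total)'
LATER='959510/235/959745 mbufs in use (current/cache/total)'
a=$(echo "$FIRST" | awk -F/ '{print $1}')
b=$(echo "$LATER" | awk -F/ '{print $1}')
DELTA=$((b - a))
echo "leaked: $DELTA mbufs between the two samples"
```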
I narrowed this bug down to the following patch to djb's ucspi-tcp (it adds IPv6 functionality): http://www.fefe.de/ucspi/

I don't think that userland processes should be able to wreak that much havoc on the network stack. Also of note: even if I kill the processes causing the problem, the mbufs are never reclaimed, so it seems like a permanent leak. When I got rid of the IPv6 patch, the mbufs stopped building up and everything has been fine since. Note that the IPv6 traffic for this process was fairly minimal.
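In hindsight, the vmstat -z loop from the original report generalizes to diffing every UMA zone between snapshots, which points at a leaking zone by name instead of watching only the mbuf total. A sketch using two illustrative snapshot fragments (the mbuf USED delta below is modeled on this PR's ~24k-per-30-minutes numbers; the rest is made up):

```shell
# Diff the USED column (field 3) of two vmstat -z snapshots per zone.
# In real use, T1 and T2 would be `vmstat -z` captures taken some
# interval apart; here two illustrative fragments stand in for them.
T1=$(mktemp); T2=$(mktemp)
cat > "$T1" <<'EOF'
mbuf:                     256,        0,   936554,       33,116000000,        0
socket:                   416,    25605,      404,      127,  3609347,        0
EOF
cat > "$T2" <<'EOF'
mbuf:                     256,        0,   960618,       33,116817015,        0
socket:                   416,    25605,      404,      127,  3609500,        0
EOF
# first pass records USED per zone; second pass prints nonzero deltas
OUT=$(awk -F, 'NR==FNR { used[$1] = $3; next }
               { d = $3 - used[$1]
                 if (d != 0) { sub(/:.*/, ":", $1); print $1, d } }' \
      "$T1" "$T2")
echo "$OUT"
rm -f "$T1" "$T2"
```

Only zones whose USED count moved are printed, so a steadily leaking zone stands out immediately.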
For bugs matching the following criteria:

  Status: In Progress; Changed: (is less than) 2014-06-01

Reset to default assignee and clear in-progress tags. Mail being skipped.