Bug 192825 - Continual Core Dumps vm_page_unwire: wire count is zero
Summary: Continual Core Dumps vm_page_unwire: wire count is zero
Status: Closed Overcome By Events
Alias: None
Product: Base System
Classification: Unclassified
Component: kern
Version: 9.2-RELEASE
Hardware: i386 Any
Importance: Normal Affects Many People
Assignee: freebsd-bugs (Nobody)
URL: http://www.mhix.org/FreeBSDCores/92-i...
Keywords:
Depends on:
Blocks:
 
Reported: 2014-08-19 09:59 UTC by Michelle Sullivan
Modified: 2022-06-13 12:47 UTC
CC List: 4 users

See Also:
Flags: cryptogodfatherva45: maintainer-feedback+


Attachments

Description Michelle Sullivan 2014-08-19 09:59:35 UTC
This is happening continuously.

This is a VirtualBox VM: the parent host runs vbox-ose-4.3.12 on 9.2-amd64, with 9.2-i386 as the guest (which produced this core) ... I don't see the same issue with a 9.2-amd64 guest.

Rest of the dump here: http://www.mhix.org/FreeBSDCores/92-i386/Tue.Aug.19.11.34.18.CEST.2014/

These panics are happening every 30-60 minutes or so, so I can collect lots of them. The host is a poudriere build server with 2 jails (one for pkg_*, the other for pkgng).

92i386 dumped core - see /var/crash/vmcore.0

Tue Aug 19 11:28:06 CEST 2014

FreeBSD 92i386 9.2-RELEASE-p10 FreeBSD 9.2-RELEASE-p10 #0: Tue Jul  8 10:17:36 UTC 2014     root@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC  i386

panic: vm_page_unwire: page 0xc21c7b80's wire count is zero

GNU gdb 6.1.1 [FreeBSD]
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "i386-marcel-freebsd"...

Unread portion of the kernel message buffer:
panic: vm_page_unwire: page 0xc21c7b80's wire count is zero
cpuid = 1
KDB: stack backtrace:
#0 0xc0b1829f at kdb_backtrace+0x4f
#1 0xc0adf51f at panic+0x16f
#2 0xc0d75c6a at vm_page_unwire+0xfa
#3 0xc0d61993 at vm_fault_unwire+0xd3
#4 0xc0d69cc3 at vm_map_delete+0x173
#5 0xc0d69fb1 at vm_map_remove+0x51
#6 0xc0d677a0 at kmem_free+0x30
#7 0xc0d5b5b6 at page_free+0x46
#8 0xc0d5ce99 at uma_large_free+0x79
#9 0xc0ac667b at free+0xab
#10 0xc844b481 at arc_buf_data_free+0xd1
#11 0xc844b610 at arc_buf_destroy+0x180
#12 0xc844f035 at arc_evict+0x2d5
#13 0xc844f81d at arc_get_data_buf+0x22d
#14 0xc844fea7 at arc_buf_alloc+0x97
#15 0xc8458491 at dbuf_read+0x111
#16 0xc8460675 at dmu_buf_hold+0x105
#17 0xc84541e8 at bpobj_enqueue+0xf8
Uptime: 29m17s
Physical memory: 3563 MB
Dumping 494 MB: 479 463 447 431 415 399 383 367 351 335 319 303 287 271 255 239 223 207 191 175 159 143 127 111 95 79 63 47 31 15

Reading symbols from /boot/kernel/zfs.ko...Reading symbols from /boot/kernel/zfs.ko.symbols...done.
done.
Loaded symbols for /boot/kernel/zfs.ko
Reading symbols from /boot/kernel/opensolaris.ko...Reading symbols from /boot/kernel/opensolaris.ko.symbols...done.
done.
Loaded symbols for /boot/kernel/opensolaris.ko
Reading symbols from /boot/kernel/fdescfs.ko...Reading symbols from /boot/kernel/fdescfs.ko.symbols...done.
done.
Loaded symbols for /boot/kernel/fdescfs.ko
Reading symbols from /boot/kernel/nullfs.ko...Reading symbols from /boot/kernel/nullfs.ko.symbols...done.
done.
Loaded symbols for /boot/kernel/nullfs.ko
Reading symbols from /boot/kernel/tmpfs.ko...Reading symbols from /boot/kernel/tmpfs.ko.symbols...done.
done.
Loaded symbols for /boot/kernel/tmpfs.ko
#0  doadump (textdump=1) at pcpu.h:249
249	pcpu.h: No such file or directory.
	in pcpu.h
(kgdb) #0  doadump (textdump=1) at pcpu.h:249
#1  0xc0adf265 in kern_reboot (howto=260)
    at /usr/src/sys/kern/kern_shutdown.c:449
#2  0xc0adf562 in panic (fmt=<value optimized out>)
    at /usr/src/sys/kern/kern_shutdown.c:637
#3  0xc0d75c6a in vm_page_unwire (m=0xc21c7b80, activate=1)
    at /usr/src/sys/vm/vm_page.c:2018
#4  0xc0d61993 in vm_fault_unwire (map=0xc1bdd08c, start=3368665088, 
    end=3368681472, fictitious=0) at /usr/src/sys/vm/vm_fault.c:1225
#5  0xc0d69cc3 in vm_map_delete (map=0xc1bdd08c, start=3368665088, 
    end=3368681472) at /usr/src/sys/vm/vm_map.c:2676
#6  0xc0d69fb1 in vm_map_remove (map=0xc1bdd08c, start=3368665088, 
    end=3368681472) at /usr/src/sys/vm/vm_map.c:2866
#7  0xc0d677a0 in kmem_free (map=0xc1bdd08c, addr=3368665088, size=16384)
    at /usr/src/sys/vm/vm_kern.c:214
#8  0xc0d5b5b6 in page_free (mem=0xc8c9c000, size=16384, flags=34 '"')
    at /usr/src/sys/vm/uma_core.c:1082
#9  0xc0d5ce99 in uma_large_free (slab=0xc8c826a8)
    at /usr/src/sys/vm/uma_core.c:3086
#10 0xc0ac667b in free (addr=0xc8c9c000, mtp=0xc859711c)
    at /usr/src/sys/kern/kern_malloc.c:572
#11 0xc844b481 in arc_buf_data_free (buf=0xc8af39d8, 
    free_func=0xc84de5b0 <zio_buf_free>)
    at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:1633
#12 0xc844b610 in arc_buf_destroy (buf=0xc8af39d8, recycle=0, all=0)
    at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:1655
#13 0xc844f035 in arc_evict (state=0xc8582580, spa=0, bytes=131072, 
    recycle=1, type=ARC_BUFC_METADATA)
    at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:2010
#14 0xc844f81d in arc_get_data_buf (buf=0xdbae8028)
    at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:2742
#15 0xc844fea7 in arc_buf_alloc (spa=0xc83a8000, size=131072, tag=0xdca142a8, 
    type=ARC_BUFC_METADATA)
    at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c:1488
#16 0xc8458491 in dbuf_read (db=0xdca142a8, zio=0xd7f372e8, flags=2)
    at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dbuf.c:554
#17 0xc8460675 in dmu_buf_hold (os=0xc8a9b400, object=455, offset=0, 
    tag=0xca157b98, dbp=0xca157bc8, flags=0)
    at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu.c:153
#18 0xc84541e8 in bpobj_enqueue (bpo=0xca157b98, bp=0xca8dc000, tx=0xca3e8600)
    at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/bpobj.c:476
#19 0xc8483b79 in dsl_deadlist_insert (dl=0xc8ae5c20, bp=0xca8dc000, 
    tx=0xca3e8600)
    at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_deadlist.c:190
#20 0xc8487b9f in deadlist_enqueue_cb (arg=0xc8ae5c20, bp=0xca8dc000, 
    tx=0xca3e8600)
    at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_pool.c:368
#21 0xc8453dc1 in bplist_iterate (bpl=0xc8ae5c98, 
    func=0xc8487b80 <deadlist_enqueue_cb>, arg=0xc8ae5c20, tx=0xca3e8600)
    at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/bplist.c:72
#22 0xc8488cc1 in dsl_pool_sync (dp=0xc8a90000, txg=106644)
    at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_pool.c:451
#23 0xc84aad7d in spa_sync (spa=0xc83a8000, txg=106644)
    at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/spa.c:6358
#24 0xc84b52d5 in txg_sync_thread (arg=0xc8a90000)
    at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/txg.c:511
#25 0xc0aaaf4f in fork_exit (callout=0xc84b5190 <txg_sync_thread>, 
    arg=0xc8a90000, frame=0xe7f28d08) at /usr/src/sys/kern/kern_fork.c:992
#26 0xc0f36a54 in fork_trampoline () at /usr/src/sys/i386/i386/exception.s:279
(kgdb)
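
For reference, the check that fires this panic lives in vm_page_unwire() (vm_page.c:2018 in this kernel); the kmem_free() -> vm_map_delete() -> vm_fault_unwire() path above reaches it with a page whose wire count has already dropped to zero. A simplified sketch of the 9.x-era logic (from memory, not the exact 9.2 source; the real code also special-cases fictitious and unmanaged pages):

void
vm_page_unwire(vm_page_t m, int activate)
{
	/* Managed pages must be unwired with the page lock held. */
	if ((m->oflags & VPO_UNMANAGED) == 0)
		vm_page_lock_assert(m, MA_OWNED);
	if (m->wire_count > 0) {
		m->wire_count--;
		if (m->wire_count == 0) {
			/* Last wiring dropped: leave the global wired count... */
			atomic_subtract_int(&cnt.v_wire_count, 1);
			/* ...and put the page back on a paging queue. */
			if (activate)
				vm_page_activate(m);
			else
				vm_page_deactivate(m);
		}
	} else
		panic("vm_page_unwire: page %p's wire count is zero", m);
}

So the panic indicates an unbalanced wire/unwire: something released (or never took) the wiring on this kmem page before vm_fault_unwire() tried to drop it.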
Comment 1 Michelle Sullivan 2015-02-13 00:28:06 UTC
Lots more cores in the same place (the URL above) and in the i386 directory.  The issue appears to be limited to the i386 platform with ZFS/ARC, occurring when physical memory is exhausted.

Panics are very common when (for example) checking out the ports tree onto a ZFS filesystem (e.g. using poudriere)...

Removing all swap devices stops the panics.
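
Roughly, the no-swap workaround is just something like this (a sketch only; device names and fstab layout obviously vary per machine):

# turn off every swap device configured in /etc/fstab
swapoff -a
# confirm nothing is left
swapinfo
# then comment out (or delete) the swap lines in /etc/fstab
# so they do not come back on the next boot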
Comment 2 Michelle Sullivan 2015-02-13 00:29:34 UTC
I will check soon whether the same occurs on 9.3 (I have had many panics on 9.3 i386, but until now had no dumpdev configured).
Comment 3 Masse Nicolas 2015-06-01 09:10:06 UTC
I have the same issue here on FreeBSD 9.3, and I'm working on a physical machine, not a virtual one.
Stack trace:
#0  doadump (textdump=16192120) at ../../../kern/kern_shutdown.c:277
#1  0xffffffff802e8d5c in db_fncall_generic (addr=-2142138064, rv=0xffffff8000f71218, nargs=0, args=0xffffff8000f711c0) at ../../../ddb/db_command.c:573
#2  0xffffffff802e8c65 in db_fncall (dummy1=-549739621760, dummy2=0, dummy3=-549739621568, dummy4=0xffffff8000f71290 "<o\211\200����\204\034�\200����")
    at ../../../ddb/db_command.c:625
#3  0xffffffff802e8899 in db_command (last_cmdp=0xffffffff80bf0d20, cmd_table=0x0, dopager=0) at ../../../ddb/db_command.c:449
#4  0xffffffff802e8a69 in db_command_script (command=0xffffffff80bf1c85 "call doadump()") at ../../../ddb/db_command.c:520
#5  0xffffffff802edefe in db_script_exec (scriptname=0xffffffff80896f3c "kdb.enter.default", warnifnotfound=0) at ../../../ddb/db_script.c:302
#6  0xffffffff802edfc5 in db_script_kdbenter (eventname=0xffffffff808e23a2 "panic") at ../../../ddb/db_script.c:325
#7  0xffffffff802eb80b in db_trap (type=3, code=0) at ../../../ddb/db_main.c:230
#8  0xffffffff8055d56b in kdb_trap (type=3, code=0, tf=0xffffff8000f71770) at ../../../kern/subr_kdb.c:651
#9  0xffffffff80804b72 in trap (frame=0xffffff8000f71770) at ../../../amd64/amd64/trap.c:572
#10 0xffffffff807e7083 in calltrap () at ../../../amd64/amd64/exception.S:232
#11 0xffffffff8055cfd5 in breakpoint () at cpufunc.h:63
#12 0xffffffff8055cfc2 in kdb_enter (why=0xffffffff808e23a2 "panic", msg=0xffffffff808e23a2 "panic") at ../../../kern/subr_kdb.c:440
#13 0xffffffff8051a181 in panic (fmt=0xffffffff8092caa0 "vm_page_unwire: page %p's wire count is zero") at ../../../kern/kern_shutdown.c:736
#14 0xffffffff807d0836 in vm_page_unwire (m=0xfffffe00d91e90c8, activate=1) at ../../../vm/vm_page.c:2219
#15 0xffffffff807b8822 in vm_fault_unwire (map=0xfffffe0076ddb4b0, start=34366525440, end=34366529536, fictitious=0) at ../../../vm/vm_fault.c:1242
#16 0xffffffff807c18e9 in vm_map_entry_unwire (map=0xfffffe0076ddb4b0, entry=0xfffffe008d434600) at ../../../vm/vm_map.c:2754
#17 0xffffffff807c1e44 in vm_map_delete (map=0xfffffe0076ddb4b0, start=34366525440, end=34366529536) at ../../../vm/vm_map.c:2916
#18 0xffffffff807c5a9c in sys_munmap (td=0xfffffe0076d29490, uap=0xffffff8000f71bc0) at ../../../vm/vm_mmap.c:613
#19 0xffffffff8080626c in syscallenter (td=0xfffffe0076d29490, sa=0xffffff8000f71bb0) at subr_syscall.c:133
#20 0xffffffff80805d79 in amd64_syscall (td=0xfffffe0076d29490, traced=0) at ../../../amd64/amd64/trap.c:979
#21 0xffffffff807e7367 in Xfast_syscall () at ../../../amd64/amd64/exception.S:391
#22 0x00000008011ec16c in ?? ()
Comment 4 Masse Nicolas 2015-06-01 09:14:21 UTC
FYI :
(kgdb) f 14
#14 0xffffffff807d0836 in vm_page_unwire (m=0xfffffe00d91e90c8, activate=1) at ../../../vm/vm_page.c:2219
2219	../../../vm/vm_page.c: No such file or directory.
	in ../../../vm/vm_page.c
(kgdb) p *m
$1 = {
  pageq = {
    tqe_next = 0xfffffe00d91a4608, 
    tqe_prev = 0xfffffe00d91a3618
  }, 
  listq = {
    tqe_next = 0x0, 
    tqe_prev = 0xfffffe0004cfc6a0
  }, 
  left = 0x0, 
  right = 0x0, 
  object = 0xfffffe0004cfc658, 
  pindex = 0, 
  phys_addr = 3352719360, 
  md = {
    pv_list = {
      tqh_first = 0xfffffe0004d86040, 
      tqh_last = 0xfffffe0004d86048
    }, 
    pat_mode = 6
  }, 
  queue = 1 '\001', 
  segind = 2 '\002', 
  hold_count = 0, 
  order = 13 '\r', 
  pool = 0 '\0', 
  cow = 0, 
  wire_count = 0, 
  aflags = 3 '\003', 
  flags = 0 '\0', 
  oflags = 0, 
  act_count = 5 '\005', 
  busy = 0 '\0', 
  valid = 255 '\377', 
  dirty = 255 '\377'
}
Comment 5 Michelle Sullivan 2015-06-01 10:41:04 UTC
Nicolas: Single or multiple CPU?
Comment 6 Masse Nicolas 2015-06-01 10:50:37 UTC
Multiple (one quad-core CPU, to be exact).
Comment 7 Michelle Sullivan 2015-06-01 11:23:43 UTC
Probably not going to help you then, but I have found that switching to a single CPU stops the panics.

From my investigations, with multiple CPUs the ARC/metadata blows out kmem until it is exhausted (ignoring the *_max settings). With a single CPU this still happens, but after a while it resets itself back to the *_max limits...
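
For context, the *_max settings referred to here are the usual loader.conf tunables for constraining ARC and kmem on i386, something along these lines (values purely illustrative, not a recommendation):

# /boot/loader.conf -- illustrative values only
vm.kmem_size="512M"
vm.kmem_size_max="512M"
vfs.zfs.arc_max="256M"
vfs.zfs.arc_meta_limit="64M"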
Comment 8 Easywork Net 2022-05-03 07:21:53 UTC
HI
Comment 9 Mark Johnston (FreeBSD committer, triage) 2022-06-13 12:47:29 UTC
I'm sorry that this didn't get any attention when it was submitted.  The panicking code has changed substantially since 9.2.  Please re-open if this bug still occurs on supported FreeBSD versions.