Bug 197174 - panic: vm_radix_insert: key 473 is already present
Summary: panic: vm_radix_insert: key 473 is already present
Status: Closed Overcome By Events
Alias: None
Product: Base System
Classification: Unclassified
Component: kern
Version: 10.0-RELEASE
Hardware: amd64 Any
Importance: --- Affects Only Me
Assignee: freebsd-bugs (Nobody)
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2015-01-29 14:44 UTC by Vick Khera
Modified: 2019-05-08 13:54 UTC
CC List: 0 users

See Also:


Attachments
dmesg.boot from d06 server (18.58 KB, text/plain)
2015-01-29 14:44 UTC, Vick Khera

Description Vick Khera 2015-01-29 14:44:22 UTC
Created attachment 152336 [details]
dmesg.boot from d06 server

Recently, two identical boxes of mine started issuing the panic:

 panic: vm_radix_insert: key 473 is already present

after which the entire system locks up. The serial console is unresponsive and my only option is to power cycle the system.

The serial console emits the following at the time of the panic. All four panics (two on each box so far) have been identical apart from the key and CPU number in the initial panic line. The dmesg from one of the boxes is attached. The first such panic occurred after 300 days of uptime. The machine's main purpose is running a Postgres 9.2 server; the only other things it does are running NRPE for Nagios monitoring and slony1 for database replication. The database lives on a ZFS mirror file system.

I'm fairly sure it has something to do with the load on the server, since only the active system panics; the twin box is just a backup. I swapped roles after the second panic on the primary system to see whether it was hardware related. It clearly seems software related, as the other system started to panic once it became the master.

I posted about this a week ago but got no response. Since then I have had one more panic.  https://lists.freebsd.org/pipermail/freebsd-questions/2015-January/263732.html

I upgraded my original machine to FreeBSD 10.1 and Postgres 9.4, and will keep an eye out for panics with that configuration as well.
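
To capture more than the serial-console scraps if it happens again, enabling kernel crash dumps should help. A minimal setup sketch (assuming swap is large enough to hold a dump; the device name below is only a placeholder, not my real layout):

sysrc dumpdev="AUTO"        # have rc(8) pick the configured swap device at boot
sysrc dumpdir="/var/crash"  # where savecore(8) copies vmcore.N after the next boot
dumpon /dev/ada0p3          # placeholder device: activate a dump device now, without rebooting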

Here are the panic lines recorded on the serial console:


Jan 15 00:21:51 ts-prv src_dev_log@ts Buffering: S9.d05 [panic: vm_radix_insert: key 46f is already present  cpuid = 9  KDB: stack backtrace:  db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame 0xfffffe3fcf1b1820  ]
Jan 15 00:21:51 ts-prv src_dev_log@ts Buffering: S9.d05 [kdb_backtrace() at kdb_backtrace+0x39/frame 0xfffffe3fcf1b18d0  panic() at panic+0x155/frame 0xfffffe3fcf1b1950  ]
Jan 15 00:21:51 ts-prv src_dev_log@ts Buffering: S9.d05 [vm_radix_insert() at vm_radix_insert+0x2ed/frame 0xfffffe3fcf1b19b0  vm_page_cache() at vm_page_cache+0x121/frame 0xfffffe3fcf1b19f0  ]
Jan 15 00:21:51 ts-prv src_dev_log@ts Buffering: S9.d05 [vm_pageout() at vm_pageout+0x8f7/frame 0xfffffe3fcf1b1a70  fork_exit() at fork_exit+0x9a/frame 0xfffffe3fcf1b1ab0  ]
Jan 15 00:21:52 ts-prv src_dev_log@ts Buffering: S9.d05 [fork_trampoline() at fork_trampoline+0xe/frame 0xfffffe3fcf1b1ab0  --- trap 0, rip = 0, rsp = 0xfffffe3fcf1b1b70, rbp = 0 ---  ]

Jan 17 01:13:44 ts-prv src_dev_log@ts Buffering: S9.d05 [panic: vm_radix_insert: key 46f is already present  cpuid = 0  KDB: stack backtrace:  db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame 0xfffffe3fcf1b1820  ]
Jan 17 01:13:44 ts-prv src_dev_log@ts Buffering: S9.d05 [kdb_backtrace() at kdb_backtrace+0x39/frame 0xfffffe3fcf1b18d0  panic() at panic+0x155/frame 0xfffffe3fcf1b1950  ]
Jan 17 01:13:44 ts-prv src_dev_log@ts Buffering: S9.d05 [vm_radix_insert() at vm_radix_insert+0x2ed/frame 0xfffffe3fcf1b19b0  vm_page_cache() at vm_page_cache+0x121/frame 0xfffffe3fcf1b19f0  ]
Jan 17 01:13:44 ts-prv src_dev_log@ts Buffering: S9.d05 [vm_pageout() at vm_pageout+0x8f7/frame 0xfffffe3fcf1b1a70  fork_exit() at fork_exit+0x9a/frame 0xfffffe3fcf1b1ab0  ]
Jan 17 01:13:44 ts-prv src_dev_log@ts Buffering: S9.d05 [fork_trampoline() at fork_trampoline+0xe/frame 0xfffffe3fcf1b1ab0  --- trap 0, rip = 0, rsp = 0xfffffe3fcf1b1b70, rbp = 0 ---  ]

from the second machine:

Jan 21 23:15:51 ts-prv src_dev_log@ts Buffering: S13.d06 [panic: vm_radix_insert: key 473 is already present  cpuid = 11  KDB: stack backtrace:  db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/fra]
Jan 21 23:15:51 ts-prv src_dev_log@ts Buffering: S13.d06 [me 0xfffffe3fcf1b2820  kdb_backtrace() at kdb_backtrace+0x39/frame 0xfffffe3fcf1b28d0  panic() at panic+0x155/frame 0xfffffe3fcf1b2950  ]
Jan 21 23:15:51 ts-prv src_dev_log@ts Buffering: S13.d06 [vm_radix_insert() at vm_radix_insert+0x2ed/frame 0xfffffe3fcf1b29b0  vm_page_cache() at vm_page_cache+0x121/frame 0xfffffe3fcf1b29f0  ]
Jan 21 23:15:51 ts-prv src_dev_log@ts Buffering: S13.d06 [vm_pageout() at vm_pageout+0x8f7/frame 0xfffffe3fcf1b2a70  fork_exit() at fork_exit+0x9a/frame 0xfffffe3fcf1b2ab0  ]
Jan 21 23:15:51 ts-prv src_dev_log@ts Buffering: S13.d06 [fork_trampoline() at fork_trampoline+0xe/frame 0xfffffe3fcf1b2ab0  --- trap 0, rip = 0, rsp = 0xfffffe3fcf1b2b70, rbp = 0 ---  ]


Jan 29 11:24:12 ts-prv src_dev_log@ts Buffering: S13.d06 [[2-1] FATAL:  remaining connection slots are reserved for non-replication superuser connections  panic: vm_radix_insert: key 46f is already present  cpuid = 0  ]
Jan 29 11:24:12 ts-prv src_dev_log@ts Buffering: S13.d06 [KDB: stack backtrace:  db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame 0xfffffe3fcf1b2820  ]
Jan 29 11:24:12 ts-prv src_dev_log@ts Buffering: S13.d06 [kdb_backtrace() at kdb_backtrace+0x39/frame 0xfffffe3fcf1b28d0  panic() at panic+0x155/frame 0xfffffe3fcf1b2950  ]
Jan 29 11:24:12 ts-prv src_dev_log@ts Buffering: S13.d06 [vm_radix_insert() at vm_radix_insert+0x2ed/frame 0xfffffe3fcf1b29b0  vm_page_cache() at vm_page_cache+0x121/frame 0xfffffe3fcf1b29f0  ]
Jan 29 11:24:12 ts-prv src_dev_log@ts Buffering: S13.d06 [vm_pageout() at vm_pageout+0x8f7/frame 0xfffffe3fcf1b2a70  fork_exit() at fork_exit+0x9a/frame 0xfffffe3fcf1b2ab0  ]
Jan 29 11:24:12 ts-prv src_dev_log@ts Buffering: S13.d06 [fork_trampoline() at fork_trampoline+0xe/frame 0xfffffe3fcf1b2ab0  --- trap 0, rip = 0, rsp = 0xfffffe3fcf1b2b70, rbp = 0 ---  ]


On this last one, it appears that Postgres had reached its connection limit at the time of the panic.
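
For what it's worth, whether the limit was really hit can be checked with stock psql; a quick sketch (run as a database superuser):

psql -U postgres -c "SHOW max_connections;"                    # connection ceiling
psql -U postgres -c "SHOW superuser_reserved_connections;"     # slots held back for superusers
psql -U postgres -c "SELECT count(*) FROM pg_stat_activity;"   # connections open right now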
Comment 1 Vick Khera 2015-05-27 15:17:05 UTC
Since the original bug report, I have upgraded both Postgres (to 9.4) and FreeBSD (to 10.1). There have been no similar panics since these upgrades.

However, given that the prior software configuration ran non-stop for over a year before the first panic, I cannot rule out another panic like it.
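
If it does recur with crash dumps enabled, a proper backtrace could be pulled from the dump rather than from the serial console. A rough sketch, assuming savecore(8) has written vmcore.0 to /var/crash:

crashinfo                                      # summarize the latest dump into /var/crash/core.txt.N
kgdb /boot/kernel/kernel /var/crash/vmcore.0   # interactive inspection; type 'bt' at the (kgdb) prompt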
Comment 2 Andriy Gapon 2019-05-08 13:54:24 UTC
No activity for a long time.
The problem has either been fixed or needs a new report.