The combination of bsnmpd + devd + an empty CD-ROM drive = 100% CPU load. The service devd restart command helps to work around the problem.

FreeBSD 11.0-STABLE #0 r314635: Sat Mar 4 02:12:23 EET 2017

The problem has been reproducible since about May 2016, starting on FreeBSD 10.3-STABLE.

# top -bPS -n 10
last pid:  2660;  load averages:  1.48,  0.93,  0.44  up 0+00:05:21  19:41:16
70 processes:  5 running, 64 sleeping, 1 waiting

Mem: 622M Active, 13M Inact, 183M Wired, 1153M Free
ARC: 88M Total, 30M MFU, 55M MRU, 16K Anon, 491K Header, 2526K Other
Swap: 6144M Total, 6144M Free

  PID USERNAME  THR PRI NICE   SIZE    RES STATE   TIME    WCPU COMMAND
 2623 root        1  52    0 94176K 43176K RUN     1:40  56.49% bsnmpd
   14 root        3  -8    -     0K    48K -       0:52  29.69% geom
   12 root       25 -64    -     0K   400K WAIT    0:26  11.38% intr
 1038 root        1  22    0  9532K  4800K select  0:08   3.76% devd
  891 root        1  21    0 10480K  1892K select  0:04   1.95% syslogd
   11 root        1 155 ki31     0K    16K RUN     1:46   0.00% idle
 1553 mysql      24  52    0   763M   525M select  0:02   0.00% mysqld
    0 root      274 -16    -     0K  4384K swapin  0:02   0.00% kernel
    4 root        2 -16    -     0K    32K -       0:01   0.00% cam
   31 root        1 -16    -     0K    16K -       0:00   0.00% racctd

# egrep -v '^#|^$' /etc/snmpd.config
location := "Room 200"
contact := "sysmeister@example.com"
system := 1 # FreeBSD
read := "lkagh;rhstghhjd"
NoAuthProtocol := 1.3.6.1.6.3.10.1.1.1
HMACMD5AuthProtocol := 1.3.6.1.6.3.10.1.1.2
HMACSHAAuthProtocol := 1.3.6.1.6.3.10.1.1.3
NoPrivProtocol := 1.3.6.1.6.3.10.1.2.1
DESPrivProtocol := 1.3.6.1.6.3.10.1.2.2
AesCfb128Protocol := 1.3.6.1.6.3.10.1.2.4
securityModelAny := 0
securityModelSNMPv1 := 1
securityModelSNMPv2c := 2
securityModelUSM := 3
MPmodelSNMPv1 := 0
MPmodelSNMPv2c := 1
MPmodelSNMPv3 := 3
noAuthNoPriv := 1
authNoPriv := 2
authPriv := 3
%snmpd
begemotSnmpdDebugDumpPdus = 2
begemotSnmpdDebugSyslogPri = 7
begemotSnmpdCommunityString.0.1 = $(read)
begemotSnmpdCommunityDisable = 1
begemotSnmpdPortStatus.0.0.0.0.161 = 1
begemotSnmpdLocalPortStatus."/var/run/snmpd.sock" = 1
begemotSnmpdLocalPortType."/var/run/snmpd.sock" = 4
sysContact = $(contact)
sysLocation = $(location)
sysObjectId = 1.3.6.1.4.1.12325.1.1.2.1.$(system)
snmpEnableAuthenTraps = 2
begemotSnmpdModulePath."mibII" = "/usr/lib/snmp_mibII.so"
begemotSnmpdModulePath."bridge" = "/usr/lib/snmp_bridge.so"
begemotSnmpdModulePath."hostres" = "/usr/lib/snmp_hostres.so"
begemotSnmpdModulePath."lm75" = "/usr/lib/snmp_lm75.so"
begemotSnmpdModulePath."netgraph" = "/usr/lib/snmp_netgraph.so"
begemotSnmpdModulePath."pf" = "/usr/lib/snmp_pf.so"
%mibII
begemotIfForcePoll = 2000
%netgraph
begemotNgControlNodeName = "snmpd"
begemotSnmpdModulePath."ucd" = "/usr/local/lib/snmp_ucd.so"
%ucd
updateInterval = 500
extCheckInterval = 100
extUpdateInterval = 3000
extTimeout = 60
memMinimumSwap = 1600
memSwapErrorMsg = "No free swap!"
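For reference, the workaround mentioned above is just a devd restart once the load spikes (a sketch; whether it has to be repeated after every occurrence is implied by the report, not confirmed here):

# service devd restart
# top -bPS -n 3 | head       <- check that bsnmpd has dropped off the top

An alternative, untested stopgap, assuming it is the hostres module's storage polling that keeps opening the empty /dev/cd0, would be to comment out that module in /etc/snmpd.config and restart the daemon (this loses the HOST-RESOURCES-MIB data, of course):

# begemotSnmpdModulePath."hostres" = "/usr/lib/snmp_hostres.so"
# service bsnmpd restart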
When this reproduces, can you run 'procstat -kka | egrep "bsnmpd|devd"' and paste the output in this bug? Possibly run it a couple times and paste each output.
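If it helps, a plain sh loop (nothing bsnmpd-specific, just a convenience for collecting several samples in one go) would be:

# for i in 1 2 3 4 5; do procstat -kka | egrep "bsnmpd|devd"; echo ---; sleep 2; done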
(In reply to Conrad Meyer from comment #1) Ok.
(In reply to Conrad Meyer from comment #1)

# procstat -kka | egrep "bsnmpd|devd"
 1047 100600 devd    - mi_switch+0xd2 sleepq_catch_signals+0xb7 sleepq_timedwait_sig+0x14 _cv_timedwait_sig_sbt+0x1d3 seltdwait+0xc3 kern_select+0x89f sys_select+0x54 amd64_syscall+0x50e Xfast_syscall+0xfb
 2632 100566 bsnmpd  - mi_switch+0xd2 sleepq_timedwait+0x42 _sleep+0x27b g_waitfor_event+0xf3 sysctl_kern_geom_confxml+0x39 sysctl_root_handler_locked+0xbf sysctl_root+0x1f8 userland_sysctl+0x1d0 sys___sysctl+0x5f amd64_syscall+0x50e Xfast_syscall+0xfb

# procstat -kka | egrep "bsnmpd|devd"
 1047 100600 devd    - mi_switch+0xd2 sleepq_catch_signals+0xb7 sleepq_timedwait_sig+0x14 _cv_timedwait_sig_sbt+0x1d3 seltdwait+0xc3 kern_select+0x89f sys_select+0x54 amd64_syscall+0x50e Xfast_syscall+0xfb
 2632 100566 bsnmpd  - mi_switch+0xd2 sleepq_wait+0x3a _sleep+0x29b cam_periph_runccb+0xcd cdprevent+0xa6 cdcheckmedia+0x22 cdopen+0x226 g_disk_access+0xc6 g_access+0x194 g_dev_open+0x116 devfs_open+0x11f VOP_OPEN_APV+0x84 vn_open_vnode+0x209 vn_open_cred+0x2f9 kern_openat+0x1f4 amd64_syscall+0x50e Xfast_syscall+0xfb

# procstat -kka | egrep "bsnmpd|devd"
 1047 100600 devd    - mi_switch+0xd2 sleepq_catch_signals+0xb7 sleepq_timedwait_sig+0x14 _cv_timedwait_sig_sbt+0x1d3 seltdwait+0xc3 kern_select+0x89f sys_select+0x54 amd64_syscall+0x50e Xfast_syscall+0xfb
 2632 100566 bsnmpd  - mi_switch+0xd2 sleepq_wait+0x3a _sleep+0x29b cam_periph_runccb+0xcd cdcheckmedia+0xe3 cdopen+0x226 g_disk_access+0xc6 g_access+0x194 g_dev_open+0x116 devfs_open+0x11f VOP_OPEN_APV+0x84 vn_open_vnode+0x209 vn_open_cred+0x2f9 kern_openat+0x1f4 amd64_syscall+0x50e Xfast_syscall+0xfb

# procstat -kka | egrep "bsnmpd|devd"
 1047 100600 devd    - mi_switch+0xd2 sleepq_catch_signals+0xb7 sleepq_timedwait_sig+0x14 _cv_timedwait_sig_sbt+0x1d3 seltdwait+0xc3 kern_select+0x89f sys_select+0x54 amd64_syscall+0x50e Xfast_syscall+0xfb
 2632 100566 bsnmpd  - mi_switch+0xd2 sleepq_wait+0x3a _sleep+0x29b cam_periph_runccb+0xcd cdprevent+0xa6 cdcheckmedia+0x19a cdopen+0x226 g_disk_access+0xc6 g_access+0x194 g_dev_open+0x116 devfs_open+0x11f VOP_OPEN_APV+0x84 vn_open_vnode+0x209 vn_open_cred+0x2f9 kern_openat+0x1f4 amd64_syscall+0x50e Xfast_syscall+0xfb

# procstat -kka | egrep "bsnmpd|devd"
 1047 100600 devd    - mi_switch+0xd2 sleepq_catch_signals+0xb7 sleepq_timedwait_sig+0x14 _cv_timedwait_sig_sbt+0x1d3 seltdwait+0xc3 kern_select+0x89f sys_select+0x54 amd64_syscall+0x50e Xfast_syscall+0xfb
 2632 100566 bsnmpd  - mi_switch+0xd2 sleepq_wait+0x3a _sleep+0x29b cam_periph_runccb+0xcd cdprevent+0xa6 cdcheckmedia+0x22 cdopen+0x226 g_disk_access+0xc6 g_access+0x194 g_dev_open+0x116 devfs_open+0x11f VOP_OPEN_APV+0x84 vn_open_vnode+0x209 vn_open_cred+0x2f9 kern_openat+0x1f4 amd64_syscall+0x50e Xfast_syscall+0xfb
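Reading the stacks: devd is idle in select(2) every time, while bsnmpd is repeatedly blocked under kern_openat -> cdopen -> cdcheckmedia/cdprevent, i.e. it keeps re-opening /dev/cd0 and probing the empty drive. The drive state can be confirmed independently with camcontrol (standard subcommands; the sense output below is my guess for an empty tray, not captured from this box):

# camcontrol devlist
# camcontrol tur cd0 -v
(expect something like: NOT READY asc:3a,0 (Medium not present))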
Thanks!
I am getting this on vultr.com and netcup.de. It seems to happen only on "QEMU"; in AWS or GCE I don't see the error:

Processing event '!system=CAM subsystem=periph type=error device=cd0 serial="QM00003" cam_status="0xcc" scsi_status=2 scsi_sense="70 02 3a 00" CDB="00 00 00 00 00 00 "

Just in case, I am using this kernel: https://github.com/fabrik-red/images/blob/master/fabrik.kernel
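The quoted line shows devd being fed CAM periph errors for cd0. One untested mitigation sketch, assuming the event flood itself is what hurts on these QEMU hosts, is a devd.conf(5) rule that matches the event and swallows it before the default handlers run (the priority value 100 is arbitrary):

notify 100 {
        match "system"          "CAM";
        match "subsystem"       "periph";
        match "type"            "error";
        match "device"          "cd0";
        action "true";
};

Dropping that into /usr/local/etc/devd/ and running service devd restart should pick it up; it only hides the symptom, though, not the bsnmpd open loop.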
This should be fixed by now in stable branches. *** This bug has been marked as a duplicate of bug 215471 ***
Isn't the bug actually addressed in PR 209368? The currently linked bug is also a duplicate of PR 209368 (but it is still open for some reason; maybe it can be closed).
(In reply to Conrad Meyer from comment #7) Yes, all three of them seem to describe the same problem.
Cool. *** This bug has been marked as a duplicate of bug 209368 ***