Trying to run influxdb within a jail results in an immediate crash of the Go runtime:

[...]
Aug 16 14:00:51 graphite-kip influxd[86507]: runtime: kevent failed with 78
Aug 16 14:00:51 graphite-kip influxd[86507]: fatal error: runtime: kevent failed
Aug 16 14:00:51 graphite-kip influxd[86507]:
Aug 16 14:00:51 graphite-kip influxd[86507]: goroutine 1 [running, locked to thread]:
Aug 16 14:00:51 graphite-kip influxd[86507]: runtime.throw({0x1451584?, 0xc0003debb0?})
Aug 16 14:00:51 graphite-kip influxd[86507]:         runtime/panic.go:1047 +0x5d fp=0xc0003deb58 sp=0xc0003deb28 pc=0x438dfd
Aug 16 14:00:51 graphite-kip influxd[86507]: runtime.netpollinit()
[...]

The host running the jails is a recent XigmaNAS 13.1.0.5 (BETA, but its base is FreeBSD 13.1-RELEASE); the jail itself is based on 13.1-RELEASE. I saw exactly the same crash with a FreeBSD 12.3 jail on XigmaNAS 12.3. The installed packages come either from FreeBSD's "latest" repository (12.3-RELEASE and 13.1-RELEASE) or from a home-brewed poudriere repository. Since influxdb does run on a 14-CURRENT machine (not jailed) with software built the traditional way via "make", I assume there is an issue with the jail environment.
Anything new?
By the way, I also checked influxdb on a real bare-metal host running FreeBSD 13.1-RELENG-p1 with a minimal configuration (only http and bind enabled), and it does not work there either.
Host: FreeBSD 13.1-RELEASE-p1 releng/13.1-n250155-514a191356c1 amd64

[...]

:root $ influx
runtime: kevent failed with 78
fatal error: runtime: kevent failed

goroutine 1 [running, locked to thread]:
runtime.throw({0xfbdc4c?, 0xc0003dece0?})
        runtime/panic.go:1047 +0x5d fp=0xc0003dec88 sp=0xc0003dec58 pc=0x437ddd
runtime.netpollinit()
        runtime/netpoll_kqueue.go:44 +0x165 fp=0xc0003ded10 sp=0xc0003dec88 pc=0x434265
runtime.netpollGenericInit()
        runtime/netpoll.go:196 +0x3a fp=0xc0003ded28 sp=0xc0003ded10 pc=0x43387a
runtime.doaddtimer(0xc000052a00, 0xc0001ba9b8)
        runtime/time.go:293 +0x30 fp=0xc0003ded80 sp=0xc0003ded28 pc=0x4568d0
runtime.addtimer(0xc0001ba9b8)
        runtime/time.go:279 +0xae fp=0xc0003dedc8 sp=0xc0003ded80 pc=0x4567ee
time.startTimer(0xdf8475800?)
        runtime/time.go:215 +0x19 fp=0xc0003dede0 sp=0xc0003dedc8 pc=0x465259
time.AfterFunc(0x712b93?, 0x1c9fd38)
        time/sleep.go:171 +0x88 fp=0xc0003dee10 sp=0xc0003dede0 pc=0x4c4428
crypto/rand.(*reader).Read(0xc0001b80a0, {0xc000193778, 0x8, 0x8})
        crypto/rand/rand_unix.go:53 +0xb1 fp=0xc0003deeb0 sp=0xc0003dee10 pc=0x583e91
io.ReadAtLeast({0x1dc3b40, 0xc0001b80a0}, {0xc000193778, 0x8, 0x8}, 0x8)
        io/io.go:332 +0x9a fp=0xc0003deef8 sp=0xc0003deeb0 pc=0x4aebfa
io.ReadFull(...)
        io/io.go:351
encoding/binary.Read({0x1dc3b40, 0xc0001b80a0}, {0x1ddaa20, 0x2921708}, {0xdcd060?, 0xc000193770})
        encoding/binary/binary.go:233 +0xc8 fp=0xc0003df078 sp=0xc0003deef8 pc=0x506ac8
go.opencensus.io/trace.init.1()
        go.opencensus.io@v0.22.2/trace/trace.go:543 +0x11b fp=0xc0003df140 sp=0xc0003df078 pc=0xaa5b9b
runtime.doInit(0x2470fc0)
        runtime/proc.go:6321 +0x118 fp=0xc0003df270 sp=0xc0003df140 pc=0x447678
runtime.doInit(0x2472a80)
        runtime/proc.go:6298 +0x71 fp=0xc0003df3a0 sp=0xc0003df270 pc=0x4475d1
runtime.doInit(0x2471b80)
        runtime/proc.go:6298 +0x71 fp=0xc0003df4d0 sp=0xc0003df3a0 pc=0x4475d1
runtime.doInit(0x246e940)
        runtime/proc.go:6298 +0x71 fp=0xc0003df600 sp=0xc0003df4d0 pc=0x4475d1
runtime.doInit(0x2472ee0)
        runtime/proc.go:6298 +0x71 fp=0xc0003df730 sp=0xc0003df600 pc=0x4475d1
runtime.doInit(0x24701c0)
        runtime/proc.go:6298 +0x71 fp=0xc0003df860 sp=0xc0003df730 pc=0x4475d1
runtime.doInit(0x2478360)
        runtime/proc.go:6298 +0x71 fp=0xc0003df990 sp=0xc0003df860 pc=0x4475d1
runtime.doInit(0x2471f40)
        runtime/proc.go:6298 +0x71 fp=0xc0003dfac0 sp=0xc0003df990 pc=0x4475d1
runtime.doInit(0x247a0a0)
        runtime/proc.go:6298 +0x71 fp=0xc0003dfbf0 sp=0xc0003dfac0 pc=0x4475d1
runtime.doInit(0x246ab20)
        runtime/proc.go:6298 +0x71 fp=0xc0003dfd20 sp=0xc0003dfbf0 pc=0x4475d1
runtime.doInit(0x2477dc0)
        runtime/proc.go:6298 +0x71 fp=0xc0003dfe50 sp=0xc0003dfd20 pc=0x4475d1
runtime.doInit(0x246de00)
        runtime/proc.go:6298 +0x71 fp=0xc0003dff80 sp=0xc0003dfe50 pc=0x4475d1
runtime.main()
        runtime/proc.go:233 +0x1b9 fp=0xc0003dffe0 sp=0xc0003dff80 pc=0x43a599
runtime.goexit()
        runtime/asm_amd64.s:1594 +0x1 fp=0xc0003dffe8 sp=0xc0003dffe0 pc=0x468621

goroutine 2 [force gc (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
        runtime/proc.go:363 +0xd6 fp=0xc00006efb0 sp=0xc00006ef90 pc=0x43a996
runtime.goparkunlock(...)
        runtime/proc.go:369
runtime.forcegchelper()
        runtime/proc.go:302 +0xa5 fp=0xc00006efe0 sp=0xc00006efb0 pc=0x43a825
runtime.goexit()
        runtime/asm_amd64.s:1594 +0x1 fp=0xc00006efe8 sp=0xc00006efe0 pc=0x468621
created by runtime.init.6
        runtime/proc.go:290 +0x25
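Note that the trace shows the failure happening during package initialization, long before any influxdb code runs: crypto/rand arms a timer via time.AfterFunc, which makes the Go runtime initialize its kqueue-based network poller, and that kevent call fails with 78. As a sanity check I used a minimal sketch of my own (not part of influxdb) that should exercise the same path on the Go version shown in the trace; on an affected kernel I would expect it to die with the same "runtime: kevent failed with 78" before printing anything:

// repro.go - arming a timer forces the Go runtime to initialize its
// network poller, which on FreeBSD is backed by kqueue/kevent
// (time.AfterFunc -> runtime.addtimer -> netpoll init, as in the trace above).
package main

import (
	"fmt"
	"time"
)

func main() {
	done := make(chan struct{})
	// The first timer triggers netpoller initialization in this Go version.
	time.AfterFunc(10*time.Millisecond, func() { close(done) })
	<-done
	fmt.Println("netpoller initialized without error")
}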
Hello?
(In reply to O. Hartmann from comment #4) Are you running a kernel without COMPAT_FREEBSD11?
Yes.
But I can only speak for the hosts on which I control the kernel builds; I cannot speak for XigmaNAS.
(In reply to Mikael Urankar from comment #5) I checked the kernels of all boxes I have access to and restored FreeBSD 11 compatibility (COMPAT_FREEBSD11). On the hosts in question, the influxd daemon now starts without problems. Thank you very much. That leaves the XigmaNAS boxes: either I'm mistaken and picked the wrong box, or, if dropping COMPAT_FREEBSD11 is intentional on their side, I'll take it up with XigmaNAS upstream. Anyway, thank you very much. Regards, oh
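In case it helps the next person hitting this: errno 78 on FreeBSD is ENOSYS, so the old kevent syscall variant the Go runtime still uses is simply not present in a kernel built without COMPAT_FREEBSD11. Roughly what I did on my self-built kernels (MYKERNEL is only a placeholder name for a custom kernel config):

# in /usr/src/sys/amd64/conf/MYKERNEL: make sure this option is present
# (GENERIC carries it by default) and that no "nooptions" line strips it out
options         COMPAT_FREEBSD11

# then rebuild, install and reboot
cd /usr/src
make buildkernel installkernel KERNCONF=MYKERNEL
shutdown -r now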
(In reply to O. Hartmann from comment #8) FYI, there is an open change upstream to remove the need for COMPAT_FREEBSD11; it may land in Go 1.20: https://go-review.googlesource.com/c/go/+/413174