Bug 293470 - bhyve: VMs with NUMA configuration fail to start with certain devices
Summary: bhyve: VMs with NUMA configuration fail to start with certain devices
Status: Closed (Works As Intended)
Alias: None
Product: Base System
Classification: Unclassified
Component: bhyve
Version: CURRENT
Hardware: Any Any
Importance: --- Affects Only Me
Assignee: freebsd-virtualization (Nobody)
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2026-02-26 19:49 UTC by Roman Bogorodskiy
Modified: 2026-02-28 11:54 UTC (History)
3 users

See Also:


Attachments

Description Roman Bogorodskiy 2026-02-26 19:49:46 UTC
Initial test:

bhyve -c 8 -n id=0,size=2048,cpus=0-3 -n id=1,size=2048,cpus=4-7 -m 4096 \
       -u -H -P \
       -s 0:0,hostbridge \
       -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
       -s 1:0,lpc -s 4:0,e1000,slirp,open \
       -s 5:0,virtio-blk,/data/img/fedora.img \
       -s 6:0,fbuf,tcp=127.0.0.1:5944 \
       -l com1,stdio fedora

This boots fine.

For reference, `numactl --hardware` shows the following:

available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3
node 0 size: 2013 MB
node 0 free: 1093 MB
node 1 cpus: 4 5 6 7
node 1 size: 1890 MB
node 1 free: 1035 MB
node distances:
node     0    1 
   0:   10   20 
   1:   20   10 

Configurations that do not boot:

1. Replacing e1000 with virtio-net.

bhyve prints:
fbuf frame buffer base: 0x1728d7200000 [sz 33554432]

I can connect with VNC, but only see a black screen.
When running `bhyvectl --destroy` on it, it prints:

vm_run error -1, errno 6
vm_run error -1, errno 6
vm_run error -1, errno 6
vm_run error -1, errno 6
vm_run error -1, errno 6
vm_run error -1, errno 6
vm_run error -1, errno 6

2. Same command, keeping e1000, but adding "-s 2:0,xhci,tablet" leads to the same result.

3. Same command, keeping e1000, but adding "-s 2:0,virtio-rnd" leads to the same result.
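
For reference, case 1 would correspond to an invocation like the one below (a sketch only: it assumes the slirp backend options carry over unchanged when swapping the device model, as in the original command):

```shell
bhyve -c 8 -n id=0,size=2048,cpus=0-3 -n id=1,size=2048,cpus=4-7 -m 4096 \
       -u -H -P \
       -s 0:0,hostbridge \
       -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
       -s 1:0,lpc -s 4:0,virtio-net,slirp,open \
       -s 5:0,virtio-blk,/data/img/fedora.img \
       -s 6:0,fbuf,tcp=127.0.0.1:5944 \
       -l com1,stdio fedora
```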

I'm running -CURRENT as of the end of January. I hadn't used this feature before, so I don't know whether it previously worked with these devices.
Comment 1 Bojan Novković 2026-02-26 20:07:20 UTC
(In reply to Roman Bogorodskiy from comment #0)

Do you have a link to the fedora image you are using?

I tried booting a -CURRENT VM and a Debian 12 VM with all three cases you provided and I couldn't replicate the hangs you are seeing.
I'm on -CURRENT from a similar period ('080d8ed7dd29').
Comment 2 Roman Bogorodskiy 2026-02-26 20:15:49 UTC
(In reply to Bojan Novković from comment #1)

> I'm on -CURRENT from a similar period ('080d8ed7dd29').

Oh yeah, I'm actually still running your hotplug branch :)

I can reproduce the issue with the unmodified FreeBSD image also:

https://download.freebsd.org/releases/VM-IMAGES/15.0-RELEASE/amd64/Latest/FreeBSD-15.0-RELEASE-amd64-ufs.raw.xz

P.S. I tried similar scenarios on 15.0-RELEASE-p2 and wasn't able to reproduce the issue. That was only a quick test, though: it uses a different topology (bhyve -c 4 -m 4096 -n id=0,size=2048,cpus=0-1) since I have fewer CPUs there, as well as a different image and different PCI addresses. I'll do a proper test later where only the host differs.
Comment 3 Roman Bogorodskiy 2026-02-28 11:54:57 UTC
(In reply to Bojan Novković from comment #1)

I've updated to a slightly fresher -CURRENT (20285cad7a55ecd0020f51f3daee74db8b1ea5a0; maybe just a couple of days newer than the original branch; I cannot run newer versions because of another, unrelated issue), and it works for me.

So it looks like a possible regression in the hotplug branch, not in mainline -CURRENT. Closing this PR; sorry for the false alarm.