Bug 290997 - [vmm]: Dedicated GPU - PCI passthrough not supported yet? (Variable size IVHD type 0xf0 not supported)
Summary: [vmm]: Dedicated GPU - PCI passthrough not supported yet? (Variable size IVHD...
Status: Closed Not A Bug
Alias: None
Product: Base System
Classification: Unclassified
Component: bhyve
Version: 15.0-STABLE
Hardware: amd64 Any
: --- Affects Only Me
Assignee: freebsd-virtualization (Nobody)
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2025-11-13 13:20 UTC by Nils Beyer
Modified: 2025-12-01 11:34 UTC
6 users

See Also:


Attachments

Description Nils Beyer 2025-11-13 13:20:09 UTC
Hi,

trying to pass through an Intel GPU card on an AMD Ryzen 8700G system. VMM loads so far, but "vm-bhyve" complains that "pci passthrough not supported on this system (no VT-d or amdvi)".

And it seems to be right - looking at the dmesg:

----------------------------- SNIP -----------------------------
ppt0 mem 0xf4000000-0xf4ffffff,0xf800000000-0xfbffffffff at device 0.0 on pci3
amdviiommu0 at device 0.2 on pci0
AMD-Vi: IVRS Info VAsize = 64 PAsize = 48 GVAsize = 2 flags:0
ivhd0: <AMD-Vi/IOMMU ivhd in mixed format> on acpi0
ivhd0: Unknown dev entry:0xf0
Variable size IVHD type 0xf0 not supported
ivhd0: Flag:30<IotlbSup,Coherent>
ivhd0: Features(type:0x40) MsiNumPPR = 0 PNBanks= 2 PNCounters= 4
ivhd0: Extended features[31:0]:a2254afa<PPRSup,NXSup,GTSup,<b5>,IASup,GASup,PCSup> HATS = 0x2 GATS = 0x0 GLXSup = 0x1 SmiFSup = 0x1 SmiFRC = 0x1 GAMSup = 0x1 DualPortLogSup = 0x2 DualEventLogSup = 0x2
ivhd0: Extended features[62:32]:246577ef<USSup,PprOvrflwEarlySup,PPRAutoRspSup,BlKStopMrkSup,PerfOptSup,MsiCapMmioSup,GIOSup,EPHSup,InvIotlbSup> Max PASID: 0x2f DevTblSegSup = 0x3 MarcSup = 0x1
ivhd0: supported paging level:7, will use only: 4
ivhd0: device [0x3 - 0xfffe] config:0
ivhd0: device [0xff00 - 0xffff] config:0
ivhd0: PCI cap 0x190b640f@0x40 feature:19<IOTLB,EFR,CapExt>
----------------------------- SNIP -----------------------------
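As background on the "Variable size IVHD type 0xf0 not supported" line: the IVRS device table mixes fixed-size and variable-size entries, and type 0xf0 (the ACPI HID device entry) carries a UID-length byte, so a parser that only understands fixed-size entries has to bail out. The sketch below is illustrative only, with entry-size classes and field offsets taken from my reading of the AMD I/O Virtualization Technology (IOMMU) specification, not from the FreeBSD source:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Illustrative IVHD device-table entry sizing (assumed layout, per the
 * AMD IOMMU spec): types below 0x40 are 4-byte entries, 0x40-0x7f are
 * 8-byte entries, and types at or above 0x80 are variable length.
 * Type 0xf0 (ACPI HID device entry) is 22 header bytes -- type(1),
 * devid(2), data(1), HID(8), CID(8), UID format(1), UID length(1) --
 * followed by a variable-length UID.
 */
static size_t
ivhd_entry_len(const uint8_t *entry)
{
	uint8_t type = entry[0];

	if (type < 0x40)
		return (4);			/* fixed 4-byte entry */
	if (type < 0x80)
		return (8);			/* fixed 8-byte entry */
	if (type == 0xf0)			/* ACPI HID: variable size */
		return (22 + (size_t)entry[21]);
	return (0);				/* unhandled entry type */
}
```

A parser walking the table with a hard-coded 4- or 8-byte stride cannot skip past an unexpected 0xf0 entry, which is essentially what the dmesg line reports.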


sysctl hw.vmm:
----------------------------- SNIP -----------------------------
#sysctl hw.vmm
hw.vmm.amdvi.domain_id: 0
hw.vmm.amdvi.disable_io_fault: 0
hw.vmm.amdvi.ptp_level: 4
hw.vmm.amdvi.host_ptp: 1
hw.vmm.amdvi.enable: 0
hw.vmm.amdvi.count: 1
hw.vmm.npt.pmap_flags: 508
hw.vmm.svm.num_asids: 32768
hw.vmm.svm.disable_npf_assist: 0
hw.vmm.svm.features: 515849471
hw.vmm.svm.vmcb_clean: 1023
hw.vmm.vmx.l1d_flush_sw: 0
hw.vmm.vmx.l1d_flush: 0
hw.vmm.vmx.vpid_alloc_failed: 0
hw.vmm.vmx.posted_interrupt_vector: -1
hw.vmm.vmx.cap.posted_interrupts: 0
hw.vmm.vmx.cap.virtual_interrupt_delivery: 0
hw.vmm.vmx.cap.tpr_shadowing: 0
hw.vmm.vmx.cap.invpcid: 0
hw.vmm.vmx.cap.monitor_trap: 0
hw.vmm.vmx.cap.unrestricted_guest: 0
hw.vmm.vmx.cap.rdtscp: 0
hw.vmm.vmx.cap.rdpid: 0
hw.vmm.vmx.cap.wbinvd_exit: 0
hw.vmm.vmx.cap.pause_exit: 0
hw.vmm.vmx.cap.halt_exit: 0
hw.vmm.vmx.initialized: 0
hw.vmm.vmx.cr4_zeros_mask: 0
hw.vmm.vmx.cr4_ones_mask: 0
hw.vmm.vmx.cr0_zeros_mask: 0
hw.vmm.vmx.cr0_ones_mask: 0
hw.vmm.vmx.no_flush_rsb: 0
hw.vmm.ept.pmap_flags: 0
hw.vmm.vrtc.flag_broken_time: 1
hw.vmm.ppt.devices: 1
hw.vmm.iommu.enable: 1
hw.vmm.iommu.initialized: 0
hw.vmm.bhyve_xcpuids: 9
hw.vmm.topology.cpuid_leaf_b: 1
hw.vmm.create: 
hw.vmm.destroy: 
hw.vmm.maxcpu: 16
hw.vmm.trap_wbinvd: 0
hw.vmm.trace_guest_exceptions: 0
hw.vmm.ipinum: 252
hw.vmm.halt_detection: 1
----------------------------- SNIP -----------------------------

Is that a known problem?



Thanks and regards,
Nils
Comment 1 Bjoern A. Zeeb freebsd_committer freebsd_triage 2025-11-13 23:44:07 UTC
I have this in my loader.conf:

# Turn on passthru support on AMD;
# see bottom of https://wiki.freebsd.org/bhyve/pci_passthru
hw.vmm.amdvi.enable=1

Does this help?

If you set it manually, you can reload vmm.ko to test, according to the wiki page.
Comment 2 Nils Beyer 2025-11-14 13:59:07 UTC
(In reply to Bjoern A. Zeeb from comment #1)

I've set "hw.vmm.amdvi.enable=1" in "/boot/loader.conf" and rebooted - didn't try the kenv variant just to be sure.

Now I get a "signal 4" abort:
------------------------------------ SNIP ------------------------------------
Nov 14 14:02:50: initialising
Nov 14 14:02:50:  [loader: grub]
Nov 14 14:02:50:  [cpu: 4]
Nov 14 14:02:50:  [memory: 8G]
Nov 14 14:02:50:  [hostbridge: standard]
Nov 14 14:02:50:  [com ports: com1]
Nov 14 14:02:50:  [uuid: ca7da59b-c08b-11f0-9b20-a0369fc3af60]
Nov 14 14:02:50:  [debug mode: yes]
Nov 14 14:02:50:  [primary disk: disk0.img]
Nov 14 14:02:50:  [primary disk dev: file]
Nov 14 14:02:50: initialising network device tap0
Nov 14 14:02:50: adding tap0 -> vm-public (public addm)
Nov 14 14:02:50: bring up tap0 -> vm-public (public addm)
Nov 14 14:02:50: booting
Nov 14 14:02:50: create file /mnt/vms/llm-test/device.map
Nov 14 14:02:50:  -> (cd0) /mnt/vms/.iso/debian-13.1.0-amd64-netinst.iso
Nov 14 14:02:50:  -> (hd0) /mnt/vms/llm-test/disk0.img
Nov 14 14:02:50: /usr/local/sbin/grub-bhyve -c /dev/nmdm-llm-test.1A -S -m /mnt/vms/llm-test/device.map -M 8G -r cd0 llm-test
Nov 14 14:02:51:  [bhyve options: -c 4 -m 8G -AHPw -S -U ca7da59b-c08b-11f0-9b20-a0369fc3af60 -u -S]
Nov 14 14:02:51:  [bhyve devices: -s 0,hostbridge -s 31,lpc -s 4:0,ahci-hd,/mnt/vms/llm-test/disk0.img -s 5:0,virtio-net,tap0,mac=58:9c:fc:0a:14:b6 -s 6:0,passthru,3/0/0]
Nov 14 14:02:51:  [bhyve console: -l com1,/dev/nmdm-llm-test.1A]
Nov 14 14:02:51:  [bhyve iso device: -s 3:0,ahci-cd,/mnt/vms/.iso/debian-13.1.0-amd64-netinst.iso,ro]
Nov 14 14:02:51: starting bhyve (run 1)
Nov 14 14:02:51: bhyve exited with status 4
Nov 14 14:02:51: destroying network device tap0
Nov 14 14:02:51: stopped
------------------------------------ SNIP ------------------------------------

Without the "passthru" option, the VM starts fine.

Do you need a coredump? How can I create one that is helpful - any special sysctls?
Comment 3 Mark Johnston freebsd_committer freebsd_triage 2025-11-14 17:05:52 UTC
(In reply to Nils Beyer from comment #2)
Is it possible to see the error message that bhyve is presumably printing before it exits?  Does anything appear in the dmesg after the error occurs?
Comment 4 Nils Beyer 2025-11-18 14:03:42 UTC
(In reply to Mark Johnston from comment #3)

dmesg shows nothing extraordinary:
----------------------------- SNIP -----------------------------
AMD-Vi: IVRS Info VAsize = 64 PAsize = 48 GVAsize = 2 flags:0
ivhd: ivhd0 already exists; skipping it
ivhd0: <AMD-Vi/IOMMU ivhd in mixed format> on acpi0
ivhd0: Unknown dev entry:0xf0
Variable size IVHD type 0xf0 not supported
ivhd0: Flag:30<IotlbSup,Coherent>
ivhd0: Features(type:0x40) MsiNumPPR = 0 PNBanks= 2 PNCounters= 4
ivhd0: Extended features[31:0]:a2254afa<PPRSup,NXSup,GTSup,<b5>,IASup,GASup,PCSup> HATS = 0x2 GATS = 0x0 GLXSup = 0x1 SmiFSup = 0x1 SmiFRC = 0x1 GAMSup = 0x1 DualPortLogSup = 0x2 DualEventLogSup = 0x2
ivhd0: Extended features[62:32]:246577ef<USSup,PprOvrflwEarlySup,PPRAutoRspSup,BlKStopMrkSup,PerfOptSup,MsiCapMmioSup,GIOSup,EPHSup,InvIotlbSup> Max PASID: 0x2f DevTblSegSup = 0x3 MarcSup = 0x1
ivhd0: supported paging level:7, will use only: 4
ivhd0: device [0x3 - 0xfffe] config:0
ivhd0: device [0xff00 - 0xffff] config:0
ivhd0: PCI cap 0x190b640f@0x40 feature:19<IOTLB,EFR,CapExt>
amdviiommu0: attempting to allocate 1 MSI vectors (4 supported)
msi: routing MSI IRQ 75 to local APIC 4 vector 51
amdviiommu0: using IRQ 75 for MSI
vgapci0: detached
ppt0 mem 0xf4000000-0xf4ffffff,0xf800000000-0xfbffffffff at device 0.0 on pci3
ppt0: attached
Nov 18 14:54:12 asbach pulseaudio[5512]: [] module-x11-xsmp.c: Failed to open connection to session manager: None of the authentication protocols specified are supported
Nov 18 14:54:12 asbach pulseaudio[5512]: [] module.c: Failed to load module "module-x11-xsmp" (argument: "display=:1 xauthority=/tmp/xauth_VdxVIG session_manager=local/asbach.renzel.net:/tmp/.ICE-unix/5501"): initialization failed.
Nov 18 14:54:36 asbach su[5816]: nbe to root on /dev/pts/12
tap0: bpf attached
tap0: Ethernet address: 58:9c:fc:10:7a:30
tap0: promiscuous mode enabled
tap0: link state changed to UP
tap0: link state changed to DOWN
Nov 18 14:55:26 asbach su[6741]: nbe to root on /dev/pts/9
tap0: bpf attached
tap0: Ethernet address: 58:9c:fc:10:7a:30
tap0: promiscuous mode enabled
tap0: link state changed to UP
tap0: link state changed to DOWN
tap0: bpf attached
tap0: Ethernet address: 58:9c:fc:10:7a:30
tap0: promiscuous mode enabled
tap0: link state changed to UP
tap0: link state changed to DOWN
----------------------------- SNIP -----------------------------


pciconf looks good, too:
----------------------------- SNIP -----------------------------
#pciconf -l | grep ppt
ppt0@pci0:3:0:0:        class=0x030000 rev=0x00 hdr=0x00 vendor=0x8086 device=0xe20b subvendor=0x172f subdevice=0x0100
----------------------------- SNIP -----------------------------


But bhyve.log indeed says:
----------------------------- SNIP -----------------------------
bhyve: Warning: Unable to reuse host address of Graphics Stolen Memory. GPU passthrough might not work properly.
bhyve: gvt_d_setup_gsm: Unknown IGD device. It's not supported yet!: No such file or directory
bhyve: gvt_d_init: Unable to setup Graphics Stolen Memory
Device emulation initialization error: No such file or directory
----------------------------- SNIP -----------------------------


Resizable BAR is activated in UEFI - this Intel Arc GPU card needs it. SR-IOV is activated as well. As a counterexample: when I pass through my onboard NIC, bhyve works as expected.

So it probably has something to do with the unusual way the Intel GPU works with PCIe and its memory access...
Comment 5 Corvin Köhne freebsd_committer freebsd_triage 2025-11-19 06:46:38 UTC
(In reply to Nils Beyer from comment #4)
> ----------------------------- SNIP -----------------------------
> bhyve: Warning: Unable to reuse host address of Graphics Stolen Memory. GPU passthrough might not work properly.
> bhyve: gvt_d_setup_gsm: Unknown IGD device. It's not supported yet!: No such file or directory
> bhyve: gvt_d_init: Unable to setup Graphics Stolen Memory
> Device emulation initialization error: No such file or directory

First of all, GPU passthrough is not officially supported by bhyve; it's an experimental feature.

So far, bhyve only handles GPU passthrough for Intel's integrated graphics devices. It therefore uses a list of device IDs [1] to determine which quirks to apply to a given GPU. Your Intel Arc GPU isn't in this list, so it errors out with "Unknown IGD device".

[1] https://github.com/freebsd/freebsd-src/blob/9b0102837e305ca75de2bc14d284f786a33f9a6a/usr.sbin/bhyve/amd64/pci_gvt-d.c#L155
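The lookup Corvin describes can be sketched as follows. The structure names and device IDs here are made-up stand-ins for illustration, not the actual definitions from usr.sbin/bhyve/amd64/pci_gvt-d.c:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Per-generation quirk handlers (stubbed; the real ops carry callbacks). */
struct igd_ops_stub {
	const char *name;
};

struct igd_device_stub {
	uint16_t device_id;
	const struct igd_ops_stub *ops;
};

static const struct igd_ops_stub gen11_ops = { "gen11" };

/* Hypothetical device IDs standing in for the real igd_devices table. */
static const struct igd_device_stub igd_table[] = {
	{ 0xa780, &gen11_ops },
	{ 0xa7a0, &gen11_ops },
};

/*
 * Return the quirk ops for a known IGD device, or NULL for an unlisted
 * one.  A NULL result is what surfaces as "Unknown IGD device" in the
 * bug report: the Arc's device ID (0xe20b) simply isn't in the table.
 */
static const struct igd_ops_stub *
igd_lookup(uint16_t device_id)
{
	for (size_t i = 0; i < sizeof(igd_table) / sizeof(igd_table[0]); i++)
		if (igd_table[i].device_id == device_id)
			return (igd_table[i].ops);
	return (NULL);
}
```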

> Resizable BAR is activated in UEFI - this Intel Arc GPU card needs it. SR-IOV is activated as well. As a counterexample: when I pass through my onboard NIC, bhyve works as expected.

bhyve doesn't support resizable BAR. IIRC, it can cause issues when activated for a passthrough device.
Comment 6 Mark Johnston freebsd_committer freebsd_triage 2025-11-21 17:13:38 UTC
(In reply to Corvin Köhne from comment #5)
Do you have any plan to support this GPU?  Is it possible that it'll just work if one adds the GPU to the igd_devices list in usr.sbin/bhyve/amd64/pci_gvt-d.c?

> bhyve doesn't support resizable BAR. IIRC, it can cause issues when activated for an passthrough device.

AFAIU, it's the kernel itself that needs to support resizable BARs.  bhyve just uses /dev/pci to map them, is that right?
Comment 7 Nils Beyer 2025-11-21 17:23:01 UTC
(In reply to Mark Johnston from comment #6)

okay, I've tried that in a dumb way:
----------------------------- SNIP -----------------------------
diff --git a/usr.sbin/bhyve/amd64/pci_gvt-d.c b/usr.sbin/bhyve/amd64/pci_gvt-d.c
index 0ea53689f2b2..7d058f4b51a1 100644
--- a/usr.sbin/bhyve/amd64/pci_gvt-d.c
+++ b/usr.sbin/bhyve/amd64/pci_gvt-d.c
@@ -187,6 +187,7 @@ static const struct igd_device igd_devices[] = {
        INTEL_RPLS_IDS(IGD_DEVICE, &igd_ops_gen11),
        INTEL_RPLU_IDS(IGD_DEVICE, &igd_ops_gen11),
        INTEL_RPLP_IDS(IGD_DEVICE, &igd_ops_gen11),
+       INTEL_BMG_IDS(IGD_DEVICE, &igd_ops_gen11),
 };
 
 static const struct igd_ops *
----------------------------- SNIP -----------------------------


Didn't work - still exits with status 4, though with different messages this time:
----------------------------- SNIP -----------------------------
bhyve: Warning: Unable to reuse host address of Graphics Stolen Memory. GPU passthrough might not work properly.
bhyve: gvt_d_setup_opregion: Invalid OpRegion signature
bhyve: gvt_d_init: Unable to setup OpRegion
Device emulation initialization error: Invalid argument
----------------------------- SNIP -----------------------------


If there is anything I can try, or if you have any experimental code, I'm more than willing to try it...
Comment 8 Jonathan Vasquez 2025-11-22 17:48:24 UTC
For the record, I've been using GPU passthrough for a few months now and it's been pretty solid overall. There are some quirks, but once it's running, it's stable and I haven't had any crashes. My bhyve gaming VM just keeps running :). I've documented this at the links below and have uploaded multiple videos on my YouTube channel showing the performance:

And Corvin is right regarding the lack of resizable BAR support. That was the main culprit preventing my card from being used; once I turned it off, it worked. Although I believe I'm bottlenecked at the moment because of that, I can still play Cyberpunk 2077 at 40 fps with maximum settings (the card can handle more, so it's not a card limitation), and it goes up to 80 fps with frame generation enabled.

Blog Post
- https://xyinn.org/blog/freebsd/freebsd_bhyve_gpu_passthrough_amd

GPU Passthrough On FreeBSD 14.3-RELEASE - Gaming in a Virtual Machine - Overview Demo
- https://www.youtube.com/watch?v=Ob4-v7dTJGs

GPU Passthrough On FreeBSD 14.3-RELEASE - Gaming in a Virtual Machine - Performance Demo
- https://www.youtube.com/watch?v=_cz0RUAw5p8

Virtual Machine Gaming - Cyberpunk 2077 - #1
- https://www.youtube.com/watch?v=pgbms-c-cGI
Comment 9 mario felicioni 2025-11-22 19:17:40 UTC
Hey bro.

Do you know that you can compile Cyberpunk 2077 natively on FreeBSD, so that you can play it directly at native speed? Well, now you know.
Comment 10 Corvin Köhne freebsd_committer freebsd_triage 2025-11-24 07:21:59 UTC
(In reply to Mark Johnston from comment #6)
> Do you have any plan to support this GPU?  Is it possible that it'll just work
> if one adds the GPU to the igd_devices list in usr.sbin/bhyve/amd64/pci_gvt-d.c?

I don't have that hardware, so I can't test or work on it. Adding it to the list of igd_devices won't work, because bhyve then tries to apply IGD-related quirks. Those quirks are highly platform dependent, and I doubt they are required for dedicated GPUs, since those have to work independently of the platform. It would be worth seeing what happens when those quirks are disabled and the GPU is simply passed through. So maybe someone can try making gvt_d_probe always return ENXIO:

static int
gvt_d_probe(struct pci_devinst *const pi)
{
	/* Decline GVT-d handling so bhyve falls back to plain passthru. */
	return (ENXIO);
}
Comment 11 mario felicioni 2025-11-24 07:25:59 UTC
(In reply to mario felicioni from comment #9)

Ignore this comment. It's not that game which I was referring to.
Comment 12 Nils Beyer 2025-11-24 12:19:33 UTC
(In reply to Corvin Köhne from comment #10)

okay, I've applied that, and bhyve actually runs now while the Intel GPU card is passed through.

Unfortunately, within the VM the card does not function properly:
----------------------------- SNIP -----------------------------
[    0.000000] DMI: FreeBSD BHYVE/BHYVE, BIOS 14.0 10/17/2021
(...)
[    4.243973] xe 0000:00:06.0: [drm] Found BATTLEMAGE (device ID e20b) display version 14.01 stepping B0
[    4.245823] xe 0000:00:06.0: [drm] Using GuC firmware from xe/bmg_guc_70.bin version 70.40.2
[    4.250205] xe 0000:00:06.0: [drm] *ERROR* GT0: load failed: status = 0x40000056, time = 0ms, freq = 2150MHz (req 2133MHz), done = -1
[    4.250236] xe 0000:00:06.0: [drm] *ERROR* GT0: load failed: status: Reset = 0, BootROM = 0x2B, UKernel = 0x00, MIA = 0x00, Auth = 0x01
[    4.250257] xe 0000:00:06.0: [drm] *ERROR* GT0: firmware production part check failure
[    4.250273] xe 0000:00:06.0: [drm] *ERROR* CRITICAL: Xe has declared device 0000:00:06.0 as wedged.
               IOCTLs and executions are blocked. Only a rebind may clear the failure
               Please file a _new_ bug report at https://gitlab.freedesktop.org/drm/xe/kernel/issues/new
[    4.312126] xe 0000:00:06.0: [drm] *ERROR* GT0: GuC mmio request 0x4100: no reply 0x4100
[    4.312152] xe 0000:00:06.0: probe with driver xe failed with error -110
----------------------------- SNIP -----------------------------

I think that the failed mmio request has something to do with the missing BAR resize function, correct?
Comment 13 Corvin Köhne freebsd_committer freebsd_triage 2025-11-25 06:39:50 UTC
(In reply to Nils Beyer from comment #12)
As mentioned earlier, resizable BARs are not supported, so you should disable them in your host BIOS. Additionally, it would be worth trying to pass the GPU ROM to the guest. To do so, you have to extract it on a Linux or Windows system, e.g. http://etherboot.org/wiki/romdumping, and then make use of the rom parameter of passthru.
Comment 14 Nils Beyer 2025-12-01 11:33:44 UTC
Okay, thanks for all the hints and tips on what to try, but I've given up on that card. It is even a hassle to get its passthrough working under Linux, and its LLM inference performance is not satisfactory for my use case.

I'm closing this bug report now, as it is not a bug per se.

Again, thanks for all of your help so far, guys...