Hi! First, thanks for your work on this port. Unfortunately, after upgrading the bhyve flavor package to the latest version (202308), the Windows 10 Pro virtual machines I have running under bhyve stopped working: they crash on boot with error 0xc0000225. I initially blamed Windows for this, but I was also unable to boot the official Windows 10 installation media to try recovery or a fresh installation. The installation media showed a blue screen and 100% vCPU usage for a while, then crashed. Googling around, there were indications that similar behaviour was showing up on physical machines with buggy UEFI BIOSes. At this point I noticed that the UEFI firmware provided by the edk2 port was recently updated. I have now reverted to the previous version (202202), grabbing the port before commit 8097dda40a03b8a27a1edf1f31a8af0455a52baf, and the Windows VMs are now working fine again without any other change. I think this regression should be investigated, at least. Maybe upstream already has a fix for this? If any further information is required, please ask. Thanks in advance!
Adding Corvin as he did the last update.
Could you please share your bhyve command to start the vm?
I'm actually using vm-bhyve to start the machine. In its logs it reports:

Sep 04 18:09:07: [bhyve options: -c 2,sockets=1,cores=2,threads=1 -m 3G -Hwl bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd -U xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx]
Sep 04 18:09:07: [bhyve devices: -s 0,hostbridge -s 31,lpc -s 4:0,ahci,hd:/dev/zvol/zroot/bhyve/W64/disk0 -s 5:0,virtio-net,tap1,mac=xx:xx:xx:xx:xx:xx -s 6:0,fbuf,tcp=127.0.0.1:5900,w=1600,h=900 -s 7:0,xhci,tablet -s 8:0,hda,play=/dev/dsp2]
Sep 04 18:09:07: [bhyve console: -l com1,stdio]

(UUID and MAC address redacted) I'm checking how to extract the full command line.
This is the full command line being used:

bhyve -c 2,sockets=1,cores=2,threads=1 -m 3G -Hwl bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd -U xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx -s 0,hostbridge -s 31,lpc -s 4:0,ahci,hd:/dev/zvol/zroot/bhyve/W64/disk0 -s 5:0,virtio-net,tap1,mac=xx:xx:xx:xx:xx:xx -s 6:0,fbuf,tcp=127.0.0.1:5900,w=1600,h=900 -s 7:0,xhci,tablet -s 8:0,hda,play=/dev/dsp2 -l com1,stdio W64
I forgot: when trying to start with the Windows installation media, this is added to the command line: -s 3:0,ahci-cd,/path/to/Windows.iso,ro (path redacted for brevity)
All my FreeBSD EFI VMs are failing to boot too with this latest update. I'm using the vm tool. VM template named "uefivm":

loader="uefi"
cpu=4
cpu_sockets=1
cpu_cores=2
cpu_threads=2
bhyve_options="-p 0:0 -p 1:1 -p 2:2 -p 3:3"
memory=10G
network0_type="e1000"
network0_switch="public"
disk0_type="ahci-hd"
disk0_name="disk0.img"
disk0_size="16G"

sudo vm iso https://download.freebsd.org/ftp/releases/ISO-IMAGES/13.2/FreeBSD-13.2-RELEASE-amd64-disc1.iso
sudo vm create -t uefivm fbsd13
sudo vm install fbsd13 FreeBSD-13.2-RELEASE-amd64-disc1.iso
sudo vm console fbsd13
(etc.)

---<<BOOT>>---
Firmware Error (ACPI): A valid RSDP was not found (20201113/tbxfroot-369)
panic: running without device atpic requires a local APIC
Confirmed with a FreeBSD VM running recent 15.0-CURRENT on a 13.2-RELEASE host. The guest boots fine with g202202 but encounters the following panic when restarted after upgrading edk2-bhyve to g202308:

running without device atpic requires a local APIC

The full command line is:

bhyve -c 8 -m 16GB -Hwl bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd -U c057d57e-a877-11ed-89df-3c7c3ff07902 -u -s 0,hostbridge -s 31,lpc -s 4:0,virtio-blk,/dev/zvol/zroot/vm/crash/disk0 -s 5:0,virtio-net,tap1,mac=58:9c:fc:04:d2:00 -l com1,/dev/nmdm-crash.1A
I'm not able to boot up a bhyve VM running a very recent FreeBSD-main. The VM is configured for UEFI boot and I think the problem started after the update to 202308.

---<<BOOT>>---
Firmware Error (ACPI): A valid RSDP was not found (20221020/tbxfroot-383)
panic: running without device atpic requires a local APIC
cpuid = 0
time = 1
KDB: stack backtrace:
db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame 0xffffffff821bbdf0
vpanic() at vpanic+0x132/frame 0xffffffff821bbf20
panic() at panic+0x43/frame 0xffffffff821bbf80
apic_init() at apic_init+0xfc/frame 0xffffffff821bbfa0
mi_startup() at mi_startup+0x19c/frame 0xffffffff821bbff0
KDB: enter: panic
[ thread pid 0 tid 0 ]
Stopped at kdb_enter+0x32: movq $0,0xe29fe3(%rip)
db>
Same problem hit me with a Home Assistant install. Downgrading to g202202_10 fixes the breakage.

re: https://twitter.com/DLangille/status/1698801184310530135

Lots of start-up output in this gist (linked to from the Twitter thread): https://gist.github.com/dlangille/24f0690ee0aaa8bba86f08d1b766859b

Config is:

[23:11 r730-01 dvl ~] % cat /usr/local/vm/hass/hass.conf
loader="uefi"
cpu="4"
memory="8GB"
network0_type="virtio-net"
network0_switch="public"
#disk0_type="nvme"
#disk0_type="ahci-hd"
disk0_type="virtio-blk"
disk0_name="disk0.img"
grub_run_partition="1"
grub_run_dir="/boot/grub"
uuid="9aae377a-6c06-11ed-a655-002590fa0f10"
network0_mac="58:9c:fc:08:5d:13"
I'm unable to reproduce the issue yet. I'm running 14.0-ALPHA4. Will retry with a 13.2 system. Looks like UEFI fails to install ACPI tables. It might be helpful to create a debug log to further investigate the issue:

diff --git a/sysutils/edk2/Makefile b/sysutils/edk2/Makefile
index cb6ed51d0105..32e8f63435c7 100644
--- a/sysutils/edk2/Makefile
+++ b/sysutils/edk2/Makefile
@@ -114,7 +114,8 @@ ONLY_FOR_ARCHS=	amd64
 ONLY_FOR_ARCHS_REASON=	Bhyve only runs on x64
 PLAT=		bhyve
 PLAT_ARCH=	X64
-PLAT_TARGET=	RELEASE
+PLAT_TARGET=	DEBUG
+PLAT_ARGS+=	-D DEBUG_ON_SERIAL_PORT=TRUE
 PLATFILE=	OvmfPkg/Bhyve/BhyveX64.dsc
 PLAT_RESULT=	BhyveX64/${PLAT_TARGET}_GCC5/FV/BHYVE.fd
 PLAT_RESULT_CODE=	BhyveX64/${PLAT_TARGET}_GCC5/FV/BHYVE_CODE.fd
Created attachment 244677: debug log
I'm still unable to reproduce the issue on a fresh 13.2 install. I've used Win10 Pro and the following bhyve call:

bhyve -A -H -P -w -c 'cores=4' -m 4G -s 0:0,hostbridge -s 1:0,nvme,win10.raw -s 2:0,xhci,tablet -s '3:0,fbuf,tcp=0.0.0.0:6100' -l com1,stdio -l bootrom,/usr/local/share/edk2-bhyve/BHYVE_UEFI.fd -o 'pci.0.31.0.device=lpc' win10

From the attached log, the issue arises at:

OnRootBridgesConnected: root bridges have been connected, installing ACPI tables
OnRootBridgesConnected: InstallAcpiTables: Not Found
Could you please try the following OVMF patch:

diff --git a/OvmfPkg/Bhyve/AcpiPlatformDxe/AcpiPlatform.c b/OvmfPkg/Bhyve/AcpiPlatformDxe/AcpiPlatform.c
index fb926a8bd803..4b80c27ff00d 100644
--- a/OvmfPkg/Bhyve/AcpiPlatformDxe/AcpiPlatform.c
+++ b/OvmfPkg/Bhyve/AcpiPlatformDxe/AcpiPlatform.c
@@ -259,19 +259,17 @@ InstallAcpiTables (
              BHYVE_BIOS_PHYSICAL_END,
              &Rsdp
              );
-  if (EFI_ERROR (Status)) {
-    return Status;
-  }
-
-  Status = InstallAcpiTablesFromRsdp (
-             AcpiTable,
-             Rsdp
-             );
   if (!EFI_ERROR (Status)) {
-    return EFI_SUCCESS;
+    Status = InstallAcpiTablesFromRsdp (
+               AcpiTable,
+               Rsdp
+               );
+    if (!EFI_ERROR (Status)) {
+      return EFI_SUCCESS;
+    }
   }
-  if (Status != EFI_NOT_FOUND) {
+  if (EFI_ERROR (Status)) {
     DEBUG (
       (
         DEBUG_WARN,
@@ -280,7 +278,6 @@
         Status
       )
     );
-    return Status;
   }
   Status = InstallOvmfFvTables (AcpiTable);
It was able to boot with this patch! There is still the message:

OnRootBridgesConnected: root bridges have been connected, installing ACPI tables
InstallAcpiTables: unable to install bhyve's ACPI tables (Not Found)
(In reply to Corvin Köhne from comment #13) I tested the patch too and windows VMs now boot fine. I had no output before and see nothing special now, but they do work. Thanks!
(In reply to Olivier Cochard from comment #14) > It was able too boot with this patch! > > There is still the message: > > OnRootBridgesConnected: root bridges have been connected, installing ACPI tables > InstallAcpiTables: unable to install bhyve's ACPI tables (Not Found) Before 202308, OVMF installs ACPI tables which are statically included in the OVMF binary. That's enough for booting a VM. However, it's not recommended. Those ACPI tables don't match your vm configuration. As bhyve already dynamically creates ACPI tables based on the vm configuration, 202308 tries to pick them up. This message says that OVMF was unable to pick up bhyve's ACPI tables and falls back to the static ones. It's a bit strange that OVMF fails to pick up the bhyve ACPI tables. Not sure why this occurs.
I just tried the patch (edk2-bhyve-g202308_1). It does not solve the problem for me. Am I doing this wrong?

[    1.707135] serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A
[    1.750002] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x19ef99f94a1, max_idle_ns: 440795279648 ns
[    1.752341] clocksource: Switched to clocksource tsc
[    2.027877] serial8250: ttyS3 at I/O 0x2e8 (irq = 3, base_baud = 115200) is a 16550A
[    2.030246] Non-volatile memory driver v1.3
[    2.031236] Linux agpgart interface v0.103
[    2.039806] loop: module loaded
[    2.040674] virtio_blk virtio0: 1/0/0 default/read/poll queues
[    2.043124] virtio_blk virtio0: [vda] 67108864 512-byte logical blocks (34.4 GB/32.0 GiB)
(In reply to Dan Langille from comment #17) Looks like a different issue. Does it fail to boot with all disk types?
(In reply to Corvin Köhne from comment #18) Yes, see comment #9
I can't see from the logs why the boot fails. It would be a good idea to bisect edk2. Do you know how to do that? Or can you share some instructions on how to reproduce the issue?
(In reply to Corvin Köhne from comment #20)

I do not know how to bisect. If instructed, I can try. The vm configuration is:

loader="uefi"
cpu="4"
memory="8GB"
network0_type="virtio-net"
network0_switch="public"
#disk0_type="nvme"
#disk0_type="ahci-hd"
disk0_type="virtio-blk"
disk0_name="disk0.img"
grub_run_partition="1"
grub_run_dir="/boot/grub"
uuid="9aae377a-6c06-11ed-a655-002590fa0f10"
network0_mac="58:9c:fc:08:5d:13"

The host is running a Home Assistant instance. Details on creating that instance are at https://dan.langille.org/2023/02/27/home-assistant-running-natively-on-freebsd-via-bhyve/
Hi there,

Today I faced some issues with a migration of a working RedHat VM between two servers (S1 and S2):

Server1 (S1): 13.2p2 with edk2-bhyve-g202202_10
Server2 (S2): 13.2p2 with edk2-bhyve-g202308 (VM will fail to boot properly)

On server S2, the VM failed to boot with multiple errors:
- ACPI not found
- then dracut udev timeouts, etc.

The only difference between the servers was the edk2 UEFI firmware. Rolling back from edk2-bhyve-g202308 to edk2-bhyve-g202202_10 solved the issue. I also had some other appliances failing with different errors, like virtio-net timeouts triggering kernel panics. Those also seem to be stable after the downgrade to g202202_10.
I ran across this issue with a Ubuntu VM using vm-bhyve. As a workaround, adding bhyve_options="-A" to the VM configuration is enough to get it to run.
(In reply to Sean Farley from comment #23) Thank you. Adding 'bhyve_options="-A"' lets my VM boot with edk2-bhyve-g202308_1
Oh, I totally missed that. Yes, you need -A to trigger bhyve's ACPI table generation. As stated earlier, the OVMF binary ships with static ACPI tables. Those tables may work somehow, but most of the time they don't match your vm configuration, so they are just wrong. I highly recommend always using -A.
A commit in branch main references this bug: URL: https://cgit.FreeBSD.org/ports/commit/?id=d64f4b43b1d2e784c837bf38e3c2c0829e9c9f27 commit d64f4b43b1d2e784c837bf38e3c2c0829e9c9f27 Author: Corvin Köhne <corvink@FreeBSD.org> AuthorDate: 2023-09-07 08:35:35 +0000 Commit: Corvin Köhne <corvink@FreeBSD.org> CommitDate: 2023-09-08 06:53:32 +0000 OvmfPkg/Bhyve: don't exit early if RSDP is not found in memory If OVMF fails to find the RSDP in memory, it should fall back installing the statically provided ACPI tables. Signed-off-by: Corvin Köhne <corvink@FreeBSD.org> PR: 273560 Reviewed by: madpilot, manu Approved by: manu Fixes: 8097dda40a03b8a27a1edf1f31a8af0455a52baf ("sysutils/edk2: update to 202308") Sponsored by: Beckhoff Automation GmbH & Co. KG Differential Revision: https://reviews.freebsd.org/D41769 sysutils/edk2/Makefile | 1 + ...fPkg_Bhyve_AcpiPlatformDxe_AcpiPlatform.c (new) | 38 ++++++++++++++++++++++ 2 files changed, 39 insertions(+)
Corvin, should -A be the default for amd64 guests? The man page says it's required, so I wonder why it's not the default.
For those of us using vm-bhyve: it correctly includes -A in its default options, but in the UEFI case it then incorrectly replaces the default options with the UEFI-specific ones instead of combining the two. See https://github.com/churchers/vm-bhyve/pull/525 for a fix.
*** Bug 273732 has been marked as a duplicate of this bug. ***
Unfortunately, even with the newer revision of the port I'm still getting an error on 14-STABLE; switching to the older version g202202_10 solves the issue.

# pkg info edk2-bhyve
edk2-bhyve-g202308_3
...

# bhyve -ADHw -u -c 2 -m 1G -s 0,amd_hostbridge -s 31,lpc -s 1,virtio-net,netgraph,socket=vm1-0,path=switch1:,hook=vid1,peerhook=link10,mac=02:da:00:11:01:00 -s 3,virtio-blk,/dev/zvol/raid5-1/bhyve/docker1.disk1 -s 4,virtio-blk,/dev/zvol/raid5-1/bhyve/docker1.disk2 -l com1,/dev/nmdm11A -l bootrom,/usr/local/share/edk2-bhyve/BHYVE_UEFI.fd docker1

vm exit[0] reason SVM rip 0x000000003fa9cb60 inst_length 2 exitcode 0x7b exitinfo1 0x511021d exitinfo2 0x3fa9cb62
Abort
Probably the same issue here (1.5.0/uefi/13.2-RELEASE-p8): the VMs stopped working after the recent updates.