Bug 273732 - 13.2-RELEASE-p3 Linux VMs stopped working
Status: Closed DUPLICATE of bug 273560
Alias: None
Product: Base System
Classification: Unclassified
Component: bhyve
Version: 13.2-RELEASE
Hardware: amd64 Any
Importance: --- Affects Only Me
Assignee: freebsd-virtualization (Nobody)
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2023-09-11 23:22 UTC by courtney.hicks1
Modified: 2023-09-12 09:37 UTC

Description courtney.hicks1 2023-09-11 23:22:09 UTC
I just updated from FreeBSD 13.2-RELEASE-p2 to FreeBSD 13.2-RELEASE-p3, and suddenly my Linux virtual machines no longer boot properly; they panic. They seem to get stuck just after loading the PS/2 devices. I wish I had more detail, but I don't see anything in the logs. They are Ubuntu 20.04 and Ubuntu 22.04 VMs started with vm-bhyve.
Comment 1 courtney.hicks1 2023-09-11 23:44:54 UTC
Got a trace from the Ubuntu 20.04 VM:

[  242.706918] INFO: task systemd-udevd:143 blocked for more than 120 seconds.
[  242.707824]       Not tainted 5.4.0-162-generic #179-Ubuntu
[  242.708547] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  242.709539] systemd-udevd   D    0   143    141 0x80004004
[  242.710245] Call Trace:
[  242.710577]  __schedule+0x2e3/0x740
[  242.710902]  schedule+0x42/0xb0
[  242.711313]  io_schedule+0x16/0x40
[  242.711763]  do_read_cache_page+0x438/0x840
[  242.712306]  ? file_fdatawait_range+0x30/0x30
[  242.712868]  read_cache_page+0x12/0x20
[  242.713353]  read_dev_sector+0x27/0xd0
[  242.713841]  read_lba+0xbd/0x220
[  242.714267]  ? kmem_cache_alloc_trace+0x1b0/0x240
[  242.714905]  efi_partition+0x1e0/0x700
[  242.715401]  ? vsnprintf+0x39e/0x4e0
[  242.715871]  ? snprintf+0x49/0x60
[  242.716306]  check_partition+0x154/0x250
[  242.716818]  rescan_partitions+0xae/0x280
[  242.717342]  bdev_disk_changed+0x5f/0x70
[  242.717853]  __blkdev_get+0x3e3/0x580
[  242.718335]  blkdev_get+0x3d/0x150
[  242.718781]  __device_add_disk+0x329/0x480
[  242.719434]  device_add_disk+0x13/0x20
[  242.719930]  virtblk_probe+0x4b5/0x847 [virtio_blk]
[  242.720561]  virtio_dev_probe+0x195/0x230
[  242.721083]  really_probe+0x159/0x3d0
[  242.721565]  driver_probe_device+0xbc/0x100
[  242.722109]  device_driver_attach+0x5d/0x70
[  242.722677]  __driver_attach+0xa4/0x140
[  242.722903]  ? device_driver_attach+0x70/0x70
[  242.723468]  bus_for_each_dev+0x7e/0xc0
[  242.723968]  driver_attach+0x1e/0x20
[  242.724435]  bus_add_driver+0x161/0x200
[  242.724935]  driver_register+0x74/0xd0
[  242.725425]  register_virtio_driver+0x20/0x30
[  242.725994]  init+0x54/0x1000 [virtio_blk]
[  242.726532]  ? 0xffffffffc0342000
[  242.726902]  do_one_initcall+0x4a/0x200
[  242.727404]  ? _cond_resched+0x19/0x30
[  242.727895]  ? kmem_cache_alloc_trace+0x1b0/0x240
[  242.728504]  do_init_module+0x52/0x240
[  242.728989]  load_module+0x128d/0x13d0
[  242.729479]  __do_sys_finit_module+0xbe/0x120
[  242.730040]  ? __do_sys_finit_module+0xbe/0x120
[  242.730902]  __x64_sys_finit_module+0x1a/0x20
[  242.731469]  do_syscall_64+0x57/0x190
[  242.731950]  entry_SYSCALL_64_after_hwframe+0x5c/0xc1
[  242.732600] RIP: 0033:0x7f5b011ac73d
[  242.733066] Code: Bad RIP value.
[  242.733489] RSP: 002b:00007ffc02631488 EFLAGS: 00000246 ORIG_RAX: 0000000000000139
[  242.734457] RAX: ffffffffffffffda RBX: 0000561faf5567e0 RCX: 00007f5b011ac73d
[  242.734901] RDX: 0000000000000000 RSI: 00007f5b0108cded RDI: 0000000000000005
[  242.735810] RBP: 0000000000020000 R08: 0000000000000000 R09: 0000561faf535e80
[  242.736721] R10: 0000000000000005 R11: 0000000000000246 R12: 00007f5b0108cded
[  242.737633] R13: 0000000000000000 R14: 0000561faf5513d0 R15: 0000561faf5567e0



vm-bhyve configuration file:

loader="uefi"
cpu=2
memory=2048M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
grub_run_partition="2"
disk1_name="disk1"
disk1_type="virtio-blk"
disk1_dev="sparse-zvol"
uuid="2920ce51-045a-4fa6-8850-c14634fb0bd3"


I also have a Devuan 4 virtual machine that is stuck at "Unable to enable ACPI".
Comment 2 Corvin Köhne 2023-09-12 06:01:49 UTC
Looks like a duplicate of https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=273560

Please make sure to boot bhyve with the -A option.
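
For anyone starting bhyve by hand rather than through a front end, a minimal sketch of a UEFI guest invocation with ACPI table generation enabled looks roughly like the following; the PCI slot numbers, tap device, image path, and guest name are illustrative and not taken from this report:

# -A generates ACPI tables for the guest (the option this bug hinges on);
# -H yields the host CPU on HLT, -P exits the vCPU on PAUSE
bhyve -A -H -P \
    -c 2 -m 2048M \
    -s 0,hostbridge \
    -s 3,virtio-blk,/vm/ubuntu/disk0.img \
    -s 4,virtio-net,tap0 \
    -s 31,lpc \
    -l com1,stdio \
    -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
    ubuntu

vm-bhyve normally assembles an equivalent command from a configuration file like the one above, so the practical question is only whether -A ends up in the generated arguments.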
Comment 3 courtney.hicks1 2023-09-12 06:13:14 UTC
Thanks! That looks correct.

The solution for me was to apply this patch to /usr/local/lib/vm-bhyve/vm-run:

https://github.com/churchers/vm-bhyve/pull/525/commits
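
In case it helps others hitting this, one quick way to confirm whether the front end is already passing the flag is to look at the argument list of the running bhyve process; the guest path below is illustrative:

# list running bhyve processes with their full argument lists and check for -A
pgrep -lf bhyve
# vm-bhyve also records the generated bhyve options in the per-guest log,
# e.g. (path depends on your configured vm_dir)
tail /vm/ubuntu/vm-bhyve.log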
Comment 4 Corvin Köhne 2023-09-12 09:37:26 UTC

*** This bug has been marked as a duplicate of bug 273560 ***