Bug 203820 - kernel panic when trying to unload vmm(4) and VirtualBox VMs are running
Summary: kernel panic when trying to unload vmm(4) and VirtualBox VMs are running
Status: Open
Alias: None
Product: Base System
Classification: Unclassified
Component: kern
Version: 10.2-RELEASE
Hardware: amd64 Any
Importance: --- Affects Some People
Assignee: freebsd-virtualization (Nobody)
URL:
Keywords: crash
Depends on:
Blocks:
 
Reported: 2015-10-16 14:37 UTC by martin
Modified: 2023-08-18 06:32 UTC
CC List: 5 users

See Also:



Description martin 2015-10-16 14:37:14 UTC
Executing: 

# kldunload vmm 

issues kernel panic with the message: 

panic: general protection fault

Fatal trap 9: general protection fault while in kernel mode
cpuid = 2; instruction pointer  = 0x20:0xffffffff821f542b
instruction pointer     = 0x20:0xffffffff821f542b
stack pointer           = 0x28:0xfffffe082cdb6800
stack pointer           = 0x28:0xfffffe082cdb1800
frame pointer           = 0x28:0xfffffe082cdb6830
frame pointer           = 0x28:0xfffffe082cdb1830
code segment            = base rx0, limit 0xfffff, type 0x1b
code segment            = base rx0, limit 0xfffff, type 0x1b
                        = DPL 0, pres 1, long 1, def32 0, gran 1
                        = DPL 0, pres 1, long 1, def32 0, gran 1
processor eflags        = processor eflags      = resume, resume, IOPL = 0
IOPL = 0
current process         = 11 (idle: cpu7)
current process         = 11 (idle: cpu6)
trap number             = 9
trap number             = 9
panic: general protection fault
cpuid = 7
KDB: stack backtrace:
#0 0xffffffff80996d90 at kdb_backtrace+0x60
#1 0xffffffff8095a946 at vpanic+0x126
#2 0xffffffff8095a813 at panic+0x43
#3 0xffffffff80d98b1b at trap_fatal+0x36b
#4 0xffffffff80d9879c at trap+0x75c
#5 0xffffffff80d7e832 at calltrap+0x8
#6 0xffffffff809a372c at smp_rendezvous_action+0xbc
#7 0xffffffff80d7fc49 at Xrendezvous+0x89
#8 0xffffffff8037de1a at acpi_cpu_idle+0x15a
#9 0xffffffff80d8238f at cpu_idle_acpi+0x3f
#10 0xffffffff80d82430 at cpu_idle+0x90
#11 0xffffffff809872e5 at sched_idletd+0x1d5
#12 0xffffffff809243aa at fork_exit+0x9a
#13 0xffffffff80d7ed6e at fork_trampoline+0xe
Uptime: 12h9m9s


No bhyve VM is running or defined on the system. 

FreeBSD version: 10.2-RELEASE-p5 r289315 (svn)

Custom kernel, diff between GENERIC and mine: 

(/usr/src/sys/amd64/conf)# diff GENERIC TRIVEVERKY
22c22
< ident         GENERIC
---
> ident         TRIVEVERKY
366a367,390
>
>
> ###
> # custom add ons
>
> # AES support
> device          crypto
> device          cryptodev
> device          aesni
>
> # PF
> device                pf
> device                pflog
> device                pfsync
> options         ALTQ
> options         ALTQ_CBQ        # Class Bases Queuing (CBQ)
> options         ALTQ_RED        # Random Early Detection (RED)
> options         ALTQ_RIO        # RED In/Out
> options         ALTQ_HFSC       # Hierarchical Packet Scheduler (HFSC)
> options         ALTQ_PRIQ       # Priority Queuing (PRIQ)
> options         ALTQ_NOPCC      # Required for SMP build
>
> # CPU temp
> device          coretemp

(/usr/src/sys/amd64/conf)#

A crash dump is available. 
The panic can be reproduced with: 

# kldload vmm
# kldunload vmm


/etc/make.conf

STRIP=
CFLAGS+=-fno-omit-frame-pointer
NO_PROFILE=true
WITHOUT_X=yes
WITH_X=NO
ENABLE_GUI=NO
OPTIONS_UNSET=X11
WITH_PKGNG=yes
DEFAULT_VERSIONS+=perl5=5.20


/boot/loader.conf

zfs_load=YES
vfs.root.mountfrom="zfs:rpool/ROOT/10.2"
dtraceall_load=YES
boot_multicons="YES"
console="vidconsole,comconsole"
comconsole_speed="115200"
loader_logo="beastiebw"

Hardware: 
  motherboard: S1200BTS
  CPU:         Intel(R) Xeon(R) CPU E3-1240 V2 @ 3.40GHz
  RAM:         32GB ECC KVR16E11/8I
Comment 1 martin 2015-10-19 11:42:13 UTC
# kgdb kernel.debug /var/crash/vmcore.0
(kgdb) list *0xffffffff821f542b
0xffffffff821f542b is in vmx_disable (cpufunc.h:423).
418     }
419
420     static __inline void
421     load_cr4(u_long data)
422     {
423             __asm __volatile("movq %0,%%cr4" : : "r" (data));
424     }
425
426     static __inline u_long
427     rcr4(void)
Current language:  auto; currently minimal
(kgdb) backtrace
#0  doadump (textdump=<value optimized out>) at pcpu.h:219
#1  0xffffffff8095a5a2 in kern_reboot (howto=260) at /usr/src/sys/kern/kern_shutdown.c:451
#2  0xffffffff8095a985 in vpanic (fmt=<value optimized out>, ap=<value optimized out>) at /usr/src/sys/kern/kern_shutdown.c:758
#3  0xffffffff8095a813 in panic (fmt=0x0) at /usr/src/sys/kern/kern_shutdown.c:687
#4  0xffffffff80d98b1b in trap_fatal (frame=<value optimized out>, eva=<value optimized out>) at /usr/src/sys/amd64/amd64/trap.c:851
#5  0xffffffff80d9879c in trap (frame=<value optimized out>) at /usr/src/sys/amd64/amd64/trap.c:203
#6  0xffffffff80d7e832 in calltrap () at /usr/src/sys/amd64/amd64/exception.S:236
#7  0xffffffff821f542b in vmx_disable (arg=0x0) at /usr/src/sys/modules/vmm/../../amd64/vmm/intel/vmx.c:481
#8  0xffffffff809a372c in smp_rendezvous_action () at /usr/src/sys/kern/subr_smp.c:439
#9  0xffffffff80d7fc49 in Xrendezvous () at apic_vector.S:295
#10 0xffffffff80d7be56 in acpi_cpu_c1 () at /usr/src/sys/amd64/acpica/acpi_machdep.c:95
#11 0xffffffff8037de1a in acpi_cpu_idle (sbt=<value optimized out>) at /usr/src/sys/dev/acpica/acpi_cpu.c:1038
#12 0xffffffff80d8238f in cpu_idle_acpi (sbt=83757098) at /usr/src/sys/amd64/amd64/machdep.c:682
#13 0xffffffff80d82430 in cpu_idle (busy=0) at /usr/src/sys/amd64/amd64/machdep.c:828
#14 0xffffffff809872e5 in sched_idletd (dummy=<value optimized out>) at /usr/src/sys/kern/sched_ule.c:2662
#15 0xffffffff809243aa in fork_exit (callout=0xffffffff80987110 <sched_idletd>, arg=0x0, frame=0xfffffe082cdb6ac0) at /usr/src/sys/kern/kern_fork.c:1018
#16 0xffffffff80d7ed6e in fork_trampoline () at /usr/src/sys/amd64/amd64/exception.S:611
#17 0x0000000000000000 in ?? ()
(kgdb)
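For readers following the backtrace: the faulting instruction is the CR4 write in load_cr4() (cpufunc.h:423), reached from vmx_disable() during the SMP rendezvous that runs on module unload. A rough paraphrase of that path (a sketch, not the verbatim 10.2 sources):

/*
 * Sketch of the unload path implicated by the backtrace above;
 * paraphrased, not the actual sys/amd64/vmm/intel/vmx.c code.
 */
#include <sys/param.h>
#include <machine/cpufunc.h>	/* rcr4(), load_cr4() */
#include <machine/specialreg.h>	/* CR4_VMXE */

static void
vmx_disable_sketch(void *arg __unused)
{
	/*
	 * On "kldunload vmm" this runs on every CPU via an SMP rendezvous,
	 * which is why the fault is taken from the idle threads above.
	 * Clearing CR4.VMXE is only permitted while the CPU is not in VMX
	 * operation; if some other hypervisor still has this CPU in VMX
	 * operation, the write below raises the general protection fault
	 * seen in the panic.
	 */
	load_cr4(rcr4() & ~CR4_VMXE);
}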
Comment 2 martin 2015-10-28 18:19:04 UTC
I neglected to mention that this machine is running VirtualBox; it hosts several VMs (all the VMs here are VirtualBox VMs). 

I did the following tests:

0) fresh boot without any VirtualBox kernel modules
1) fresh boot with VirtualBox modules loaded but no VM running
2) same as 1) but with at least one VM running

The kernel crashed during kldunload only when a VM was running; every other time the unload succeeded. Also, if I started a VM, suspended it, and then unloaded vmm, everything was OK. 

The crash occurs in the same function (step) as mentioned above. I was able to reproduce it every time a VM was running.
Comment 3 martin 2015-10-29 12:07:50 UTC
Tested on different HW - same behavior.
Comment 4 Jason Unovitch freebsd_committer freebsd_triage 2015-10-31 15:21:25 UTC
Forums cross reference:  https://forums.FreeBSD.org/threads/kernel-panic-when-trying-to-unload-vmm.53697/
Comment 5 John Baldwin freebsd_committer freebsd_triage 2015-12-05 14:03:20 UTC
In general I think VM monitors assume that no other monitors are running.  I believe OS X has a kernel-level API for monitors to use to try to mitigate this, but FreeBSD does not.  For example, I fixed bhyve VMs to work across suspend and resume (of a host laptop), but that fix is specific to bhyve and does not support other VM monitors.

In general, VM monitors like bhyve assume that they "own" all of the VT-x (or SVM) state and that they are not sharing it.  Just loading vmm.ko will _set_ various CPU control registers (MSRs) related to VT-x (VT-x includes a host of optional features that can be enabled selectively), which might confuse some other VMM that had set these controls to different values.  (For bhyve, see the sys/amd64/vmm/intel/vmx.c vmx_init() routine run by vmm_init() in sys/amd64/vmm/vmm.c on module load.)

In summary, it is not safe to even load multiple VMMs at the same time, much less run VMs from different VMMs concurrently.  If FreeBSD does grow an API to support VM monitors the first iteration of it will probably fail attempts to load more than one VMM for this reason.
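To make the "setting control registers on load" point concrete, here is a rough sketch of the kind of per-CPU setup a VT-x VMM performs when it loads. This is an illustration only, with locally defined constants taken from the Intel SDM; it is not the actual vmx_init()/vmx_enable() code:

/*
 * Illustration of the per-CPU state a VT-x VMM claims at module load
 * time; not the actual vmx_init()/vmx_enable() code.
 */
#include <sys/param.h>
#include <machine/cpufunc.h>	/* rdmsr(), wrmsr(), rcr4(), load_cr4() */
#include <machine/specialreg.h>	/* CR4_VMXE */

#define	MSR_FEATURE_CONTROL	0x03a	/* IA32_FEATURE_CONTROL (Intel SDM) */
#define	FC_LOCK			0x01	/* bit 0: lock the MSR */
#define	FC_VMX_OUTSIDE_SMX	0x04	/* bit 2: allow VMXON outside SMX */

static void
vmx_enable_sketch(void *arg __unused)
{
	uint64_t fc;

	/*
	 * Lock VMX on in IA32_FEATURE_CONTROL if the firmware left it
	 * unlocked.  Another VMM (e.g. VirtualBox) may already have done
	 * this with different expectations about the rest of the VT-x
	 * configuration.
	 */
	fc = rdmsr(MSR_FEATURE_CONTROL);
	if ((fc & FC_LOCK) == 0) {
		fc |= FC_LOCK | FC_VMX_OUTSIDE_SMX;
		wrmsr(MSR_FEATURE_CONTROL, fc);
	}

	/* Enable VMX in CR4 and enter VMX operation on this CPU. */
	load_cr4(rcr4() | CR4_VMXE);
	/* ... followed by VMXON with a per-CPU VMXON region ... */
}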
Comment 6 martin 2015-12-15 20:53:27 UTC
(In reply to John Baldwin from comment #5)
Yeah, it does make sense that only one monitor should be running. I went through this when I was trying to figure out where the problem was, or at least what was causing it. 

Not that I understand what it does exactly, but I got the idea at least. 

For others who may experience the same issue, I created a thread on the forums and this PR here; maybe it will catch somebody's attention.
Comment 7 Ofloo 2016-04-03 14:26:19 UTC
FreeBSD 10.2-RELEASE-p14 #13 r296979

I have had the same issue: running kldunload vmm crashed my server, and I'm running VirtualBox as well.
Comment 8 John Baldwin freebsd_committer freebsd_triage 2016-04-04 17:19:12 UTC
To be clear, yes, this is not a supported configuration (having both vmm.ko and VirtualBox's module loaded at the same time, much less unloading one while the other is running).  I don't expect there to be a "fix" for this anytime soon, but if there is one it will consist of making vmm.ko fail to load if VirtualBox is loaded and vice versa.
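Purely as an illustration of that approach (hypothetical code with a made-up function name; nothing like this exists in the tree), a load-time guard could look roughly like this:

/*
 * Hypothetical guard at vmm.ko load time: if CR4.VMXE is already set,
 * some other hypervisor has VT-x enabled on this CPU, so refuse to
 * load.  Note this would not catch VirtualBox loaded but idle, since
 * that driver enables VT-x lazily around VM execution.
 */
#include <sys/param.h>
#include <sys/systm.h>		/* printf() */
#include <sys/errno.h>		/* EBUSY */
#include <machine/cpufunc.h>	/* rcr4() */
#include <machine/specialreg.h>	/* CR4_VMXE */

static int
vmm_claim_vtx(void)
{
	if (rcr4() & CR4_VMXE) {
		printf("vmm: VT-x already in use by another hypervisor\n");
		return (EBUSY);
	}
	return (0);
}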
Comment 9 Ofloo 2016-04-04 17:21:01 UTC
What about loading VirtualBox into bhyve kernels? I haven't tried it, but would that be acceptable?
Comment 10 martin 2016-04-04 18:35:42 UTC
From my point of view it was important to know why it crashed, and the deeper investigation showed where and why. That was important to me. 

A kernel-level API like the one on OS X would be an appreciated feature, but I get that the priority is elsewhere.
Comment 11 John Baldwin freebsd_committer freebsd_triage 2016-04-04 22:53:27 UTC
(In reply to nospam from comment #9)
Do you mean kldloading VirtualBox in an instance of FreeBSD running in a bhyve VM?  I think that will just not work, as bhyve doesn't let the guest see any of the VT-x capabilities of the CPU since it doesn't support nested VT-x.  The reverse might work if VirtualBox supports nested VT-x (I haven't tried).
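For anyone who wants to check this from inside a guest, here is a small stand-alone userland test (a hypothetical helper, not part of this PR) that reports whether the CPU advertises VT-x at all; in a bhyve guest the bit is not exposed, which is why a nested VMM cannot work there:

/* vmxcheck.c: report whether CPUID advertises VT-x (CPUID.1:ECX bit 5).
 * Build with: cc -o vmxcheck vmxcheck.c */
#include <cpuid.h>
#include <stdio.h>

int
main(void)
{
	unsigned int eax, ebx, ecx, edx;

	if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
		fprintf(stderr, "CPUID leaf 1 not supported\n");
		return (1);
	}
	printf("VT-x (VMX) advertised: %s\n",
	    (ecx & (1u << 5)) ? "yes" : "no");
	return (0);
}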
Comment 12 Ofloo 2016-04-05 08:17:11 UTC
No, I've tried the reverse: when I wanted to play with bhyve, I installed VirtualBox to do so, but that didn't work at all.