Not sure if this is a FreeBSD bug or a vboxdrv bug, but I'm reporting it here anyway.
When I have a VM running under VirtualBox and I suspend the system to S3, on resume the system starts okay, but just before switching from the tty back to X, vboxdrv causes a trap and the kernel drops the following messages:
Fatal trap 1: privileged instruction fault while in kernel mode
cpuid = 3; apic id = 03
instruction pointer     = 0x20:0xffffffff8317a114
stack pointer           = 0x28:0xfffffe009708b430
frame pointer           = 0x28:0xfffffe009708b450
code segment            = base 0x0, limit 0xfffff, type 0x1b
                        = DPL 0, pres 1, long 1, def32 0, gran 1
processor eflags        = interrupt enabled, resume, IOPL = 0
current process         = 40053 (VirtualBox)
[ thread pid 40053 tid 101804 ]
Stopped at      __stop_set_sysuninit_set+0xd4c4:        vmptrld (%rsp)
Tracing pid 40053 tid 101804 td 0xfffff8001f317580
__stop_set_sysuninit_set() at __stop_set_sysuninit_set+0xd4c4/frame 0xfffffe009708b458
__stop_set_sysuninit_set() at __stop_set_sysuninit_set+0xcdbd/frame 0xfffffe009708b480
__stop_set_sysuninit_set() at 0xffffffff831935f4/frame 0xfffffe009708b500
supdrvIOCtlFast() at supdrvIOCtlFast+0x9c/frame 0xfffffe009708b520
VBoxDrvFreeBSDCtl() at VBoxDrvFreeBSDCtl+0x4e/frame 0xfffffe009708b590
devfs_ioctl() at devfs_ioctl+0xad/frame 0xfffffe009708b5e0
VOP_IOCTL_APV() at VOP_IOCTL_APV+0x82/frame 0xfffffe009708b610
vn_ioctl() at vn_ioctl+0x1a4/frame 0xfffffe009708b720
devfs_ioctl_f() at devfs_ioctl_f+0x1f/frame 0xfffffe009708b740
kern_ioctl() at kern_ioctl+0x26d/frame 0xfffffe009708b7b0
sys_ioctl() at sys_ioctl+0x15a/frame 0xfffffe009708b880
amd64_syscall() at amd64_syscall+0x369/frame 0xfffffe009708b9b0
fast_syscall_common() at fast_syscall_common+0x101/frame 0xfffffe009708b9b0
syscall (54, FreeBSD ELF64, sys_ioctl), rip = 0x800571eaa, rsp = 0x7fffdeaeee38, rbp = 0x7fffdeaee..
FreeBSD doesn't have a way to let external hypervisors like vbox work across suspend and resume. I did add a hook for bhyve in https://svnweb.freebsd.org/base?view=revision&revision=259782. We would need something similar. The same issue applies to allowing multiple hypervisors to be active at the same time (e.g. currently you can't run both bhyve and vbox at once). I had been thinking of adding a kind of hypervisor framework that would let each hypervisor allocate its VMX region and then associate it with a given process, so that the kernel could do the right vmxon/vmxoff during context switch. Having that would also let us handle suspend/resume more cleanly for arbitrary hypervisors.
One thing you might be able to do for now is change the vbox driver to set the same vmm_resume_p pointer that bhyve's vmm.ko sets during MOD_LOAD, pointing it at a function that re-invokes vmxon with the right address on each CPU during resume. Both bhyve and vbox should probably also fail in MOD_LOAD if that pointer is already non-NULL, which would enforce that only one of them can be used at a time.