Bug 250617 - emulators/virtualbox-ose 5.2.44_4 and zvol fails
Status: New
Alias: None
Product: Ports & Packages
Classification: Unclassified
Component: Individual Port(s)
Version: Latest
Hardware: Any
OS: Any
Importance: --- Affects Only Me
Assignee: vbox (Nobody)
Depends on:
Reported: 2020-10-25 23:09 UTC by Adriaan de Groot
Modified: 2021-01-04 15:13 UTC
CC List: 3 users

See Also:
bugzilla: maintainer-feedback? (vbox)


Description Adriaan de Groot freebsd_committer 2020-10-25 23:09:21 UTC
I had created a bunch of VMs in VirtualBox (5.2.44_3 and before). Most of them have a zvol as backing store, and the .vmdk points to the flat volume as disk. Most of the VMs have some kind of Linux installed (Debian, Arch, or openSUSE derivatives). One has FreeBSD 13-CURRENT installed in it.
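For reference, the backing-store arrangement described above can be reproduced roughly as follows. This is a sketch: the pool/dataset names (`tank/vm/vm0`), volume size, and file paths are examples, not taken from the report; `VBoxManage internalcommands createrawvmdk` is the stock way to make a .vmdk descriptor point at a raw device.

```shell
# Create a 20 GB zvol to serve as the VM's disk (pool/dataset names are examples)
zfs create -V 20G tank/vm/vm0

# Wrap the raw zvol device node in a .vmdk descriptor that VirtualBox can attach
VBoxManage internalcommands createrawvmdk \
    -filename ~/VirtualBox/vm0.vmdk \
    -rawdisk /dev/zvol/tank/vm/vm0
```

The resulting vm0.vmdk contains no data itself; it just forwards guest I/O to the zvol, which is why host-side changes in how VirtualBox performs I/O can affect such VMs specifically.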

After updating to virtualbox-ose 5.2.44_4, all the Linux VMs fail to start: during startup there's a ton of I/O errors *in the guest* and after a bit VBox comes up with a dialog that there's been an error in updating the cache and the VM is paused. The FreeBSD VM still booted normally.

I booted a Linux install ISO in one of my existing VMs: that boots successfully, from the virtual CD. Then I tried to read the disk attached to the VM, with `dd`: mostly I/O errors.

Accessing the virtual disk from the host system through FUSE and e2fsprogs works: it shows the actual filesystem is not damaged and there's no physical I/O problem.
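One way to perform that host-side check (a sketch; the zvol partition device name is an example, `e2fsck` comes from sysutils/e2fsprogs, and the FUSE mount assumes sysutils/fusefs-ext2 is installed):

```shell
# Read-only consistency check of the ext4 filesystem on the zvol's first partition
e2fsck -fn /dev/zvol/tank/vm/vm0p1

# Or mount it read-only via FUSE and inspect the files directly
fuse-ext2 -o ro /dev/zvol/tank/vm/vm0p1 /mnt/guest
ls /mnt/guest
umount /mnt/guest
```

If both succeed while the guest sees I/O errors on the same blocks, the fault lies in the emulated-disk path, not in the zvol or the filesystem.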

Downgrading virtualbox-ose to 5.2.44_3 restores all my Linux VMs to "working" state.
Comment 1 Gleb Popov freebsd_committer 2020-10-26 07:02:53 UTC
Sorry to hear that. If you are using SATA controller in your VM, do you have "Use Host I/O Cache" option disabled? If not, try disabling it and see if that helps.
Comment 2 Adriaan de Groot freebsd_committer 2020-10-26 12:12:26 UTC
VM Settings -> Storage -> Controller: SATA .. it has type "AHCI", port count 3, and Use Host I/O cache is not checked. Start machine, I/O errors out the wazoo and VM fails to complete boot.

Check the box: now it *does* boot.


I see now that all my SATA controllers had the option **off**, while the IDE controllers have it **on** (e.g. the CD drive). The one FreeBSD VM I mentioned turned out to be wildly different and not relevant to this PR.


Tried again with a different (Manjaro, this time) VM:

 - try to boot, it drops to an emergency root shell after a bunch of I/O errors, but the VM stays "up"; I've re-typed some of the error messages from dmesg below
 - switch on Host I/O cache for the SATA controller. It boots normally (slowly, but that's maybe because it's running fsck because of all the failed mounts / previous I/O errors)

Error messages from dmesg during failed boot:

ata1.00: failed command: READ DMA EXT
ata1.00: cmd 25/00:08:28:29:00/00:01:00:00:00/e0 tag 8 dma 135168
         res 41/10:09:28:29:00/00:01:00:00:00/e0 Emask 0x81 (invalid argument)
ata1.00: status: { DRDY ERR }
ata1.00: error: { IDNF }
ata1.00: configured for UDMA/133
sd 0:0:0:0: [sda] tag#8 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE cmd_age=0s
sd 0:0:0:0: [sda] tag#8 Sense Key : Illegal Request [current]
sd 0:0:0:0: [sda] tag#8 Add. Sense: Logical block address out of range
sd 0:0:0:0: [sda] tag#8 CDB: Read(10) 28 00 00 00 29 28 00 01 08 00
blk_update_request: I/O error, dev sda, sector 10536 op 0x0:(READ) flags 0x80000 phys_seg 33 prio class 0
ata: EH complete
EXT4-fs error (device sda1): __ext4_get_inode_loc:4387: inode #8: block 1061: comm mount: unable to read itable block


So perhaps this PR can be closed with "if you use zvols and VBox and **do not** have the storage option *Use Host I/O Cache* checked, then check it and things will be ok".
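For what it's worth, the same workaround can be applied without the GUI via `VBoxManage storagectl` (the VM and controller names below are examples; `--hostiocache` is the CLI switch matching the "Use Host I/O Cache" checkbox):

```shell
# Enable Host I/O Cache on the VM's SATA controller (run with the VM powered off)
VBoxManage storagectl "manjaro-vm" --name "SATA" --hostiocache on

# Verify the setting
VBoxManage showvminfo "manjaro-vm" | grep -i cache
```

This makes it easy to script the change across many zvol-backed VMs at once.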
Comment 3 tony01 2021-01-04 15:13:02 UTC
I have the same issue.

VM Settings -> Storage -> Controller: SATA .. 
and Use Host I/O cache is not checked. Start machine, I/O errors out the wazoo and VM fails to complete boot.

Check the box: now it *does* boot.

This is a bug. As I understand it, the Host I/O cache is normally left off for data safety, even though enabling it makes the VM faster.

I hope it gets fixed soon.