I have setups where I use gvinum to divide a partition or raw disk into
smaller concat devices under /dev/gvinum, which I subsequently run HAST on
to replicate their contents to a central ZFS server for backup purposes.
The HAST devices get a filesystem, and each contains a single jail. I have
observed this issue on a wide range of hardware and FreeBSD versions (ever
since 8.0 or earlier). When there is heavy I/O on those volumes, the system
that holds the primary role simply freezes. The secondary shows no issues.
In my production environment the secondary runs on top of ZFS, but I have
also observed this in a simple setup where both primary and secondary are
on top of gvinum.

How-To-Repeat:

1. Create a VM
==============

Created an 8GB dynamically allocated disk and set up a VM with all default
settings and two network interfaces: the first interface is bridged to my
LAN and configured by DHCP; the second is an internal network. Install
FreeBSD 8.4 from the disc1 ISO. Use the entire 8GB disk to create a slice,
and install these partitions:

# /dev/ad0s1:
8 partitions:
#        size   offset    fstype   [fsize bsize bps/cpg]
  a:  1466368        0    4.2BSD        0     0     0
  b:   880600  1466368      swap
  c: 16777089        0    unused        0     0        # "raw" part, don't edit
  d:  1048576  2346968    4.2BSD        0     0     0
  e:  1048576  3395544    4.2BSD        0     0     0
  f:  8333312  4444120    4.2BSD        0     0     0
  g:  3999657 12777432    4.2BSD        0     0     0

ad0s1g will be the gvinum partition. The fstab is as follows:

# Device      Mountpoint  FStype  Options    Dump  Pass#
/dev/ad0s1b   none        swap    sw         0     0
/dev/ad0s1a   /           ufs     rw         1     1
/dev/ad0s1e   /tmp        ufs     rw         2     2
/dev/ad0s1f   /usr        ufs     rw         2     2
/dev/ad0s1d   /var        ufs     rw         2     2
/dev/acd0     /cdrom      cd9660  ro,noauto  0     0

The /etc/rc.conf of this VM is:

# Created: Fri May  9 20:21:48 2014
# Enable network daemons for user convenience.
# Please make all changes to this file, not to /etc/defaults/rc.conf.
# This file now contains just the overrides from /etc/defaults/rc.conf.
hostname="hosta"
ifconfig_em0="dhcp"
ifconfig_em1="inet 192.168.13.1 netmask 255.255.255.0"

The /etc/hosts is:

::1           localhost localhost.my.domain
127.0.0.1     localhost localhost.my.domain
192.168.13.1  hosta
192.168.13.2  hostb

I then pulled the source from svn and compiled the kernel and world:

# cd /usr
# svn checkout https://svn.eu0.freebsd.org/base/release/8.4.0 src
# cd src
# make buildworld && make buildkernel
# make installkernel
# mergemaster -p -iFU
# make installworld
# mergemaster -iFU
# reboot

Then set up gvinum with 'gvinum create' and the following config:

drive store device /dev/ad0s1g
volume demo
plex name demo.p0 org concat vol demo
sd name demo.p0.s0 drive store len 2097152s driveoffset 265s plex demo.p0 plexoffset 0s

Finally, install bonnie (pkg_add -r bonnie).

2. Clone the VM
===============

Cloned the VM to create hostb, with rc.conf modified as such:

# Created: Fri May  9 20:21:48 2014
# Enable network daemons for user convenience.
# Please make all changes to this file, not to /etc/defaults/rc.conf.
# This file now contains just the overrides from /etc/defaults/rc.conf.
hostname="hostb"
ifconfig_em0="dhcp"
ifconfig_em1="inet 192.168.13.2 netmask 255.255.255.0"

3. Configure hastd
==================

Created /etc/hast.conf on both VMs as follows:

resource demo {
        on hosta {
                remote 192.168.13.2
                local /dev/gvinum/demo
        }
        on hostb {
                remote 192.168.13.1
                local /dev/gvinum/demo
        }
}

Start and initialize HAST on hosta:

root@hosta# /etc/rc.d/hastd onestart
root@hosta# hastctl create demo
root@hosta# hastctl role primary demo
root@hosta# newfs -U /dev/hast/demo

and on hostb:

root@hostb# /etc/rc.d/hastd onestart
root@hostb# hastctl create demo
root@hostb# hastctl role secondary demo

4. Mount and start bonnie
=========================

root@hosta# mkdir /mnt/demo
root@hosta# mount /dev/hast/demo /mnt/demo
root@hosta# cd /mnt/demo
root@hosta# bonnie & bonnie & bonnie & bonnie &

Wait till the system freezes.
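As a side note on the gvinum config above: the subdisk length of 2097152
sectors works out to 1 GiB at 512-byte sectors, which a quick shell check
confirms (the variable names here are illustrative, not from the report):

```sh
# Sanity-check the subdisk size from the gvinum config:
# 2097152 sectors at 512 bytes/sector.
sectors=2097152
bytes=$((sectors * 512))
echo "$((bytes / 1024 / 1024)) MiB"   # prints "1024 MiB", i.e. 1 GiB
```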
I normally have gstat running on another pty, but that shouldn't make a difference.
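For reference, the sort of monitoring I mean is along these lines (hastctl
and gstat are the stock FreeBSD tools; the filter pattern and resource name
follow the setup above, but the exact invocation is an assumption):

```sh
# On another pty while bonnie runs: show the HAST resource state,
# then take one batch snapshot of the gvinum/HAST GEOM providers.
hastctl status demo
gstat -b -f 'gvinum|hast'
```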
batch change:

For bugs that match the following
-  Status Is In progress AND
-  Untouched since 2018-01-01. AND
-  Affects Base System OR Documentation

DO:

Reset to open status.

Note:
I did a quick pass but if you are getting this email it might be worthwhile
to double check to see if this bug ought to be closed.
As the original reporter, I do not know whether this bug still applies. I no
longer run the described setup, so the bug is no longer relevant to me.
Whether that means this bug can be closed, I cannot judge. If I am the only
one who ever ran into this bug or had any interest in it, it might as well
be closed (although the crash behaviour may still be present).
This bug was likely fixed in the pass over GEOM fixing inconsistencies that
led to crashes of this type.