Bug 18754

Summary: Vinum: reviving RAID5 volume corrupts data
Product: Base System
Reporter: Thomas Faehnle <tf>
Component: kern
Assignee: Greg Lehey <grog>
Status: Closed
Resolution: FIXED
Severity: Affects Only Me
Priority: Normal
Version: 4.0-STABLE
Hardware: Any
OS: Any

Description Thomas Faehnle 2000-05-22 21:10:03 UTC
	Reviving a subdisk that is part of a RAID5 volume and
	simultaneously accessing said volume leads to data
	corruption.

	This occurs no matter whether the volume is accessed via a
	filesystem or via the raw /dev/vinum/<whatever> device.

Fix: 

This is no longer repeatable.  I suspect it's related to a number of
race conditions in the RAID-5 code which have since been fixed.
Please upgrade to 4-STABLE and try again.
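
The kind of race suspected here can be illustrated outside the kernel.
Below is a minimal Python sketch (an assumed simplification for
illustration, not vinum's actual code): one RAID-5 stripe with two data
blocks and a parity block, where a reviver reconstructs the stale
subdisk from the surviving block and parity, but a concurrent writer
updates both in between the reviver's two reads.

```python
# Sketch of an unserialized RAID-5 revive racing a write (hypothetical
# simplification, not vinum source). Stripe layout: parity = d0 XOR d1.

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# On-disk state: d0 is stale (its subdisk is being revived),
# d1 and parity are valid and consistent.
d0_true = b"\xAA" * 4            # data d0 should hold after the revive
d1      = b"\x55" * 4
parity  = xor(d0_true, d1)       # consistent parity

# Reviver, step 1: read the surviving data block.
revive_d1 = d1

# Concurrent writer: update d1 and recompute parity
# (parity := parity ^ old_d1 ^ new_d1).
new_d1 = b"\x0F" * 4
parity = xor(xor(parity, d1), new_d1)
d1 = new_d1

# Reviver, step 2: read parity -- now the *new* parity -- and
# reconstruct d0 from a stale d1 and fresh parity.
revived_d0 = xor(revive_d1, parity)

print(revived_d0 == d0_true)     # False: the revived subdisk is garbage
```

Serializing revive I/O against normal volume I/O (or re-reading the
stripe under a lock) closes this window, which is consistent with the
fix note above.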
How-To-Repeat: 	
	Given the following vinum configuration
	
	,--------------------
	| vinum -> l
	| 3 drives:
	| D d0                    State: up       Device /dev/da0s2e      
	| 					Avail: 15439/15539 MB (99%)
	| D d1                    State: up       Device /dev/da1s2e
	| 					Avail: 15439/15539 MB (99%)
	| D d2                    State: up       Device /dev/da2s2e
	| 					Avail: 15439/15539 MB (99%)
	| 1 volumes:
	| V raid                  State: up       Plexes:       1 Size:        200 MB
	| 
	| 1 plexes:
	| P raid.p0            R5 State: up       Subdisks:     3 Size:        200 MB
	| 
	| 3 subdisks:
	| S raid.p0.s0            State: up       PO:        0  B Size:        100 MB
	| S raid.p0.s1            State: up       PO:      512 kB Size:        100 MB
	| S raid.p0.s2            State: up       PO:     1024 kB Size:        100 MB
	`--------------------
	
	Create a filesystem on the vinum volume:
	
	,--------------------
	| bunsen:~# newfs -v /dev/vinum/raid 
	| /dev/vinum/raid:        409600 sectors in 100 cylinders of 1 tracks, 4096 sectors
	|         200.0MB in 7 cyl groups (16 c/g, 32.00MB/g, 7168 i/g)
	| super-block backups (for fsck -b #) at:
	|  32, 65568, 131104, 196640, 262176, 327712, 393248
	`--------------------
	
	In another shell, arrange for one subdisk to get revived (while the
	newfs above is still running):
	
	,--------------------
	| vinum -> stop -f raid.p0.s0
	| vinum -> start raid.p0.s0
	| Reviving raid.p0.s0 in the background
	| vinum[335]: reviving raid.p0.s0
	| vinum -> ls
	| S raid.p0.s0            State: R 66%    PO:        0  B Size:        100 MB
	| S raid.p0.s1            State: up       PO:      512 kB Size:        100 MB
	| S raid.p0.s2            State: up       PO:     1024 kB Size:        100 MB
	`--------------------
	
	fsck the volume:
	
	,--------------------
	| bunsen:~# fsck /dev/vinum/raid
	| ** /dev/vinum/raid
	| BAD SUPER BLOCK: VALUES IN SUPER BLOCK DISAGREE WITH THOSE IN FIRST ALTERNATE
	| /dev/vinum/raid: NOT LABELED AS A BSD FILE SYSTEM (unused)
	`--------------------
Comment 1 dan 2000-05-23 05:42:19 UTC
Responsible Changed
From-To: freebsd-bugs->grog

Vinum is Greg's territory.
Comment 2 Greg Lehey 2001-05-08 01:11:27 UTC
State Changed
From-To: open->closed

Bug missing, presumed dead.