Bug 166589

Summary: atacontrol(8) incorrectly treats RAID10 and 0+1 the same
Product: Base System
Reporter: landsidel.allen
Component: bin
Assignee: freebsd-bugs (Nobody) <bugs>
Status: Closed Overcome By Events
Severity: Affects Only Me
CC: ngie
Priority: Normal
Version: Unspecified
Hardware: Any
OS: Any

Description landsidel.allen 2012-04-02 20:20:02 UTC
Reference: http://www.freebsd.org/cgi/cvsweb.cgi/src/sbin/atacontrol/atacontrol.c?annotate=1.36.2.3

atacontrol.c commit on 25-Jan-2006, lines 413-427.

Code was added to allow creation of RAID0+1 arrays on the ATA controllers available at the time.  This code checks for the user-supplied strings "RAID0+1" and "RAID10" and treats them both the same.

As a result, RAID10 arrays cannot be created on devices that support both levels -- it is unknown whether any such devices are supported by atacontrol.

RAID10 and RAID0+1 are not the same thing.  The comment in the commit implies that the author thinks they are.

Fix: 

1. On controllers only supporting one of the two levels (0+1 or 10), display and expect the correct string.

2. Separate the two so that, if there are any controllers that do support both, the correct one can be chosen.

ata_ioc_raid_config in sys/sys/ata.h will also need to be modified to support both AR_RAID01 and AR_RAID10.
Comment 1 Alexander Motin freebsd_committer freebsd_triage 2013-01-15 02:46:02 UTC
There could be variants in terminology, but in fact, for most users
they are the same. If you have an opinion on why they should be treated
differently, please explain it.

-- 
Alexander Motin
Comment 2 landsidel.allen 2013-01-15 02:51:35 UTC
They are not variants in terminology; they are different RAID levels.  
RAID0+1 is two RAID-0 arrays, mirrored into a RAID-1.  If one of the 
disks fails, that entire RAID-0 is offline and must be rebuilt, and all 
redundancy is lost.  A RAID-10 is composed of N RAID-1 mirrors combined 
into a RAID-0.  If one disk fails, only that particular RAID-1 is 
degraded, and the redundancy of the others is maintained.

0+1 cannot survive two failed disks no matter how many are in the 
array.  10 can survive half the disks failing, if it's the right half.

This is something people who've never used more than 4 disks fail to 
grasp, but those of us with 6 (or many many more) know very well.

Comment 3 Alexander Motin freebsd_committer freebsd_triage 2013-01-15 08:12:14 UTC
That is clear, and I guessed you meant it, but why do you insist that
such a RAID0+1 variant should even exist if it has no benefits over
RAID10, and why should it be explicitly available to the user?

Comment 4 landsidel.allen 2013-01-15 15:28:53 UTC
Most devices typically only support one level or the other, but not 
both.  I don't "insist that it should exist"; it *does* exist.  Both 
levels do, and they are not the same thing.

As for why it should be "available" to the user, I think that's a pretty 
silly question.  If their hardware supports one or both levels, they 
should be available to the user -- and called by their correct names.



Comment 5 Alexander Motin freebsd_committer freebsd_triage 2013-01-15 15:55:57 UTC
Their on-disk formats are identical. Even if the RAID BIOS supports
RAID0+1, there is no problem handling it as RAID10 at the OS level.
That gives better reliability without any downsides. I think there is a
much higher chance that an inexperienced user will choose RAID0+1 by
mistake than that an experienced user would choose it intentionally. Do
you know any reason why RAID0+1 can't be handled as RAID10?

Comment 6 landsidel.allen 2013-01-15 16:00:22 UTC
I don't know of a single RAID controller that supports both levels.  I 
know of many that support 0+1 and many that support 10.

If the controller supports 10 and you call it 0+1 in the software, the 
user is being lied to, and may incorrectly think their controller or 
FreeBSD does not support RAID-10.

If the controller supports 0+1 and you call it 10 in the software, the 
user is being lied to, and may incorrectly think their data is more 
protected than it really is.

Why so much pushback?  Since when did FreeBSD start trying to make 
users' decisions for them, rather than simply allowing them to choose 
for themselves amongst the options their hardware supports?

A better question:  Can you name one good reason why the RAID level, 
whatever it is, should be misrepresented to the user?

Comment 7 landsidel.allen 2013-01-15 16:03:16 UTC
I'm also extremely interested to hear how you intend to "handle it as 
RAID10 at the OS level" since that is, in fact, impossible.

If it's a RAID0+1 in the controller, then it's a RAID0+1. Period.  The 
OS can't do anything about it.  A single disk failure still knocks 
half the array offline (the entire failed RAID-0), and you are left with 
a functioning RAID-0 with no redundancy at all.

Comment 8 Alexander Motin freebsd_committer freebsd_triage 2013-01-15 16:20:56 UTC
On 15.01.2013 18:03, Allen Landsidel wrote:
> I'm also extremely interested to hear how you intend to "handle it as
> RAID10 at the OS level" since that is, in fact, impossible.

Easily!

> If it's a RAID0+1 in the controller, than it's a RAID0+1. Period.  The
> OS can't do anything about it.  A single disk failure is still knocking
> half the array offline (the entire failed RAID-0) and you are left with
> a functioning RAID-0 with no redundancy at all.

ataraid(8) in question (and its new alternative graid(8)) controls
software RAIDs. That means I can do anything I want in software, as
long as it fits into the existing on-disk metadata format. If the RAID
BIOS wants to believe that two failed disks out of four always mean a
failed array -- that is their decision, which I can't change. But after
the OS has booted, nothing will prevent me from accessing the
still-available data replicas.

Comment 9 landsidel.allen 2013-01-15 16:22:00 UTC
Your solution, then, is to require everyone to use software RAID on 
their hardware RAID controllers?

Comment 10 Alexander Motin freebsd_committer freebsd_triage 2013-01-15 16:25:14 UTC
At what point have we been talking about hardware RAID controllers?
ataraid(8) has never controlled hardware RAID controllers, only
Soft-/Fake-RAIDs implemented by board BIOSes during boot and by OS
drivers after that.

Comment 11 landsidel.allen 2013-01-15 16:26:43 UTC
Holy crap.

The PR is about hardware RAID controllers and their interface with 
atacontrol, not ataraid.

Comment 12 Alexander Motin freebsd_committer freebsd_triage 2013-01-15 16:35:43 UTC
Please, be my guest: show me where atacontrol(8) controls any hardware
RAID controller, or anything other than ataraid(4) at all.

Comment 13 landsidel.allen 2013-01-15 17:09:29 UTC
The atacontrol(8) man page and the Handbook page on RAID (19.4.2) both 
discuss (briefly) hardware RAID and say it is supported.

It seems you're calling all the southbridge controllers "software" 
RAID?  In my experience, that terminology is used to describe 
gmirror/ccd disks without a RAID controller or RAID BIOS.

In any case, the difference and PR still remain.

A 6-disk RAID-10 array ((1,2),(3,4),(5,6)) with failed disks 1 & 4 
(or even 1, 3 & 5) will boot and allow you to do your 'magic.'

A 6-disk RAID0+1 array ((1,2,3),(4,5,6)) with failed disks 1 & 4 
will not boot the OS.

Misrepresenting one as the other in the software is wrong.


Comment 14 Alexander Motin freebsd_committer freebsd_triage 2013-01-15 18:10:56 UTC
On 15.01.2013 19:09, Allen Landsidel wrote:
> The atacontrol(8) man page and handbook page on RAID (19.4.2) both
> discuss (briefly) hardware RAID and say it is supported.
> 
> It seems you're calling all the southbridge controllers "software"
> RAID?  That terminology in my experience is used to describe gmirror/ccd
> disks without a RAID controller or RAID BIOS.

Some people call southbridge RAID "FakeRAID", as a middle point between
hardware and purely software RAID. I just don't much like that word. From
the ataraid/graid perspective, all southbridge RAID "functions" are just a
metadata format specification that, if followed, will allow the BIOS to
boot the system from the array. There is no real hardware acceleration
in southbridge RAIDs. There are indeed some recent SATA chips from
Marvell and others that really do implement some RAID levels in
hardware, but they have nothing to do with atacontrol, and their volumes
look to the system like a usual disk. I haven't even seen documentation
for their control interfaces that would allow supporting them.

> In any case, the difference and PR still remain.
> 
> A 6 disk RAID-10 controller ((1,2),(3,4),(5,6)) with failed disks 1, & 4
> (or even 1,3 & 5) will boot and allow you to do your 'magic.'
> 
> A 6 disk RAID0+1 controller ((1,2,3),(4,5,6)) with failed disks 1 & 4
> will not boot the OS.
> 
> Misrepresenting one as the other in the software is wrong.

You may have some point on the boot side, but do you have reliable
information about which controllers support RAID0+1 and which RAID10?
There is often much more marketing and tradition in public papers than
real technical data.  Also, if a user gets a single failure in RAID10,
they should not feel much more comfortable than if it were RAID0+1, as
a second failure can still destroy the data. If a second failure happens
and the BIOS really implements RAID0+1 and is unable to boot, all that
is required is to replace the failed disks, boot from any FreeBSD
install disk, and run the rebuild from the command line.

Comment 15 landsidel.allen 2013-01-15 18:53:58 UTC
On 1/15/2013 13:10, Alexander Motin wrote:

> You may have some point from the boot side, but do you have reliable
> information about which controllers support RAID0+1 and which RAID10?

Not beyond what the techdocs say for a given card.  Is that a valid 
reason to present them as the same to the user?

> Also, if user got single failure in RAID10, it
> should not feel much more comfortable then if it would be RAID0+1, as
> second failure still can destroy the data

This is simply not true.  I currently have two 12-disk RAID-10 arrays.  
A failure of one disk (which has already happened) leaves ten other 
in-use disks that could potentially fail without causing data loss.  If 
that system were RAID0+1, after a single disk fails the chance that 
another disk failure will result in downtime and data loss is 100% -- 
not 9%.

RAID-10 is *much* safer than RAID0+1.  The more disks you add, the safer 
it gets.  The more disks you add to a 0+1, the *less* safe it gets.

It seems you still aren't really grasping the difference, regardless of 
HW vs. SW questions.

> all that required
> is replace failed disks, boot from any FreeBSD install disk and run
> rebuild from the command line

This strikes me as a comment from someone not experienced in working 
with colocated/remote systems.  Without an IPMI subsystem that can 
remotely mount disk images, you're talking minutes (or hours) of 
downtime while a support technician brings a bootable optical or usb 
device to the machine and sets up the KVM-over-IP.

Presenting RAID10 and RAID0+1 as the same thing is *wrong*.  They aren't 
the same.

I will leave it at that.  The project and maintainers can decide to fix 
the issue or not.  I've long since abandoned the machine that had that 
controller and have no vested interest any longer.
Comment 16 Alexander Motin freebsd_committer freebsd_triage 2013-01-15 19:37:55 UTC
On 15.01.2013 20:53, Allen Landsidel wrote:
> On 1/15/2013 13:10, Alexander Motin wrote:
>> You may have some point from the boot side, but do you have reliable
>> information about which controllers support RAID0+1 and which RAID10?
> 
> Not beyond what the techdocs say for a given card.  Is that a valid
> reason to present them as the same to the user?

I see no reason to implement crappy RAID0+1 in software, when I can do
RAID10, just because some unknown salesman said so.  As a result, I would
have to either lie about my code's capabilities, which I know for sure,
or lie about the RAID BIOS, which I know nothing about.

>> Also, if user got single failure in RAID10, it
>> should not feel much more comfortable then if it would be RAID0+1, as
>> second failure still can destroy the data
> 
> This is simply not true.  I currently have two 12-disk RAID-10 arrays. 
> A failure of one disk (which has already happened) leaves ten other
> in-use disks that could potentially fail without causing data loss.  If
> that system were RAID0+1, after a single disk fails the chance that
> another disk failure will result in downtime and data loss is 100% --
> not 9%.

That "100%" depends on how to calculate it. Since number of drives
reduced from 12 to 6, total failure rate from age may also reduce.

> RAID-10 is *much* safer than RAID0+1.  The more disks you add, the safer
> it gets.  The more disks you add to a 0+1, the *less* safe it gets.

Yes, it is safer; I am not challenging that. But with one disk already
down, there is a chance that one more failure will trash everything. It
is just not safe. It is not RAID6 or a triple mirror, where you can
quietly tolerate two failures.

>> all that required
>> is replace failed disks, boot from any FreeBSD install disk and run
>> rebuild from the command line
> 
> This strikes me as a comment from someone not experienced in working
> with colocated/remote systems.  Without an IPMI subsystem that can
> remotely mount disk images, you're talking minutes (or hours) of
> downtime while a support technician brings a bootable optical or usb
> device to the machine and sets up the KVM-over-IP.
> 
> Presenting RAID10 and RAID0+1 as the same thing is *wrong*.  They aren't
> the same.
> 
> I will leave it at that.  The project and maintainers can decide to fix
> the issue or not.  I've long since abandoned the machine that had that
> controller and have no vested interest any longer.

Agreed. :) ataraid is almost dead now, and I see no point in polishing
cosmetic things there.  In graid I've implemented the RAID10 algorithm and
use it in all cases, whether the RAID BIOS claims RAID10 or RAID0+1.  If
some BIOS can't boot after a second disk failure, that is bad, but the data
can still be restored if it is possible at all.

Thank you for explaining your position. Truth seems to be in the middle,
as always. :)

Comment 17 Enji Cooper freebsd_committer freebsd_triage 2015-11-10 13:48:40 UTC
atacontrol is no more after FreeBSD 9.x. Long live atacam and friends.