Patch for bug 41934

chapter.sgml.fixes (-11 / +11 lines)
@@ -55,7 +55,7 @@
 
 
     <para>Disks are getting bigger, but so are data storage requirements.
-      Often you ill find you want a file system that is bigger than the disks
+      Often you will find you want a file system that is bigger than the disks
       you have available.  Admittedly, this problem is not as acute as it was
       ten years ago, but it still exists.  Some systems have solved this by
       creating an abstract device which stores its data on a number of disks.</para>
@@ -70,7 +70,7 @@
       disks.</para>
 
     <para>Current disk drives can transfer data sequentially at up to
-      30 MB/s, but this value is of little importance in an environment
+      70 MB/s, but this value is of little importance in an environment
       where many independent processes access a drive, where they may
       achieve only a fraction of these values.  In such cases it is more
       interesting to view the problem from the viewpoint of the disk
@@ -85,10 +85,10 @@
 
     <para><anchor id="vinum-latency">
       Consider a typical transfer of about 10 kB: the current generation of
-      high-performance disks can position the heads in an average of 6 ms.  The
-      fastest drives spin at 10,000 rpm, so the average rotational latency
-      (half a revolution) is 3 ms.  At 30 MB/s, the transfer itself takes about
-      350 &mu;s, almost nothing compared to the positioning time.  In such a
+      high-performance disks can position the heads in an average of 3.5 ms.  The
+      fastest drives spin at 15,000 rpm, so the average rotational latency
+      (half a revolution) is 2 ms.  At 70 MB/s, the transfer itself takes about
+      150 &mu;s, almost nothing compared to the positioning time.  In such a
       case, the effective  transfer rate drops to a little over 1 MB/s and is
       clearly highly dependent on the transfer size.</para>
 
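As a sanity check on the arithmetic in the revised paragraph, the figures can be recomputed in a few lines (a sketch assuming the patch's numbers: 3.5 ms average seek, a 15,000 rpm spindle, 70 MB/s sequential rate; note that half a revolution at 15,000 rpm works out to 2 ms, and 10 kB at 70 MB/s to roughly 143 µs):

```python
# Back-of-the-envelope check of the latency example above. All input
# figures are taken from the revised text, not measured from hardware.

transfer_bytes = 10 * 1000            # the 10 kB transfer in the example
seek_s = 3.5e-3                       # average head positioning time
half_rev_s = 0.5 * 60 / 15_000        # half a revolution at 15,000 rpm = 2 ms
xfer_s = transfer_bytes / 70e6        # time to move 10 kB at 70 MB/s

total_s = seek_s + half_rev_s + xfer_s
effective_bps = transfer_bytes / total_s

print(f"rotational latency: {half_rev_s * 1e3:.2f} ms")
print(f"transfer time:      {xfer_s * 1e6:.0f} us")
print(f"effective rate:     {effective_bps / 1e6:.2f} MB/s")
```

With these inputs the positioning time (about 5.5 ms) dwarfs the transfer time, which is exactly the paragraph's point: the effective rate for small transfers stays under 2 MB/s regardless of the drive's 70 MB/s peak.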
@@ -151,7 +151,7 @@
       For example, the first 256 sectors may be stored on the first disk, the
       next 256 sectors on the next disk and so on.  After filling the last
       disk, the process repeats until the disks are full.  This mapping is called
-      <emphasis>striping</emphasis> or RAID-0.
+      <emphasis>striping</emphasis> or <acronym>RAID-0</acronym>.
 
     <footnote>
       <indexterm>
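The sector-to-disk mapping this hunk describes can be sketched in a few lines of Python (illustrative only, not Vinum code; the 256-sector stripe size comes from the text, the four-disk count is a hypothetical):

```python
# RAID-0 (striping) address mapping as described in the text: consecutive
# 256-sector runs rotate across the disks, repeating until the disks fill.

STRIPE = 256   # sectors per stripe, as in the example in the text
DISKS = 4      # hypothetical number of disks in the array

def locate(sector):
    """Map a logical sector to (disk index, sector offset on that disk)."""
    stripe_no, offset = divmod(sector, STRIPE)
    disk = stripe_no % DISKS
    # Each full pass over the disks advances the per-disk offset by one stripe.
    local = (stripe_no // DISKS) * STRIPE + offset
    return disk, local

print(locate(0))      # (0, 0)    first stripe, first disk
print(locate(256))    # (1, 0)    next stripe lands on the next disk
print(locate(1024))   # (0, 256)  wraps back to disk 0, one stripe deeper
```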
@@ -250,7 +250,7 @@
 	</figure>
       </para>
 
-      <para>Compared to mirroring, RAID-5 has the advantage of requiring
+      <para>Compared to mirroring, <acronym>RAID-5</acronym> has the advantage of requiring
 	significantly less storage space.  Read access is similar to that of
 	striped organizations, but write access is significantly slower,
 	approximately 25% of the read performance.  If one drive fails, the array
@@ -470,7 +470,7 @@
 	    the system automatically assigns names derived from the plex name by
 	    adding the suffix <emphasis>.s</emphasis><emphasis>x</emphasis>, where
 	    <emphasis>x</emphasis> is the number of the subdisk in the plex.  Thus
-	    Vinum gives this subdisk the name <emphasis>myvol.p0.s0</emphasis></para>
+	    Vinum gives this subdisk the name <emphasis>myvol.p0.s0</emphasis>.</para>
 	</listitem>
       </itemizedlist>
 
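The automatic naming rule this hunk documents can be sketched as a one-line helper (illustrative; the function name is ours, not Vinum's — only the <volume>.p<plex>.s<subdisk> pattern comes from the text):

```python
# Vinum derives subdisk names from the plex name by appending ".s<x>",
# and plex names from the volume by appending ".p<n>", giving e.g. myvol.p0.s0.

def subdisk_name(volume, plex, subdisk):
    """Compose the automatic Vinum object name for a subdisk."""
    return f"{volume}.p{plex}.s{subdisk}"

print(subdisk_name("myvol", 0, 0))   # myvol.p0.s0
```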
@@ -736,8 +736,8 @@
       </listitem>
 
       <listitem>
-	<para>The directories <devicename>/dev/vinum/plex</devicename> and
-	  <devicename>/dev/vinum/sd</devicename>, 
+	<para>The directories <devicename>/dev/vinum/plex</devicename>,
+	  <devicename>/dev/vinum/sd</devicename>, and
 	  <devicename>/dev/vinum/rsd</devicename>, which contain block device
 	  nodes for each plex and block and character device nodes respectively 
 	  for each subdisk.</para>