RAID Technology

RAID Defined
RAID stands for redundant array of independent (originally "inexpensive")
disks. The purpose of this type of technology is to provide a
fault-tolerant method of keeping data intact. RAID is also designed to
provide faster access to data. The early RAID levels are 0, 1, and 5;
later levels include 6 and the nested levels 0+1, 10, 30, and 50. RAID
levels 1 and 5 are the most common.

The most effective way to build an array is to buy drives of the same
capacity from the same manufacturer. You can mix drives from different
manufacturers, but make sure they have the same spindle speed (e.g.,
7,200 RPM or 10,000 RPM) to minimize the risk of performance mismatches
in the array. It is also possible to use drives of varying capacities,
but each member will contribute only as much space as the smallest drive
in the array, so the extra capacity of the larger drives is wasted.
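
The capacity rule above can be sketched in a few lines. This is an
illustrative calculation, not vendor code; the function name and the
example drive sizes are my own choices.

```python
# Hypothetical sketch: usable capacity of an array built from
# mixed-size drives. Every member is treated as if it were the size
# of the smallest drive, as described above.

def usable_capacity_gb(drive_sizes_gb, level):
    """Return usable space in GB for a few common RAID levels."""
    n = len(drive_sizes_gb)
    smallest = min(drive_sizes_gb)   # larger drives are truncated to this
    if level == 0:                   # striping, no redundancy
        return n * smallest
    if level == 1:                   # mirroring
        return smallest
    if level == 5:                   # one drive's worth lost to parity
        return (n - 1) * smallest
    if level == 6:                   # two drives' worth lost to parity
        return (n - 2) * smallest
    raise ValueError("unsupported level")

# Three drives: 500 GB, 500 GB, and 750 GB.
drives = [500, 500, 750]
print(usable_capacity_gb(drives, 0))  # 1500 -- the 750 GB drive acts as 500
print(usable_capacity_gb(drives, 5))  # 1000
```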

RAID Level 0
RAID 0 utilizes striping without mirroring or parity, which makes it
non-redundant. A RAID 0 array requires a minimum of two drives. Data
is split into chunks that are written (striped) across all the drives
in the array. If one drive fails, the entire array fails. The single
advantage of this level is sheer performance.
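
Striping can be pictured as dealing chunks of data round-robin across
the members. A toy sketch, with an arbitrary chunk size chosen for
readability (real arrays typically use stripe units of 16 KB-256 KB):

```python
# Illustrative RAID 0 striping: data is split into fixed-size chunks
# distributed round-robin across the member drives.

CHUNK = 4          # bytes per stripe unit (tiny, for the example)
DRIVES = 2         # minimum for RAID 0

def stripe(data, drives=DRIVES, chunk=CHUNK):
    """Return one byte string per drive."""
    out = [bytearray() for _ in range(drives)]
    for i in range(0, len(data), chunk):
        out[(i // chunk) % drives] += data[i:i + chunk]
    return [bytes(b) for b in out]

d0, d1 = stripe(b"ABCDEFGHIJKLMNOP")
print(d0)  # b'ABCDIJKL' -- chunks 0 and 2
print(d1)  # b'EFGHMNOP' -- chunks 1 and 3
# Losing either drive loses every other chunk: the array is not redundant.
```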

RAID Level 1
RAID 1 utilizes mirroring. Data is written to a pair of drives, and
each drive in the array is a duplicate of the other. No striping is
used. If one drive fails, the data can be copied directly to the
replacement drive to restore the array. Mirroring can roughly double
read performance, since reads can be served from either drive; write
performance, however, stays close to that of a single drive, because
every write must go to both drives.
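
The read/write asymmetry can be modeled with a toy class (a sketch of
the behavior, not a driver): writes are duplicated to both members,
while reads alternate between them.

```python
# Toy model of RAID 1: every write lands on both drives; reads can be
# served from either member, here alternated round-robin.

class Mirror:
    def __init__(self):
        self.drives = [{}, {}]   # two identical block maps
        self._next = 0

    def write(self, block, data):
        for d in self.drives:    # both copies are written (in parallel
            d[block] = data      # on real hardware, so write cost ~ 1 drive)

    def read(self, block):
        d = self.drives[self._next]   # alternate reads between members
        self._next ^= 1
        return d[block]

m = Mirror()
m.write(0, b"payload")
print(m.read(0))  # b'payload' -- served from drive 0
print(m.read(0))  # b'payload' -- same block, served from drive 1
```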

RAID Level 0+1
RAID 0+1 combines levels 0 and 1 into a striped, mirrored array. (The
term is often used interchangeably with RAID 10, though strictly the
two differ: 0+1 mirrors a pair of stripe sets, while 10 stripes across
mirrored pairs.) Data is written to pairs of drives at one time, and
performance falls between levels 0 and 1. A minimum of four drives is
needed (two pairs of two drives). I have often recommended RAID 0+1
because of its strong fault tolerance, but it is one of the most
expensive options, requiring double the drives needed in a RAID 0 or
RAID 1 array by itself. However, with the adoption of IDE and SATA
RAID controllers, this option is becoming more economical.

RAID Level 2
RAID 2 is an early design, a precursor to RAID 3, that stripes data at
the bit level and stores Hamming-code ECC on dedicated drives.
However, modern drives embed their own ECC correction, making RAID 2
an unnecessary choice for an array.

RAID Level 3
RAID 3 stripes data at the byte level, with ECC correction handled by
the drives themselves rather than by the array. A minimum of three
drives is needed, with a single drive dedicated to parity information.
If a drive fails, its data is recovered by computing the exclusive OR
(XOR) of the information on the surviving drives. This type of array
excels at long sequential transfers. A RAID 3 array can survive one
drive failure: if the parity drive goes bad, the RAID controller can
rebuild it using the existing data drives, and if a data drive goes
bad, the controller can rebuild it using the remaining data drives and
the dedicated parity drive.
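
The XOR recovery described above can be demonstrated directly: the
parity block is the XOR of the data blocks, and XOR-ing the parity
with the surviving data reproduces the missing block exactly. A
minimal sketch with three toy "drives":

```python
# XOR parity as used by RAID 3/4/5: parity = D0 ^ D1 ^ D2, so any one
# missing block equals the XOR of everything that survives.
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    return bytes(reduce(lambda a, b: [x ^ y for x, y in zip(a, b)], blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]   # three data drives
parity = xor_blocks(data)            # stored on the parity drive

# Drive 1 fails; recover its block from the survivors plus parity.
recovered = xor_blocks([data[0], data[2], parity])
print(recovered == data[1])  # True
```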

RAID Level 4
RAID 4 is the same as RAID 3, except that larger, block-sized stripes
are used. This allows independent reads from any drive in the array
except the parity drive. Because the dedicated parity drive becomes a
bottleneck on writes, and RAID 5 distributes parity across all drives
instead, RAID 4 is rarely used.

RAID Level 5
RAID 5 is one of the most popular methods for storing redundant data.
This type of array requires a minimum of three drives. It involves
striping data across the drives with distributed parity. In most
cases, a RAID 5 array supports "hot-swappable" hard drives, and the
RAID controller can rebuild a failed drive onto a "hot spare" while
the array stays online. A RAID 5 array can survive one drive failure.
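
"Distributed parity" means the drive holding the parity block rotates
from stripe to stripe, so no single drive becomes a write bottleneck.
The rotation formula below is one common layout style; real
controllers vary, so treat this as an illustrative assumption.

```python
# Sketch of RAID 5 parity rotation: for each stripe, a different drive
# holds the parity block (one common layout; controllers differ).

def parity_drive(stripe_index, n_drives):
    """Which drive holds parity for a given stripe."""
    return (n_drives - 1 - stripe_index) % n_drives

for s in range(4):
    print(f"stripe {s}: parity on drive {parity_drive(s, 3)}")
# stripe 0: parity on drive 2
# stripe 1: parity on drive 1
# stripe 2: parity on drive 0
# stripe 3: parity on drive 2
```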

In smaller RAID 5 implementations, the hard drive space lost to parity
is more noticeable, since 1/X of the raw capacity goes to parity (X
being the number of drives in the array). The bigger a RAID 5 array,
the smaller the fraction of space lost to parity. However, in my
experience, arrays larger than four drives begin to suffer on writes
due to flooding of the SCSI bus.
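
The 1/X arithmetic is quick to verify:

```python
# The fraction of raw capacity lost to RAID 5 parity is 1/X for an
# X-drive array, so it shrinks as the array grows.

def parity_fraction(n_drives):
    """Fraction of raw space consumed by single parity."""
    return 1 / n_drives

for n in (3, 4, 8):
    print(f"{n} drives: {parity_fraction(n):.1%} lost to parity")
# 3 drives: 33.3% lost to parity
# 4 drives: 25.0% lost to parity
# 8 drives: 12.5% lost to parity
```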

RAID Level 6
RAID 6 is similar to RAID 5, except two levels of parity are used. As
a result, 2/X of the total raw space is dedicated to parity. A RAID 6
array can survive up to two drive failures.

RAID Level 10
As mentioned earlier, RAID 10 combines RAID levels 1 and 0, striping
data across mirrored pairs. A RAID 10 array can survive multiple
drive failures, as long as no mirrored pair loses both of its drives.
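
The failure rule can be expressed as a one-line check. The four-drive
pair layout here is a hypothetical example:

```python
# A stripe of mirrors survives any set of failures that leaves at
# least one working drive in every mirrored pair.

def survives(pairs, failed):
    """True if no mirrored pair has lost both members."""
    return all(not set(pair) <= failed for pair in pairs)

pairs = [("d0", "d1"), ("d2", "d3")]      # two mirrored pairs, striped
print(survives(pairs, {"d0"}))            # True  -- d1 still mirrors d0
print(survives(pairs, {"d0", "d2"}))      # True  -- one loss per pair
print(survives(pairs, {"d0", "d1"}))      # False -- a whole pair is gone
```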

RAID Level 30
RAID 30 is a combination of RAID levels 3 and 0. Data is striped
across two (or more) separate RAID 3 arrays. A RAID 30 array can
survive one drive failure in each RAID 3 sub-array.

RAID Level 50
RAID 50 is a combination of RAID levels 5 and 0. Data is striped
across two (or more) separate RAID 5 arrays. A RAID 50 array can
survive one drive failure in each RAID 5 sub-array.

Some RAID Level Comparisons
RAID levels 0 and 0+1 (10) are the fastest. RAID 1 is fairly fast,
with read speeds approaching double that of a single drive, but write
speeds closer to that of a single drive. RAID 3, RAID 5, and RAID 6
lose performance when writing data because of the parity calculations
required. The write performance of a RAID 5 array is between 1/3 and
3/5 that of a RAID 1 array. To regain some of the lost performance,
some companies set up a RAID 30 or RAID 50 array instead.

Since RAID arrays can have limited performance, many SCSI RAID
controllers come with the option to install RAM (or have some
integrated). When setting up the array in the controller's BIOS,
there is usually an option for cached I/O, which buffers as much data
as there is installed RAM. This helps keep the data stream as smooth
as possible; if the files being copied are larger than the buffer,
performance will suffer.

Implementing a software RAID (vs. hardware RAID) can be a challenging
decision, as the operating system and CPU have to provide the
additional processing power. However, as disk subsystems such as
Ultra160, Ultra320, SATA I, and SATA II have delivered improved data
transfer rates and rotational speeds of 10,000 RPM or more, a
software implementation should run fairly smoothly.