array site (where the S stands for spare). A four-disk array also effectively uses one disk for
parity, so it is referred to as a 3+P array.
In a DS6000, a RAID-5 array built on two array sites will contain either seven or eight disks,
again depending on whether the array sites chosen had pre-allocated spares. A seven-disk
array effectively uses one disk for parity, so it is referred to as a 6+P array. Only seven disks
are available to a 6+P array because the eighth disk in the two array sites used to build the
array was already a spare. This is referred to as a 6+P+S array. An eight-disk array also
effectively uses one disk for parity, so it is referred to as a 7+P array.
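The arithmetic behind these array formats can be summarized in a short sketch. The following
Python fragment is illustrative only (the function name and its arguments are not part of any
DS6000 interface); it simply subtracts the pre-allocated spares and the single parity disk from
the DDMs in the chosen array sites:

   def raid5_data_disks(ddms_in_array_sites, pre_allocated_spares):
       # Spares do not belong to the array; of the remaining disks,
       # one disk's worth of capacity is used for parity.
       usable = ddms_in_array_sites - pre_allocated_spares
       return usable - 1

   print(raid5_data_disks(4, 1))   # 2 -> a 2+P array (2+P+S array site)
   print(raid5_data_disks(4, 0))   # 3 -> a 3+P array
   print(raid5_data_disks(8, 1))   # 6 -> a 6+P array (6+P+S)
   print(raid5_data_disks(8, 0))   # 7 -> a 7+P array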
Drive failure
When a disk drive module (DDM) fails in a RAID-5 array, the device adapter starts an
operation to reconstruct the data that was on the failed drive onto one of the spare drives. The
spare that is used is chosen based on a smart algorithm that looks at the location of the
spares and the size and location of the failed DDM. The rebuild is performed by reading the
corresponding data and parity in each stripe from the remaining drives in the array,
performing an exclusive-OR operation to recreate the data, then writing this data to the spare
drive.
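As an illustration of this exclusive-OR step, the following minimal Python sketch rebuilds a
failed drive onto a spare, stripe by stripe. It assumes a simplified in-memory layout (each
drive is a list of equal-sized blocks) and is not the device adapter's actual code:

   from functools import reduce

   def xor_blocks(blocks):
       # Exclusive-OR a set of equal-sized blocks together.
       return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

   def rebuild_failed_drive(drives, failed_index, spare):
       # For each stripe, read the corresponding data and parity blocks from
       # the remaining drives, XOR them to recreate the lost block, and write
       # the result to the spare drive.
       for stripe_no in range(len(drives[0])):
           surviving = [d[stripe_no] for i, d in enumerate(drives)
                        if i != failed_index]
           spare.append(xor_blocks(surviving))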
While this data reconstruction is going on, the device adapter can still service read and write
requests to the array from the hosts. There may be some degradation in performance while
the sparing operation is in progress, because some controller and switched network
resources are being used to do the reconstruction. Due to the switched architecture, this
effect will be minimal. Additionally, any read request for data that was on the failed drive
requires the corresponding data and parity to be read from the other drives in the array so that
the data can be reconstructed; read requests for data on the surviving drives are satisfied by
reading those drives in the normal way.
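The two read paths can be sketched in the same simplified model, reusing the xor_blocks helper
above; again, this is illustrative rather than the actual device adapter logic:

   def read_block(drives, failed_index, drive_index, stripe_no):
       # A read that maps to a surviving drive is serviced in the normal way;
       # a read that maps to the failed drive is reconstructed from the others.
       if drive_index != failed_index:
           return drives[drive_index][stripe_no]
       others = [d[stripe_no] for i, d in enumerate(drives) if i != failed_index]
       return xor_blocks(others)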
Performance of the RAID-5 array returns to normal when the data reconstruction onto the
spare device completes. The time taken for sparing can vary, depending on the size of the
failed DDM and on the workload on the array and the controller.
3.3.2 RAID-10 overview
RAID-10 is not as commonly used as RAID-5, mainly because more raw disk capacity is
needed for every GB of effective capacity.
RAID-10 theory
RAID-10 provides high availability by combining features of RAID-0 and RAID-1. RAID-0
optimizes performance by striping volume data across multiple disk drives at a time. RAID-1
provides disk mirroring, which duplicates data between two disk drives. By combining the
striping of RAID-0 with the mirroring of RAID-1, RAID-10 optimizes both performance and
fault tolerance.
Data is striped across half of the disk drives in the RAID-10 array, and the same data is also
striped across the other half of the array, creating a mirror. Access to data is usually
preserved even if multiple disks fail, as long as both copies of the same data are not lost.
RAID-10 offers faster data reads and writes than RAID-5 because it does not need to manage
parity. However, with half of the DDMs in the group used for data and the other half used to
mirror that data, RAID-10 disk groups have less effective capacity than RAID-5 disk groups.
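As an illustration of this layout, the following sketch maps a logical block to a primary drive
in one half of the array and a mirror drive in the other half. The drive numbering and the
modulo striping are assumptions made for the example, not the DS6000's internal placement
scheme:

   def raid10_placement(logical_block, num_ddms):
       # Stripe (RAID-0) across the first half of the DDMs and mirror (RAID-1)
       # each block onto the corresponding drive in the second half.
       half = num_ddms // 2
       primary = logical_block % half
       mirror = primary + half
       return primary, mirror

   # A 2+2 array built from four DDMs: each block lives on exactly two drives.
   for block in range(4):
       print(block, raid10_placement(block, 4))   # (0, 2), (1, 3), (0, 2), (1, 3)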
RAID-10 implementation in the DS6000
In the DS6000, the RAID-10 implementation uses either one or two array sites (four or eight
DDMs). If an array is created on a single array site and that site includes spares, only two
DDMs will be available for the array; the other two DDMs will both be spares. This makes the
array a 1+1 array, which is effectively just RAID-1. If an array site with no spares is
selected, the array will be a 2+2 array.
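For example, on a four-DDM array site with no spares, a 2+2 RAID-10 array provides two disks'
worth of usable capacity, whereas a 3+P RAID-5 array built on the same site provides three;
this illustrates why RAID-10 needs more raw disk capacity for every GB of effective capacity.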