The HP Virtual Array accepts new disks while the array is up and running and accepting I/Os, just as some higher-end traditional arrays do. However, the HP Virtual Array takes this one step further. Once a disk is inserted, the array automatically incorporates it into the existing disk space and stripes all LUNs across it. This means that even without creating any additional LUNs, array performance improves because of the additional available spindle. Only the HP Virtual Array automatically adds new disks to existing LUNs. Further, any newly created LUNs are automatically spread across all the disks in the array, including the new one.
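To picture the effect, here is a minimal sketch of the round-robin striping idea in Python. The mapping, chunk granularity, and names are illustrative assumptions; the Virtual Array’s actual internal layout is not exposed.

    # Hypothetical round-robin stripe map; not HP's internal algorithm.
    def stripe_map(num_blocks, num_disks):
        """Return {block: disk} for a simple round-robin layout."""
        return {block: block % num_disks for block in range(num_blocks)}

    before = stripe_map(num_blocks=12, num_disks=3)  # LUN striped over 3 disks
    after = stripe_map(num_blocks=12, num_disks=4)   # same LUN once a 4th disk joins

    # Every disk now carries a share of the LUN, so the extra
    # spindle serves existing I/O immediately.
    print(sorted(set(before.values())))  # [0, 1, 2]
    print(sorted(set(after.values())))   # [0, 1, 2, 3]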
time to implementation: formatting the array
As mentioned earlier, after new disks are added
to a traditional array, it then takes several hours
to complete the formatting of the RAID group.
During this format phase, no data can be written
to the new LUNs. With some implementations,
the array is offline until all the LUNs have been
formatted. In other implementations, I/Os can be written to already formatted LUNs while other LUNs are still being formatted, although performance suffers significantly.
Because executing the disk format command
uses up so much of the array’s internal bandwidth,
array performance is greatly reduced until all
of the disk formatting has been completed.
With HP’s Virtual Array Technology, the array is available immediately, as soon as the LUNs have been configured. Formatting is performed as writes occur: when a write is sent to disk, only the blocks being written are formatted. While this adds a small cost to that individual write, the impact on overall array performance is minimal.
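A minimal sketch of this format-on-write behavior, assuming a simple block interface (the class, constant, and method names below are illustrative, not HP’s implementation):

    # Blocks are initialized lazily, on first write, instead of
    # formatting the whole LUN up front.
    BLOCK_SIZE = 512

    class LazyFormattedLun:
        def __init__(self, num_blocks):
            self.num_blocks = num_blocks
            self.formatted = set()   # blocks already initialized
            self.data = {}

        def write(self, block, payload):
            if block not in self.formatted:
                # One-time cost paid by this write alone; the rest
                # of the LUN stays unformatted and costs nothing.
                self._format_block(block)
            self.data[block] = payload

        def _format_block(self, block):
            self.data[block] = b"\x00" * BLOCK_SIZE
            self.formatted.add(block)

    lun = LazyFormattedLun(num_blocks=1_000_000)  # usable immediately
    lun.write(42, b"hello")                       # formats only block 42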
automating the cache parameters
Configuring a traditional array typically requires
setting the cache parameters such as the percentage
of read and write cache, the size of the cache
pages, and, in some cases, the allocation of cache
to specific LUNs. In making these determinations,
there is ample opportunity for error.
With HP’s Virtual Arrays, all of this is preset and automatic, which means that every parameter within the array is tuned to work in unison with the stripe size and the array hardware. First, the
cache is set at 80% read and 20% write, is
shared between controllers, and is treated as a
“pool.” Second, the cache page size is set at 64K, and pages are automatically destaged to disk every 4 seconds whether full or not.
The 64K size minimizes the number of I/Os to
the back-end in sequential environments and
provides a carefully calculated balance within the
array between the number of cache pages and the
speed of the back-end in random environments.
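This preset policy can be modeled in a few lines of Python. The toy write-back cache below mirrors the values described above (64K pages, a 4-second destage timer, an 80/20 read/write split); the structure and names are illustrative assumptions, not the array’s firmware.

    import threading

    PAGE_SIZE = 64 * 1024      # 64K cache pages
    DESTAGE_INTERVAL = 4.0     # destage every 4 seconds, full page or not
    READ_SHARE, WRITE_SHARE = 0.80, 0.20   # preset split (documentation only)

    class WriteCache:
        """Toy write-back cache that destages dirty pages on a timer."""

        def __init__(self, destage_fn):
            self.destage_fn = destage_fn   # callback that writes to disk
            self.dirty = {}                # page_id -> bytearray
            self.lock = threading.Lock()
            self._schedule()

        def _schedule(self):
            timer = threading.Timer(DESTAGE_INTERVAL, self._destage_all)
            timer.daemon = True
            timer.start()

        def _destage_all(self):
            with self.lock:
                pages, self.dirty = self.dirty, {}
            for page_id, page in pages.items():
                self.destage_fn(page_id, page)   # partial pages go too
            self._schedule()

        def write(self, page_id, offset, data):
            with self.lock:
                page = self.dirty.setdefault(page_id, bytearray(PAGE_SIZE))
                page[offset:offset + len(data)] = data

    cache = WriteCache(destage_fn=lambda pid, page: None)  # stub disk writer
    cache.write(page_id=7, offset=0, data=b"journal entry")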
performance
Traditional arrays are susceptible to “hot spots” and
to changes in the environment that make the initial
configuration obsolete. The HP Virtual Array virtually
eliminates these critical performance issues.
First, the HP Virtual Array is far less likely to experience a hot spot; in other words, it will almost never encounter a condition in which a few disk drives become a performance bottleneck in the array. Here’s why: the virtual array
always (and automatically) stripes all of the
LUNs across all of the disks in the RAID group.
For example: assume a virtual array loaded with a total of 60 disks, with 30 disks in each of its two RAID/redundancy groups. Every LUN in each group would be spread across all 30 disks.
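A quick sanity check of this example, reusing the same illustrative round-robin mapping as the earlier sketch (not HP’s internal layout):

    DISKS_PER_GROUP = 30

    def disks_used_by_lun(num_chunks, group_size=DISKS_PER_GROUP):
        """Set of disks holding at least one stripe chunk of the LUN."""
        return {chunk % group_size for chunk in range(num_chunks)}

    # Any LUN with at least 30 stripe chunks lands on every disk
    # in its redundancy group.
    assert disks_used_by_lun(num_chunks=4096) == set(range(DISKS_PER_GROUP))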