2.11.12 Cross Cell
Cross Cell Trust should be established with the smallest reasonable set of cooperating partners, since the number of pairwise trust relationships grows roughly as the square of the number of cells (the N-squared problem). Excessive numbers of Cross Cell connections may diminish security and may cause performance problems due to Wide Area Network delays. The communication paths between cooperating cells should be reliable.
Cross Cell Trust must exist to take advantage of the HPSS Federated Name Space facilities.
2.11.13 DFS
DFS performance for HPSS depends on a number of factors: fileset type (mirrored or archived), CPU performance, memory throughput rates, DFS client caching, etc. Mirrored filesets will perform at HPSS rates for name space changes and at normal DFS rates for name space accesses. Archived filesets will typically perform close to DFS rates for both name space changes and accesses. For both mirrored and archived filesets, data accesses and changes will perform at DFS rates when the data is resident, but will be delayed if HPSS must first stage the data onto the Episode disks.
When setting up an aggregate, it is suggested that the fragment size be set to 1024 bytes and the blocksize to 8192 bytes. These are the defaults and have been tested much more thoroughly than other settings. An important factor to consider is that any file smaller than the blocksize currently cannot be purged from the Episode disk, and setting the blocksize larger than 8192 may cause space and resource problems on the disk. (This is a limitation of the Episode implementation of XDSM, on which the HPSS/DFS code is built.)
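The effect of the blocksize on purging can be illustrated with a short sketch. The following Python fragment is purely illustrative; the file names and sizes are hypothetical, and none of the identifiers correspond to actual HPSS or Episode interfaces. It simply shows why, with the suggested 8192-byte blocksize, small files remain resident on the Episode disk even after they have been migrated to HPSS.

    # Illustrative sketch only: which files in a hypothetical fileset could ever
    # be purged from the Episode disk, given the suggested aggregate settings.
    FRAGMENT_SIZE = 1024   # suggested (and default) fragment size, in bytes
    BLOCK_SIZE = 8192      # suggested (and default) blocksize, in bytes

    # Hypothetical file sizes, in bytes.
    file_sizes = {
        "small.cfg": 512,
        "notes.txt": 4096,
        "image.dat": 65536,
        "archive.tar": 10 * 1024 * 1024,
    }

    for name, size in file_sizes.items():
        if size < BLOCK_SIZE:
            # Files smaller than the blocksize cannot be purged from the
            # Episode disk, so they stay resident even after migration.
            print(f"{name}: {size} bytes - stays on Episode disk (below blocksize)")
        else:
            print(f"{name}: {size} bytes - purgeable once migrated to HPSS")

Raising the blocksize above 8192 would raise this threshold as well, which is one reason larger blocksizes can cause space problems on the Episode disk.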
During testing, the biggest gains were realized by altering the DFS client caching. A large memory cache instead of a disk cache may improve performance dramatically if the client machine can spare the memory for client caching buffers. For more information on DFS configuration for AIX, please refer to the DCE document “Distributed File Service Administration Guide and Reference”.
Since HPSS must read DFS anodes to determine which files to migrate or purge, it is suggested that aggregates be kept to a maximum size of 250,000 files and directories. This will allow the migration and purge algorithms to determine which files to process in a reasonable amount of time. With current Episode limitations on the amount of space allowed for anodes per aggregate and the design of the HPSS DFS code, the maximum number of anodes per HPSS-managed aggregate is around 1,000,000 files (2 GB of anode space / 2 KB per migrated file). When planning the system, assume that it may not be possible to expand an aggregate to accommodate more files once this limit is reached; it may be necessary to add a new aggregate. In fact, since migration and purge are aggregate-based, the system may perform better with data distributed among a larger number of well-balanced aggregates than with all data concentrated in a few large ones.
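The anode capacity figure above follows from simple arithmetic, and the same numbers can be used when planning aggregate sizes. The sketch below assumes only the values quoted in this section (roughly 2 GB of anode space per aggregate, roughly 2 KB of anode space per migrated file, and the suggested 250,000-object working size); the function name is hypothetical, and this is not an HPSS tool.

    # Planning sketch based only on the figures quoted in this section.
    ANODE_SPACE_PER_AGGREGATE = 2 * 1024**3    # ~2 GB of anode space per aggregate
    ANODE_SPACE_PER_MIGRATED_FILE = 2 * 1024   # ~2 KB of anode space per migrated file
    SUGGESTED_MAX_OBJECTS = 250000             # suggested limit on files plus directories

    # Ceiling implied by the anode space limit: about 1,000,000 files.
    ANODE_CEILING = ANODE_SPACE_PER_AGGREGATE // ANODE_SPACE_PER_MIGRATED_FILE

    def check_aggregate(planned_objects):
        """Compare a planned aggregate against the limits quoted above."""
        if planned_objects > ANODE_CEILING:
            return "exceeds the ~1,000,000 anode ceiling; plan additional aggregates"
        if planned_objects > SUGGESTED_MAX_OBJECTS:
            return "exceeds the suggested 250,000 objects; migration/purge scans may slow down"
        return "within the suggested limits"

    print(ANODE_CEILING)               # 1048576, i.e. roughly 1,000,000
    print(check_aggregate(400000))     # exceeds the suggested 250,000 objects...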
2.11.14 XFS
XFS performance for HPSS depends on a number of factors: CPU performance, available memory, disk speeds, etc. XFS archived filesets will typically perform close to native XFS rates for both namespace and data activity; however, accessing or modifying data for files that have been migrated to HPSS and purged from XFS will be delayed while the file’s data is staged.
The XFS HDM keeps an internal record of migration and purge candidates and will therefore be capable of quickly completing migration and purge runs that would otherwise take a good deal of time.
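The value of this candidate record can be sketched conceptually. The Python fragment below does not represent the actual HDM data structures or interfaces; all names are hypothetical. It only illustrates how recording candidates as files change lets a run consult a small record instead of examining the entire namespace.

    # Conceptual sketch only; none of this represents the real XFS HDM code.
    migration_candidates = set()
    purge_candidates = set()

    def note_file_modified(path):
        """Hypothetical hook: a modified file becomes a migration candidate."""
        migration_candidates.add(path)

    def note_file_migrated(path):
        """Hypothetical hook: a migrated file becomes a purge candidate."""
        purge_candidates.add(path)

    # As files change, the records are kept current...
    note_file_modified("/xfs/projects/results.dat")
    note_file_modified("/xfs/projects/notes.txt")
    note_file_migrated("/xfs/archive/old-run.tar")

    # ...so a run touches only the recorded candidates, not every file on disk.
    def migration_run():
        work = sorted(migration_candidates)
        migration_candidates.clear()
        return work

    print(migration_run())   # ['/xfs/projects/notes.txt', '/xfs/projects/results.dat']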