for most HPSS installations. A value of 16000 or 32000 is more reasonable and can be set by
adding -b 16000 to the arguments in /etc/rc.encina. Be sure to update the /etc/security/limits file
(and reboot) to allow the system to handle the larger process size that both this change and the
increase in SFS threads will create. In the default stanza, the value for data should be increased to
data = 524288 and the value for rss should be increased to rss = 262144. These are the values
recommended by Transarc.
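As an illustration, the default stanza of /etc/security/limits would contain the following entries
after the change. This is a minimal sketch assuming the standard AIX stanza format; any other
attributes already present in the stanza should be left in place. Note that AIX interprets these
values in 512-byte blocks, so data = 524288 corresponds to a 256 MB data segment.

    default:
            data = 524288
            rss = 262144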
2.11.3 Workstation Configurations
SMP Node/Machine. SFS can easily become CPU bound in highly utilized HPSS systems. HPSS
transaction rates can be limited by the processing capacity of the CPU/node where the SFS
server resides. It is recommended that the SFS server be placed on a machine with multiple
processors rather than on a high-end single-processor machine.
HPSS/SFS Server Proximity. The placement of HPSS servers relative to their corresponding SFS
server can affect the performance of the HPSS system. Some HPSS servers that are SFS-intensive
(such as the Name Server) are best served by a local SFS. The following HPSS servers should be
co-located with the SFS server on the same machine/node:
Name Server
BFS Server
Disk & Tape Storage Server
Other HPSS servers are not as metadata intensive and can be distributed in the configuration
with less concern.
Storage Media. Allocating storage for the log and data volume(s) should be carefully planned. The
log and data volumes need to be on separate sets of disks, not only for data integrity in case of a
media failure but also for performance. In addition, the SFS log archive files should be backed up
on physical media separate from both the log and data volume storage. For sites anticipating a
high number of transactions, the metadata files should be distributed across multiple data
volumes backed by separate physical media and I/O adapters to prevent interference. Ideally, the
SFS storage configuration should distribute disk activity across as many I/O adapters and hard
drives as possible.
When determining which configuration is best, keep in mind that some devices may perform
better in a mirrored configuration than when set up as RAID. Performance tests that measure the
data and transaction rates of a given set of disks and I/O adapters should be run before
permanently configuring the resources for SFS.
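As a rough illustration, a sequential write test of a candidate data volume can be run with dd
before committing the disks to SFS. The mount point /sfs_data1 below is hypothetical; a complete
evaluation should also measure read rates and concurrent access across the I/O adapters involved.

    # Hypothetical mount point: write 1 GB of zeros to gauge the
    # sequential write rate of the disks behind /sfs_data1.
    time dd if=/dev/zero of=/sfs_data1/ddtest bs=1024k count=1024
    rm /sfs_data1/ddtest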
2.11.4 Bypassing Potential Bottlenecks
HPSS performance is influenced by many factors, such as device and network speeds,
configuration and power of HPSS server machines, Encina SFS server configuration, storage
resource configuration, and client data access behavior.
HPSS provides mechanisms to bypass potential bottlenecks in the performance of data transfers,
given that the system configuration provides the additional resources necessary. For example, if the
performance of a single disk device is the limiting factor in a transfer between HPSS and a client