of sites. Usually, the only time the policy values need to be altered is when there is an unusual
HPSS setup.
The Location Server itself will warn of problems by posting alarms to SSM. Information about the
Location Server alarms is given in the HPSS Error Manual. To get a better view of an alarm in its
context, view the Location Server's statistics screen.
If the Location Server consistently reports a heavy load condition, increase the number of request
threads and recycle the Location Server. Remember to increase the number of threads on the
Location Server's basic server configuration screen as well. If this does not help, consider replicating
the Location Server on a different machine. Note that a heavy load on the Location Server should
be a very rare occurrence.
2.11.10 Logging
Excessive logging by the HPSS servers can degrade the overall performance of HPSS. If this is the
case, it may be desirable to limit the message types being logged by particular servers. The
Logging Policy can be updated to control which message types are logged, and a default Log
Policy may be specified to define which messages are logged. Typically, Trace, Security,
Accounting, Debug, and Status messages are not logged; other message types can also be disabled.
Once the Logging Policy is updated for one or more HPSS servers, the Log Clients associated with
those servers must be reinitialized.
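The sketch below is purely illustrative; the message type names and the bitmask idiom are
hypothetical and are not part of any HPSS interface. It only shows the effect of a policy in which
the message types listed above are disabled while Alarm and Event messages remain enabled.

    /* Illustrative sketch only: these type names and the bitmask are
     * hypothetical, not an HPSS API; the real policy is edited through
     * the Logging Policy configuration. */
    #include <stdio.h>

    enum log_type {
        LOG_ALARM      = 1 << 0,
        LOG_EVENT      = 1 << 1,
        LOG_SECURITY   = 1 << 2,
        LOG_ACCOUNTING = 1 << 3,
        LOG_DEBUG      = 1 << 4,
        LOG_TRACE      = 1 << 5,
        LOG_STATUS     = 1 << 6
    };

    int main(void)
    {
        /* Typical policy: Trace, Security, Accounting, Debug, and Status
         * are disabled; Alarm and Event messages remain enabled. */
        unsigned policy = LOG_ALARM | LOG_EVENT;

        if (policy & LOG_DEBUG)
            printf("Debug messages would be logged\n");
        else
            printf("Debug messages are suppressed\n");
        return 0;
    }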
2.11.11 MPI-IO API
MPI-IO client applications must be aware of the HPSS Client API's performance characteristics for
certain kinds of data transfers (see Section 2.11.7); in short, HPSS is optimized for transferring
large, contiguous blocks of data.
The MPI-IO interface allows many kinds of transfers that will not perform well over the HPSS file
system. In particular, MPI-IO's file types allow scatter-gather operations to be specified on files.
When these discontiguous accesses are performed as noncollective reads and writes, they result in
suboptimal performance in the best case and, if excessive file fragmentation results, in failure to
complete the access in the worst case. Collective I/O operations may be able to minimize or
eliminate the performance problems of equivalent noncollective I/O operations by coalescing
discontiguous accesses into a contiguous one, as illustrated in the sketch below.
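The following program is a minimal sketch of this pattern; the file name, block size, and layout
are illustrative assumptions, not taken from this guide. Each process owns a block-cyclic, and
therefore discontiguous, portion of the file and writes it with the collective MPI_File_write_all,
allowing the MPI-IO layer to coalesce the interleaved blocks, whereas the independent
MPI_File_write would issue many small, discontiguous accesses.

    #include <mpi.h>

    #define BLOCK 1024   /* doubles per block (illustrative) */

    int main(int argc, char **argv)
    {
        int rank, nprocs, i;
        double buf[BLOCK * 4];           /* four local blocks per rank */
        MPI_File fh;
        MPI_Datatype filetype;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        for (i = 0; i < BLOCK * 4; i++)
            buf[i] = (double)rank;

        MPI_File_open(MPI_COMM_WORLD, "/hpss/example.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

        /* File type: four blocks of BLOCK doubles, separated by the
         * blocks belonging to the other ranks (stride of nprocs blocks). */
        MPI_Type_vector(4, BLOCK, BLOCK * nprocs, MPI_DOUBLE, &filetype);
        MPI_Type_commit(&filetype);
        MPI_File_set_view(fh, (MPI_Offset)rank * BLOCK * sizeof(double),
                          MPI_DOUBLE, filetype, "native", MPI_INFO_NULL);

        /* Collective write: the MPI-IO layer may coalesce the ranks'
         * interleaved blocks into large contiguous transfers. */
        MPI_File_write_all(fh, buf, BLOCK * 4, MPI_DOUBLE, MPI_STATUS_IGNORE);

        MPI_Type_free(&filetype);
        MPI_File_close(&fh);
        MPI_Finalize();
        return 0;
    }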
An MPI-IO application should make use of HPSS environment variables (see Section 7.1: Client API
Configuration on page 413) and file hints at open time to secure the best match of HPSS resources to
a given task, as described in the HPSS Programmer's Reference, Volume 1.
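The sketch below shows one way hints may be attached at open time. The hint key used
("cb_buffer_size") is a generic MPI-IO reserved hint chosen only for illustration; the HPSS-specific
hint keys and their meanings are those documented in the HPSS Programmer's Reference, Volume 1.

    #include <mpi.h>

    /* Open a file with MPI-IO hints attached (sketch; path is supplied
     * by the caller). */
    static int open_with_hints(char *path, MPI_File *fh)
    {
        MPI_Info info;
        int rc;

        MPI_Info_create(&info);
        /* 16 MB collective buffer; a generic reserved hint shown only
         * for illustration. */
        MPI_Info_set(info, "cb_buffer_size", "16777216");

        rc = MPI_File_open(MPI_COMM_WORLD, path,
                           MPI_MODE_CREATE | MPI_MODE_RDWR, info, fh);
        MPI_Info_free(&info);
        return rc;
    }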
HPSS MPI-IO includes an automatic caching facility for files that are opened with
MPI_MODE_UNIQUE_OPEN when each participating client node has a unique view of the
opened file. When caching is enabled, performance for small data accesses can be significantly
improved, provided the application makes reasonable use of locality of reference. MPI-IO file
caching is described in the HPSS Programmer's Reference, Volume 1.
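A minimal sketch of such an open call follows, assuming the application can guarantee that the file
will not be opened concurrently by any other process (the path is hypothetical):

    #include <mpi.h>

    /* Open a file with MPI_MODE_UNIQUE_OPEN, asserting that it will not
     * be opened concurrently elsewhere, so that HPSS MPI-IO caching can
     * be applied (sketch). */
    static int open_cached(char *path, MPI_File *fh)
    {
        return MPI_File_open(MPI_COMM_WORLD, path,
                             MPI_MODE_RDWR | MPI_MODE_CREATE |
                             MPI_MODE_UNIQUE_OPEN,
                             MPI_INFO_NULL, fh);
    }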