Chapter 1 HPSS Basics
HPSS Installation Guide September 2002
Release 4.5, Revision 2
Storage Subsystems will effectively be running an HPSS with a single Storage Subsystem. Note that
sites are not required to use multiple Storage Subsystems.
Since the migration/purge server is contained within the storage subsystem, migration and purge
operate independently in each storage subsystem. If multiple storage subsystems exist within an
HPSS, then several migration/purge servers operate on each storage class, one per subsystem.
Each migration/purge server is responsible for migration and purge of those storage class
resources contained within its particular storage subsystem. Migration and purge runs are
independent and unsynchronized. The same principle holds for other operations, such as repack
and reclaim.
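The independence described above can be sketched as follows. This is a toy illustration, not an actual HPSS interface: the StorageSubsystem structure, resource lists, and run_migration function are all hypothetical stand-ins for the real migration/purge server behavior.

```python
# Hypothetical sketch: names and structures are illustrative, not HPSS APIs.
from dataclasses import dataclass, field

@dataclass
class StorageSubsystem:
    name: str
    # storage class name -> list of resources owned by this subsystem
    resources: dict = field(default_factory=dict)

def run_migration(subsystem: StorageSubsystem, storage_class: str):
    """Each subsystem's migration/purge server touches only the storage
    class resources contained within its own subsystem."""
    migrated = []
    for resource in subsystem.resources.get(storage_class, []):
        migrated.append(resource)  # stand-in for migrating to the next level
    return migrated

# Two subsystems run migration on the same storage class independently;
# neither run sees or waits on the other's resources.
a = StorageSubsystem("SubsysA", {"sc1": ["volA1", "volA2"]})
b = StorageSubsystem("SubsysB", {"sc1": ["volB1"]})
print(run_migration(a, "sc1"))  # ['volA1', 'volA2']
print(run_migration(b, "sc1"))  # ['volB1']
```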
Migration and purge for a storage class may be configured differently for each storage subsystem.
It is possible to set up a single migration or purge policy which applies to a storage class across all
storage subsystems (to make configuration easier), but it is also possible to control migration and
purge differently in each storage subsystem.
Storage class thresholds may be configured differently for each storage subsystem. It is possible to
set up a single set of thresholds which apply to a storage class across all storage subsystems, but it
is also possible to control the thresholds differently for each storage subsystem.
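The two configuration styles above (a single storage-class-wide setting versus per-subsystem control) amount to an override-with-fallback lookup. The sketch below is a hedged illustration of that idea only; the dictionaries and threshold values stand in for HPSS metadata and are not a real API.

```python
# Hypothetical sketch of per-subsystem threshold resolution.
DEFAULT_THRESHOLDS = {
    # storage class -> (purge start %, purge stop %), applying to all subsystems
    "sc_disk": (90, 70),
}

SUBSYSTEM_OVERRIDES = {
    # (subsystem, storage class) -> thresholds controlled differently here
    ("SubsysB", "sc_disk"): (80, 60),
}

def thresholds_for(subsystem: str, storage_class: str):
    """Use the subsystem-specific thresholds when configured, otherwise
    fall back to the single set defined for the storage class."""
    return SUBSYSTEM_OVERRIDES.get(
        (subsystem, storage_class), DEFAULT_THRESHOLDS[storage_class])

print(thresholds_for("SubsysA", "sc_disk"))  # (90, 70) -- class-wide default
print(thresholds_for("SubsysB", "sc_disk"))  # (80, 60) -- subsystem override
```

The same lookup shape would apply to migration and purge policies: configure one policy for the storage class, and override it only in the subsystems that need different behavior.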
1.3.4 HPSS Infrastructure
The HPSS infrastructure items (see Figure 1-3) are those components that “glue together” the
distributed servers. While each HPSS server component provides some explicit functionality, they
must all work together to provide users with a stable, reliable, and portable storage system. The
infrastructure components that tie the servers together are discussed below.
Distributed Computing Environment (DCE). HPSS uses the Open Software Foundation's
Distributed Computing Environment (OSF DCE) as the basic infrastructure for its
architecture and high-performance storage system control. DCE was selected because of its
wide adoption among vendors and its near industry-standard status. HPSS uses the DCE
Remote Procedure Call (RPC) mechanism for control messages and the DCE Threads
package for multitasking. The DCE Threads package is vital for HPSS to serve large
numbers of concurrent users and to enable multiprocessing of its servers. HPSS also uses
DCE Security as well as Cell and Global Directory services.
Most HPSS servers, with the exception of the MVR, PFTPD, and logging services (see
below), communicate requests and status (control information) via RPCs. HPSS does not use
RPCs to move user data. RPCs provide a communication interface resembling simple, local
procedure calls.
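The idea that an RPC "resembles a simple, local procedure call" can be sketched with a client-side proxy that marshals a control message behind an ordinary method call. This is a minimal illustration only: the transport below is a plain function standing in for DCE RPC, and the server, method name, and reply fields are all hypothetical.

```python
# Toy sketch of the RPC pattern; not the DCE RPC mechanism itself.
import json

class RpcProxy:
    """Turns attribute access into a marshalled control message."""
    def __init__(self, transport):
        self._transport = transport

    def __getattr__(self, method):
        def call(**kwargs):
            # Marshal the request; only control information crosses here,
            # never user data (HPSS moves user data outside of RPC).
            request = json.dumps({"method": method, "args": kwargs})
            reply = self._transport(request)
            return json.loads(reply)
        return call

def fake_server(request: str) -> str:
    """Stand-in for a remote server dispatching a control request."""
    msg = json.loads(request)
    if msg["method"] == "create_bitfile":
        return json.dumps({"status": "ok", "bitfile_id": 42})
    return json.dumps({"status": "unknown_method"})

bitfile_server = RpcProxy(fake_server)
# Reads like a local call, but the request was marshalled and dispatched:
print(bitfile_server.create_bitfile(name="/home/user/data"))
# {'status': 'ok', 'bitfile_id': 42}
```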
Transaction Management. Requests to perform actions, such as creating bitfiles or
accessing file data, result in client-server interactions between software components. The
problem with distributed servers working together on a common job is that one server may
fail or not be able to do its part. When such an event occurs, it is often necessary to abort
the job by backing off all actions made by all servers on behalf of the job.
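The "back off all actions" behavior described above is the abort half of begin-commit-abort semantics. The toy coordinator below sketches that idea under stated assumptions: it is not Encina, and the Transaction class and its methods are invented for illustration.

```python
# Hedged sketch of begin-commit-abort semantics; a toy coordinator only.
class Transaction:
    def __init__(self):
        self._undo = []          # compensating actions, recorded as we go
        self.state = "active"

    def do(self, action, undo):
        """Perform one server's part of the job, remembering how to back it off."""
        result = action()
        self._undo.append(undo)
        return result

    def commit(self):
        self.state = "committed"
        self._undo.clear()

    def abort(self):
        """Back off every action made on behalf of the job, in reverse order."""
        for undo in reversed(self._undo):
            undo()
        self._undo.clear()
        self.state = "aborted"

# One server does its part, the next fails, so the whole job is backed off:
created = []
txn = Transaction()
txn.do(lambda: created.append("bitfile"),
       lambda: created.remove("bitfile"))

def failing_action():
    raise IOError("second server could not do its part")

try:
    txn.do(failing_action, lambda: None)
except IOError:
    txn.abort()

print(created, txn.state)  # [] aborted
```

A real transaction manager adds what this sketch lacks: durable logging, distributed two-phase commit across servers, and nested transactions, which is what Encina provides to HPSS.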
Transactional integrity is required in HPSS to guarantee the consistency of server state and
metadata should a particular component fail. As a result, a product named Encina, from
Transarc Corporation, was selected to serve as the HPSS transaction manager. This
selection was based on functionality and vendor platform support. Encina provides begin-
commit-abort semantics, distributed two-phase commit, and nested transactions. It