Chapter 2 HPSS Planning
126 September 2002 HPSS Installation Guide
Release 4.5, Revision 2
SFS Sizing Assumptions. The only SFS-specific assumption that should be reviewed is the Leaf
Page Load Factor field. This field defines, on average, how “full” each SFS leaf page will be. A value
of 50% is overly conservative, since SFS will likely fill each leaf page more than that. A value over
80% would be too optimistic and would likely cause insufficient disk space to be allocated for
metadata. A value between 60% and 75% is a good compromise.
2.10.2.21.2 Sizing Computations
The second worksheet in the metadata sizing spreadsheet calculates the projected total number of
records for each metadata file and the corresponding amount of required disk space, based on the
assumptions previously entered. The spreadsheet allows the record count for any given metadata
file to be manually overridden, if desired, and allows the required disk space to be allocated to one
of 15 SFS data volumes, so that individual SFS data volumes can be properly sized. For systems
whose metadata will easily exceed several GB, it is recommended that each site consider moving
files associated with subsystems to separate SFS servers. The data volume selection is not
necessarily restricted to a single SFS instance; it is a general way to distinguish one data volume
from another, whether or not they belong to the same SFS server.
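The worksheet's per-volume rollup amounts to summing each metadata file's projected size into the data volume it is assigned to. The sketch below illustrates that bookkeeping; the file names, sizes, and volume assignments are hypothetical examples, not values from the spreadsheet.

```python
from collections import defaultdict

def size_volumes(files):
    """Total the projected bytes assigned to each SFS data volume.

    files: iterable of (metadata_file_name, projected_bytes, volume_id).
    Returns a dict mapping volume_id -> total bytes to allocate.
    """
    totals = defaultdict(int)
    for _name, size, volume in files:
        totals[volume] += size
    return dict(totals)

# Hypothetical allocation plan: two files on volume 1, one on volume 2.
plan = [
    ("bitfile-metadata", 512_000_000, 1),
    ("disk-map",         128_000_000, 1),
    ("tape-segment",      96_000_000, 2),
]
print(size_volumes(plan))  # → {1: 640000000, 2: 96000000}
```

Because a volume identifier is just a label in this scheme, the same rollup works whether the volumes live on one SFS server or are spread across several.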
The various columns in this worksheet are described below.
Subsystem/Metadata File—This column lists all the HPSS metadata files, each preceded by the
acronym of the HPSS server associated with it or of the primary user of the file. Because of the
performance gains provided by SFS and DCE when the SFS server and client are on the same
machine, it is generally advisable to place all metadata files primarily used by a given HPSS server
on an SFS server running on that same machine.
Table 2-7 HPSS Static Configuration Values (Continued)

Variable: Description

Total Log Clients: The total number of Log Clients that will be used, which will be equal to the
total number of nodes running any type of HPSS server.

Total Metadata Monitor Servers: The total number of Metadata Monitor servers, which should
equal the total number of Encina SFS servers used by the HPSS system.

Total Movers: The total number of Movers that will be created.

Total NFS Mount Daemons: The total number of NFS Mount Daemons that will be configured.

Total NFS Servers: The total number of NFS servers that will be configured.

Total PVR Servers: The total number of PVR servers that will be used.

Total Storage Servers: The total number of Storage Servers. Normally, sites will have at least one
disk Storage Server and at least one tape Storage Server.

Total Migration/Purge Servers: The total number of Migration/Purge Servers. Normally, sites
will have one MPS server per subsystem.

Total Gatekeeper Servers: The total number of Gatekeeper Servers that will be used. Sites may
have 0 or more Gatekeeper Servers.