GDPS/PPRC HyperSwap
The GDPS/PPRC HyperSwap function is designed to broaden the continuous availability attributes of GDPS/PPRC by extending the Parallel Sysplex redundancy to disk subsystems. HyperSwap is designed to mask planned and unplanned disk and site reconfigurations by transparently switching to the secondary PPRC volumes. The HyperSwap function is designed to be controlled entirely by automation, allowing all aspects of the site switch to be controlled via GDPS. Large configurations can be supported, as HyperSwap has been designed to provide the capacity and capability to swap large numbers of disk devices very quickly. This function also provides the important ability to re-synchronize incremental disk data changes, in both directions, between the primary and secondary PPRC disks.
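The core idea can be sketched in a few lines. The following is a purely conceptual sketch in Python; the class, function, and device names (PprcPair, hyperswap, SITE1.D100, and so on) are hypothetical, since real HyperSwap is driven by GDPS automation and z/OS rather than by application code. It only illustrates repointing I/O from the primary to the secondary PPRC volume and recording the reversed mirroring direction so that later resynchronization can be incremental.

# Conceptual sketch only: all names here are hypothetical. Real HyperSwap
# is performed by GDPS automation inside z/OS, not by application code.

from dataclasses import dataclass

@dataclass
class PprcPair:
    active: str        # volume the applications currently use (PPRC primary)
    standby: str       # its PPRC secondary in the other site
    direction: str = "site1->site2"

def hyperswap(pairs: list[PprcPair]) -> None:
    """Swap each pair so applications use the former secondaries.

    Applications keep running; only the device their I/O is routed to
    changes. The recorded direction is reversed so that only incremental
    changes need to be copied when the mirror is re-established later.
    """
    for pair in pairs:
        pair.active, pair.standby = pair.standby, pair.active
        pair.direction = ("site2->site1" if pair.direction == "site1->site2"
                          else "site1->site2")

# The same swap serves a planned maintenance window and an unplanned
# primary disk subsystem failure; only the trigger differs.
config = [PprcPair("SITE1.D100", "SITE2.D200"),
          PprcPair("SITE1.D101", "SITE2.D201")]
hyperswap(config)                    # e.g. planned disk maintenance
print([p.active for p in config])    # -> ['SITE2.D200', 'SITE2.D201']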
The planned HyperSwap function provides the ability to transparently switch all primary PPRC disk subsystems with the secondary PPRC disk subsystems for a planned reconfiguration. It enables disk configuration maintenance and planned site maintenance without requiring any applications to be quiesced. The unplanned HyperSwap function contains additional function designed to transparently switch to the secondary PPRC disk subsystems in the event of an unplanned outage of the primary PPRC disk subsystems or a failure of the site containing them. With the unplanned HyperSwap function, disk subsystem failures no longer constitute a single point of failure for an entire sysplex. If applications are cloned and exploit data sharing across the two sites, the GDPS/PPRC unplanned HyperSwap capability lays the foundation for continuous availability, even in the event of a complete site failure. In the event of a complete failure of the site where the primary disks reside, the systems in the site with the secondary disks can remain active, even though the workload running on these systems needs to be restarted. An improvement in the Recovery Time Objective (RTO) can therefore be achieved.
With the release of GDPS/PPRC V3.2, the HyperSwap function was enhanced to exploit the PPRC Failover/Failback function. For planned reconfigurations, PPRC Failover/Failback can help reduce the overall elapsed time to switch the disk subsystems, which in turn can reduce the time that applications may be unavailable to users. For unplanned reconfigurations, PPRC Failover/Failback allows the secondary disks to be configured in the suspended state after the switch, thus eliminating the need to perform a full copy of the data when reestablishing the PPRC mirror in the reverse direction. The window during which critical data is left without PPRC protection following an unplanned reconfiguration is thereby minimized.
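To make that benefit concrete, here is a small hypothetical sketch (the class and method names are invented and do not correspond to actual PPRC commands or microcode interfaces). It illustrates why a suspended secondary with change recording avoids a full copy: only the tracks updated while the mirror was suspended need to be copied when the mirror is re-established in the reverse direction.

class SuspendedVolume:
    """Hypothetical model of a volume whose PPRC mirror is suspended."""

    def __init__(self, total_tracks: int):
        self.total_tracks = total_tracks
        self.changed_tracks: set[int] = set()   # change-recording bitmap

    def write(self, track: int) -> None:
        # Application writes continue while the mirror is suspended;
        # each updated track is remembered for later resynchronization.
        self.changed_tracks.add(track)

    def failback_resync(self) -> int:
        # Re-establish the mirror in the reverse direction: copy only the
        # recorded changes instead of performing a full initial copy.
        copied = len(self.changed_tracks)
        self.changed_tracks.clear()
        return copied

vol = SuspendedVolume(total_tracks=50_000)
for track in (10, 11, 4_242):
    vol.write(track)
print(vol.failback_resync(), "tracks copied, not", vol.total_tracks)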
GDPS/PPRC management for open systems LUNs (Logical Unit Numbers): GDPS/PPRC technology has been extended to manage a heterogeneous environment of z/OS and open systems data. If installations share their disk subsystems between the z/OS and open systems platforms, GDPS/PPRC, running in a z/OS system, can manage the PPRC status of devices that belong to the other platforms and are not even defined to the z/OS platform. GDPS/PPRC can also provide data consistency across both z/OS and open systems data.
GDPS supports PPRC over Fibre Channel links: GDPS/PPRC supports Enterprise Storage Server (ESS) PPRC over Fibre Channel Protocol (FCP). It is expected that the distance between sites can be increased while maintaining acceptable application performance, since PPRC over FCP requires only one protocol exchange compared to two or three exchanges when using PPRC over ESCON. The efficiency of the FCP protocol is also expected to help lower the total cost of ownership, since two PPRC FCP links are considered sufficient for most workloads, which can allow a reduction in cross-site connectivity.
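As a rough, back-of-the-envelope illustration of why fewer protocol exchanges matter at distance (the numbers below are assumptions, not figures from this document): if light in fiber travels at roughly 200,000 km/s, each kilometer of separation adds about 0.01 ms of round-trip delay, and each protocol exchange is assumed to cost one full round trip.

ROUND_TRIP_MS_PER_KM = 0.01   # assumed: ~200,000 km/s in fiber, both ways

def added_write_latency_ms(distance_km: float, exchanges: int) -> float:
    """Propagation delay a synchronous PPRC write pays on the link."""
    return distance_km * ROUND_TRIP_MS_PER_KM * exchanges

for km in (10, 50, 100):
    fcp = added_write_latency_ms(km, exchanges=1)     # PPRC over FCP
    escon = added_write_latency_ms(km, exchanges=3)   # PPRC over ESCON, worst case
    print(f"{km:>3} km: ~{fcp:.1f} ms over FCP vs ~{escon:.1f} ms over ESCON")

On these assumptions, a single FCP exchange at 100 km adds about the same propagation delay as three ESCON exchanges would at roughly 33 km, which is the intuition behind the expectation that site separation can grow while response time stays acceptable.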