The automatic hardware prefetcher may consume extra system bandwidth if a
significant portion of the application's memory traffic has cache-miss
strides greater than the trigger distance threshold of hardware prefetch
(large-stride memory traffic).
Its effectiveness with existing applications depends on the proportion of
small-stride versus large-stride accesses in the application's memory
traffic. Applications whose memory traffic is predominantly small-stride
with good temporal locality benefit greatly from the automatic hardware
prefetcher.
Memory traffic consisting predominantly of large-stride cache misses can
sometimes be transformed by re-arranging the data access sequences (e.g.,
tiling, or packing a sparsely-populated, multi-dimensional band array into
a one-dimensional array) to increase the concentration of small-stride
cache misses at the expense of large-stride cache misses, thereby taking
advantage of the automatic hardware prefetcher; a sketch of such a packing
transformation follows.
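The following C sketch illustrates the packing idea under stated
assumptions; the names (N, BAND, src, packed) and the band layout are
illustrative and not taken from this manual. The 2-D traversal jumps by
roughly a full row of the matrix per iteration (a large stride), while the
packed 1-D traversal is unit-stride and therefore within reach of the
automatic hardware prefetcher.

/* Hypothetical example: pack the non-zero band of a large 2-D array
 * into a contiguous 1-D array so traversal becomes small-stride.      */
#include <stddef.h>

#define N     4096      /* logical matrix dimension (assumed)            */
#define BAND  8         /* band width starting at the diagonal (assumed) */

/* Large-stride access: consecutive rows are ~N*sizeof(double) bytes apart,
 * so each row visit incurs cache misses beyond the prefetch trigger
 * threshold.                                                             */
double sum_band_2d(const double src[N][N])
{
    double sum = 0.0;
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < BAND; j++)
            sum += src[i][(i + j) % N];   /* band wrapped for simplicity */
    return sum;
}

/* Pack the band into a dense 1-D array once...                          */
void pack_band(const double src[N][N], double packed[N * BAND])
{
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < BAND; j++)
            packed[i * BAND + j] = src[i][(i + j) % N];
}

/* ...so that the hot traversal is unit-stride, which the automatic
 * hardware prefetcher can follow.                                       */
double sum_band_packed(const double packed[N * BAND])
{
    double sum = 0.0;
    for (size_t i = 0; i < N * BAND; i++)
        sum += packed[i];
    return sum;
}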
Example of Effective Latency Reduction with H/W Prefetch
Consider the situation in which an array is populated with data
corresponding to a constant-access-stride, circular pointer-chasing
sequence (see Example 6-2). The potential of the automatic hardware
prefetching mechanism to reduce the effective latency of fetching a cache
line from memory can be illustrated by varying the access stride between
64 bytes and the trigger threshold distance of hardware prefetch when
populating the array for circular pointer chasing.
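The C sketch below shows one way such an array can be populated; it is a
minimal illustration in the spirit of Example 6-2, not a verbatim copy of
it, and the names (buf, stride, build_chain) are assumptions. Each element
stores a pointer to the element 'stride' bytes ahead, and the last element
wraps back to the first, forming a circular chain.

#include <stddef.h>

/* Assumes stride is a multiple of sizeof(char *) and stride <= size.   */
char **build_chain(char *buf, size_t size, size_t stride)
{
    size_t i;
    /* Link element i to element i + stride; 'stride' is what the
     * experiment varies between 64 bytes and the hardware-prefetch
     * trigger threshold distance.                                      */
    for (i = 0; i + stride < size; i += stride)
        *(char **)(buf + i) = buf + i + stride;
    *(char **)(buf + i) = buf;            /* close the circular chain   */
    return (char **)buf;
}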
The effective latency reduction for several microarchitecture
implementations is shown in Figure 6-1. For a constant-stride access
pattern, the benefit of the automatic hardware prefetcher begins at half
the trigger threshold distance and reaches its maximum when the
cache-miss stride is 64 bytes.
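As a rough illustration of how effective latency can be observed for a
given stride, the following sketch walks the circular chain and divides
the elapsed cycles by the number of loads. Because each load depends on
the previous one, the loads are serialized and the quotient approximates
the effective load-to-use latency. This is an assumed measurement harness,
not a procedure from this manual; it uses GCC-style inline assembly for
RDTSC, and the names (rdtsc, measure_latency, n_iter) are illustrative.

static inline unsigned long long rdtsc(void)
{
    unsigned int lo, hi;
    __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
    return ((unsigned long long)hi << 32) | lo;
}

double measure_latency(char **head, long n_iter)
{
    char ** volatile p = head;            /* volatile keeps the loop live */
    unsigned long long start = rdtsc();
    for (long i = 0; i < n_iter; i++)
        p = (char **)*p;                  /* serialized dependent loads   */
    unsigned long long stop = rdtsc();
    return (double)(stop - start) / (double)n_iter;
}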