IA-32 Intel® Architecture Optimization
Conserve Bus Bandwidth
In a multi-threading environment, bus bandwidth may be shared by
memory traffic originating from multiple bus agents (these agents can
be several logical processors and/or several processor cores). Conserving
bus bandwidth can improve processor scaling performance. Effective bus
bandwidth also typically decreases when there are many large-stride
cache misses; reducing large-stride cache misses (and DTLB misses)
alleviates this loss of bandwidth.
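As an illustrative sketch (not from the manual; array dimensions and function names are assumptions), the two traversals below touch the same data but use the bus very differently: the row-major loop consumes each fetched cache line fully, while the column-major loop strides COLS*8 bytes per access, touching a new cache line on every reference and wasting most of each line fetched over the bus.

```c
#include <stddef.h>

#define ROWS 1024
#define COLS 1024

/* Unit-stride traversal: consecutive elements share cache lines,
 * so each 64-byte line fetched over the bus is fully used. */
double sum_row_major(double a[ROWS][COLS]) {
    double s = 0.0;
    for (size_t i = 0; i < ROWS; i++)
        for (size_t j = 0; j < COLS; j++)
            s += a[i][j];
    return s;
}

/* Large-stride traversal: each access lands COLS * sizeof(double)
 * bytes away, touching a new cache line every time (and crossing
 * pages frequently, pressuring the DTLB), so most of each fetched
 * line is wasted bus bandwidth. */
double sum_col_major(double a[ROWS][COLS]) {
    double s = 0.0;
    for (size_t j = 0; j < COLS; j++)
        for (size_t i = 0; i < ROWS; i++)
            s += a[i][j];
    return s;
}
```

Both functions compute the same result; only the memory-access pattern, and therefore the bus traffic, differs.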
One way to conserve available bus command bandwidth is to
improve the locality of code and data. Improving the locality of data
reduces the number of cache line evictions and requests to fetch data;
it also reduces the number of instruction fetches from system memory.
User/Source Coding Rule 27. (M impact, H generality) Improve data and
code locality to conserve bus command bandwidth.
Using a compiler that supports profile-guided optimization can
improve code locality by keeping frequently used code paths in the
cache, which reduces instruction fetches. Loop blocking can also
improve data locality.
Other locality enhancement techniques (see “Memory Optimization
Using Prefetch” in Chapter 6) can also be applied in a multi-threading
environment to conserve bus bandwidth.
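A hedged sketch of software prefetching in that spirit, using the GCC/Clang `__builtin_prefetch` builtin rather than writing the PREFETCH instruction directly (on x86 the compiler lowers it to a PREFETCHh instruction); the function name and the prefetch distance of 16 elements are illustrative assumptions, not tuned values:

```c
#include <stddef.h>

/* Sum an array while issuing a prefetch hint a fixed distance
 * ahead of the current access, overlapping memory latency with
 * computation. The hint is advisory: incorrect distances cost
 * little, but well-chosen ones hide miss latency. */
double sum_with_prefetch(const double *a, size_t n) {
    const size_t dist = 16;  /* illustrative prefetch distance */
    double s = 0.0;
    for (size_t i = 0; i < n; i++) {
        if (i + dist < n)
            __builtin_prefetch(&a[i + dist], 0 /* read */, 3 /* high temporal locality */);
        s += a[i];
    }
    return s;
}
```

As the following paragraph notes, prefetches like this compete for bus queue slots, so they help least exactly when the bus is near saturation.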
Because the system bus is shared between many bus agents (logical
processors or processor cores), software tuning should recognize
symptoms of the bus approaching saturation. One useful technique is to
examine the queue depth of bus read traffic (see “Workload
Characterization” in Appendix A). When the bus queue depth is high,
locality enhancement to improve cache utilization will benefit
performance more than other techniques, such as inserting more
software prefetches or masking memory latency with overlapping bus
reads.