CACHE SUBSYSTEMS
The controller must also decide which block of the cache to overwrite when a block fetch is executed. There are several locations, rather than just one, in which the data from the main memory could be written. Three common approaches for choosing the block to overwrite are as follows:
• Overwriting the least recently accessed block. This approach requires the controller to maintain least-recently-used (LRU) bits that indicate the block to overwrite. These bits must be updated by the cache controller on each cache transaction.
• Overwriting the blocks in sequential order (FIFO).
• Overwriting a block chosen at random.
The performance of each strategy depends upon program behavior. Any of the three strategies is adequate for most set-associative cache designs; however, the LRU algorithm tends to provide the highest hit rate.
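The LRU policy can be sketched in software terms. The following toy model (hypothetical names throughout; a real controller implements this with LRU bits in hardware) tracks one set of a set-associative cache and evicts the least recently used block when the set is full:

```python
from collections import OrderedDict

class LRUSet:
    """One set of a set-associative cache with LRU replacement.

    'ways' is the associativity; the insertion order of the
    OrderedDict encodes recency, oldest first.
    """
    def __init__(self, ways):
        self.ways = ways
        self.blocks = OrderedDict()  # tag -> cached data

    def access(self, tag):
        """Return True on a hit. On a miss, fetch the block and,
        if the set is full, evict the least recently used one."""
        if tag in self.blocks:
            self.blocks.move_to_end(tag)      # update the LRU order
            return True
        if len(self.blocks) >= self.ways:
            self.blocks.popitem(last=False)   # evict the LRU block
        self.blocks[tag] = None               # block fetch from main memory
        return False
```

A FIFO policy would simply skip the `move_to_end` call, overwriting blocks in the order they were fetched regardless of later accesses.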
7.3 CACHE UPDATING
In a cache system, two copies of the same data can exist at once, one in the cache and one in the main memory. If one copy is altered and the other is not, two different sets of data become associated with the same address. A cache must contain an updating system to prevent old data values (called stale data) from being used. Otherwise, the situation shown in Figure 7-5 could occur. The following sections describe the write-through and write-back methods of updating the main memory during a write operation to the cache.
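The stale-data hazard can be shown with a toy model (hypothetical addresses and values): a write that updates only the cache leaves main memory holding an old value for the same address.

```python
main_memory = {0x100: 5}   # main memory holds the original value
cache = {}

# A read brings the block into the cache.
cache[0x100] = main_memory[0x100]

# A write that updates only the cache...
cache[0x100] = 7

# ...leaves main memory stale: two different values
# are now associated with the same address.
assert cache[0x100] != main_memory[0x100]
```

An updating policy such as write-through or write-back exists precisely to prevent this divergence.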
7.3.1 Write-Through System
In a write-through system, the controller copies write data to the main memory immediately after it is written to the cache. The result is that the main memory always contains valid data, so any block in the cache can be overwritten immediately without data loss. The write-through approach is simple, but performance is decreased by the time required to write the data to main memory and by the increased bus traffic (which is significant in multiprocessing systems).
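A minimal software sketch of the write-through behavior (hypothetical class and method names; real controllers do this in hardware) makes the invariant explicit: after every write, main memory already holds the current value.

```python
class WriteThroughCache:
    """Write-through sketch: every write is copied to main memory
    immediately, so main memory always contains valid data."""
    def __init__(self, memory):
        self.memory = memory   # dict: address -> value (main memory)
        self.cache = {}

    def write(self, addr, value):
        self.cache[addr] = value
        self.memory[addr] = value   # immediate main-memory update

    def read(self, addr):
        if addr not in self.cache:              # miss: block fetch
            self.cache[addr] = self.memory[addr]
        return self.cache[addr]
```

Because memory is always valid, any cached entry here could be discarded at any time without loss, at the cost of a memory write on every store.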
7.3.2 Buffered Write-Through System
Buffered write-through is a variation of the write-through technique. In a buffered write-through system, write accesses to the main memory are buffered, so that the processor can begin a new cycle before the write cycle to the main memory is completed. If a write access is followed by a read access that is a cache hit, the read access can be performed while the main memory is being updated. The performance penalty of the write-through system is thus avoided. However, because usually only a single write access can be buffered, two consecutive writes to the main memory will require the processor to wait. A write followed by a read miss will also require the processor to wait.
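The single-entry buffer and both stall cases can be sketched as follows (hypothetical names; `stalls` counts the processor waits described above):

```python
class BufferedWriteThrough:
    """Buffered write-through sketch with a single-entry write buffer.
    A buffered write lets the processor proceed; a second write while
    the buffer is occupied, or a read miss, must wait for the drain."""
    def __init__(self, memory):
        self.memory = memory
        self.cache = {}
        self.buffer = None   # pending (addr, value), or None
        self.stalls = 0      # count of processor waits

    def _drain(self):
        if self.buffer is not None:
            addr, value = self.buffer
            self.memory[addr] = value
            self.buffer = None

    def write(self, addr, value):
        if self.buffer is not None:   # buffer occupied: processor waits
            self.stalls += 1
            self._drain()
        self.cache[addr] = value
        self.buffer = (addr, value)   # memory update proceeds in background

    def read(self, addr):
        if addr in self.cache:        # hit overlaps the buffered write
            return self.cache[addr]
        self.stalls += 1              # read miss: wait for the drain
        self._drain()
        self.cache[addr] = self.memory[addr]
        return self.cache[addr]
```

A write followed by a cache-hit read costs no stall, while back-to-back writes and write-then-read-miss sequences each incur one, matching the two wait cases noted in the text.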