CACHE SUBSYSTEMS
In a system such as the one shown in Figure 7-3, a request for the byte of data at address 12FFE8H in the main memory is handled as follows (a code sketch of this sequence appears after the list):
1. The cache controller determines the cache location from the 14 most significant bits of the index field (FFE8H).
2. The controller compares the tag field (12H) with the tag stored at location FFE8H in the cache.
3. If the tag matches, the processor reads the least significant byte from the data in the cache.
4. If the tag does not match, the controller fetches the 4-byte block at address 12FFE8H in the main memory and loads it into location FFE8H of the cache, replacing the current block. The controller must also change the tag stored at location FFE8H to 12H. The processor then reads the least significant byte from the new block.
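The following C fragment is a minimal sketch of these four steps for a direct mapped lookup. The structure layout, the array and function names, and the read_main_memory stub are assumptions made for the example; the text specifies only the field widths, the 4-byte block size, and the 14-bit location field (which implies 16K block locations).

#include <stdint.h>
#include <string.h>

#define NUM_BLOCKS (1u << 14)       /* 14-bit location field: 16K blocks of 4 bytes */

/* Hypothetical model of one direct mapped cache location (not from the text). */
struct cache_line {
    uint8_t tag;                    /* 8-bit tag field, e.g. 12H              */
    uint8_t valid;                  /* nonzero once the block holds real data */
    uint8_t data[4];                /* the 4-byte block                       */
};

static struct cache_line cache[NUM_BLOCKS];

/* Stand-in for a main-memory read of one aligned 4-byte block. */
static void read_main_memory(uint32_t block_addr, uint8_t out[4])
{
    (void)block_addr;
    memset(out, 0, 4);              /* placeholder data */
}

uint8_t read_byte(uint32_t addr)    /* addr is a 24-bit address, e.g. 0x12FFE8 */
{
    uint8_t  tag      = (addr >> 16) & 0xFF;    /* tag field: 12H               */
    uint32_t index    =  addr & 0xFFFF;         /* index field: FFE8H           */
    uint32_t location =  index >> 2;            /* 14 MSBs of the index field   */
    uint32_t byte_sel =  addr & 0x3;            /* byte within the 4-byte block */

    struct cache_line *line = &cache[location];

    /* Steps 2 and 3: compare the stored tag; on a match, read from the cache. */
    if (line->valid && line->tag == tag)
        return line->data[byte_sel];

    /* Step 4: on a mismatch, fetch the block, replace the current one,
       and change the stored tag before reading the byte.                 */
    read_main_memory(addr & ~0x3u, line->data);
    line->tag   = tag;
    line->valid = 1;
    return line->data[byte_sel];
}

Calling read_byte(0x12FFE8) walks through steps 1 to 4 exactly: a single tag comparison decides between the hit path and the block replacement.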
Any address whose index field is FFE8H can be loaded into the cache only at location FFE8H; therefore, the cache controller makes only one comparison to determine if the requested word is in the cache. Note that the address comparison requires only the tag field of the address. The index field need not be compared because anything stored in cache location FFE8H has an index field of FFE8H. The direct mapped cache uses direct addressing to eliminate all but one comparison operation.
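As a concrete breakdown of the fields involved (a minimal sketch under the same field-width assumptions as above), the address 12FFE8H splits up as follows:

#include <stdio.h>

int main(void)
{
    unsigned long addr = 0x12FFE8;                       /* 24-bit byte address                 */
    printf("tag      = %02lXH\n", (addr >> 16) & 0xFF);  /* 12H, the only field compared        */
    printf("index    = %04lXH\n",  addr & 0xFFFF);       /* FFE8H, selects the cache location   */
    printf("location = %04lXH\n", (addr & 0xFFFF) >> 2); /* 3FFAH, the 14 MSBs of the index     */
    printf("byte     = %lu\n",     addr & 0x3);          /* 0, the least significant byte       */
    return 0;
}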
The direct mapped cache, however, is not without drawbacks. If the processor in the example above makes frequent requests for locations 12FFE8H and 44FFE8H, the controller must access the main memory frequently, because only one of these locations can be in the cache at a time. Fortunately, this sort of program behavior is infrequent enough that the direct mapped cache, although offering poorer performance than a fully associative cache, still provides acceptable performance at a much lower cost.
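That worst case can be seen in a minimal sketch that tracks only the tag stored at location FFE8H (the loop and miss counting are illustrative, not from the text): because 12FFE8H and 44FFE8H share the index FFE8H but carry different tags, alternating between them misses on every access.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t  stored_tag = 0xFF;     /* tag currently held at location FFE8H (none yet) */
    unsigned misses = 0;

    /* Alternate between the two addresses that collide on index FFE8H. */
    uint32_t pattern[] = { 0x12FFE8, 0x44FFE8 };

    for (unsigned i = 0; i < 10; i++) {
        uint32_t addr = pattern[i % 2];
        uint8_t  tag  = (addr >> 16) & 0xFF;   /* 12H or 44H            */

        if (tag != stored_tag) {               /* tag mismatch: the controller must
                                                  go to the main memory */
            misses++;
            stored_tag = tag;                  /* block at FFE8H is replaced */
        }
    }
    printf("%u misses in 10 accesses\n", misses);   /* prints 10: every access misses */
    return 0;
}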
7.2.3 Set Associative Cache
The set associative cache is a compromise between the extremes of fully associative and direct mapped caches. This type of cache has several sets (or groups) of direct mapped blocks that operate as several direct mapped caches in parallel. For each cache index, there are several block locations allowed, one in each set. A block of data arriving from the main memory can go into a particular block location of any set. Figure 7-4 shows the organization for a 2-way set associative cache.
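A minimal sketch of the lookup in such a 2-way organization follows, using the field widths worked out in the next paragraph (a 15-bit index field and a 9-bit tag). The structure and function names, the memory stub, and the choice of which set receives an incoming block are illustrative assumptions rather than anything specified in the text.

#include <stdint.h>
#include <string.h>

#define NUM_SETS      2              /* 2-way: two direct mapped sets operating in parallel */
#define NUM_LOCATIONS (1u << 13)     /* half as many locations as the direct mapped example */

/* Hypothetical model of one block location (not from the text). */
struct block_loc {
    uint16_t tag;                    /* 9-bit tag field (see the field widths below) */
    uint8_t  valid;
    uint8_t  data[4];                /* 4-byte block */
};

static struct block_loc sets[NUM_SETS][NUM_LOCATIONS];

/* Stand-in for a main-memory read of one aligned 4-byte block. */
static void read_main_memory(uint32_t block_addr, uint8_t out[4])
{
    (void)block_addr;
    memset(out, 0, 4);
}

uint8_t read_byte_2way(uint32_t addr)
{
    uint16_t tag      = (addr >> 15) & 0x1FF;   /* 9-bit tag field              */
    uint32_t index    =  addr & 0x7FFF;         /* 15-bit index field           */
    uint32_t location =  index >> 2;            /* 13 MSBs select the location  */
    uint32_t byte_sel =  addr & 0x3;            /* byte within the 4-byte block */

    /* For each cache index there is one allowed location in every set;
       the controller checks them all, and a tag match in any set is a hit. */
    for (int s = 0; s < NUM_SETS; s++) {
        struct block_loc *line = &sets[s][location];
        if (line->valid && line->tag == tag)
            return line->data[byte_sel];
    }

    /* Miss: the arriving block may go into the matching location of any set;
       set 0 is chosen arbitrarily here, since the text names no replacement policy. */
    struct block_loc *victim = &sets[0][location];
    read_main_memory(addr & ~0x3u, victim->data);
    victim->tag   = tag;
    victim->valid = 1;
    return victim->data[byte_sel];
}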
With the same amount of memory as the direct mapped cache of the previous example, the set associative cache contains half as many locations, but allows two blocks for each location. The index field is thus reduced to 15 bits, and the extra bit becomes part of the tag field.
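As a rough check of that bookkeeping (assuming, as in the earlier example, a 24-bit address and 4-byte blocks): the direct mapped cache has 16K block locations selected by 14 bits, so the address splits into an 8-bit tag and a 16-bit index (14 bits of location plus 2 bits of byte select). Pairing the same 16K blocks into 8K locations of two blocks each removes one bit from the location, shrinking the index field to 15 bits; that bit moves to the tag, which grows from 8 to 9 bits, and tag plus index still account for the full 24-bit address.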