adaptive performance optimization (APO)
Each Ethernet MAC incorporates APO logic, which can be enabled on an individual-port basis. When enabled,
the MAC uses transmission pacing to enhance performance when connected to networks using other
transmit-pacing-capable MACs. Adaptive performance pacing introduces delays into the normal transmission of
frames, spacing transmission attempts between stations and reducing the probability of collisions during
heavy traffic (as indicated by frame deferrals and collisions), thereby increasing the chance of successful
transmission.
When a frame is deferred or suffers a single collision, multiple collisions, or excessive collisions, the pacing
counter is loaded with an initial value of 31. When a frame is transmitted successfully (without a deferral, single
collision, multiple collisions, or excessive collisions), the pacing counter is decremented by 1, down to a minimum of 0.
With pacing enabled, a new frame is permitted to attempt transmission immediately [after one interpacket
gap (IPG)] only if the pacing counter is 0. If the pacing counter is not 0, the frame is delayed by the pacing
delay (approximately four interframe-gap delays).
NOTE:
APO affects only the IPG preceding the first attempt at transmitting a frame. It does not affect the
backoff algorithm for retransmitted frames.
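The pacing-counter rule can be summarized in a short C sketch. The type, constant, and function names below (apo_state_t, PACING_RELOAD, apo_update, apo_first_attempt_ipg) are illustrative assumptions, not TNETX4090 registers or internal signals:

#include <stdbool.h>
#include <stdint.h>

#define PACING_RELOAD 31u          /* value loaded on a deferral or any collision */

typedef struct {
    bool    enabled;               /* APO enabled for this port */
    uint8_t pacing_counter;        /* 0..31 */
} apo_state_t;

/* Called after each frame transmission completes. */
void apo_update(apo_state_t *s, bool deferred, unsigned collisions)
{
    if (deferred || collisions > 0)
        s->pacing_counter = PACING_RELOAD;     /* reload to 31 */
    else if (s->pacing_counter > 0)
        s->pacing_counter--;                   /* clean transmission: count down toward 0 */
}

/* Number of interframe gaps to wait before the FIRST attempt at the next
 * frame; the backoff of retransmissions is unaffected (see the note above). */
unsigned apo_first_attempt_ipg(const apo_state_t *s)
{
    if (!s->enabled || s->pacing_counter == 0)
        return 1;                              /* normal: one IPG */
    return 4;                                  /* paced: approximately four IPG delays */
}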
interframe gap enforcement
The measurement reference for the 96-bit-time interpacket gap changes depending on frame traffic
conditions. If a frame is transmitted successfully (without a collision), the 96 bit times are measured from
Mxx_TXEN. If the frame suffered a collision, the 96 bit times are measured from Mxx_CRS.
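Expressed as a small C sketch (the enum and function names are illustrative assumptions), the gap-timer reference is selected by the outcome of the previous transmission:

typedef enum { TX_OK, TX_COLLIDED } tx_result_t;
typedef enum { REF_MXX_TXEN, REF_MXX_CRS } ipg_ref_t;

#define IPG_BIT_TIMES 96u

/* Select the edge from which the 96-bit-time interpacket gap is measured. */
ipg_ref_t ipg_reference(tx_result_t last_tx)
{
    return (last_tx == TX_OK) ? REF_MXX_TXEN : REF_MXX_CRS;
}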
backoff
The device implements the IEEE Std 802.3 binary exponential backoff algorithm.
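A minimal C sketch of the truncated binary exponential backoff defined by IEEE Std 802.3 is shown below. The constants follow the standard (one slot time = 512 bit times, exponent capped at 10, 16 attempts); rand() stands in for the device's internal pseudo-random source, and the function name is an assumption:

#include <stdlib.h>

#define SLOT_TIME_BITS 512u        /* one slot time = 512 bit times */
#define BACKOFF_LIMIT  10u         /* backoff exponent is capped at 10 */
#define ATTEMPT_LIMIT  16u         /* frame aborted after 16 attempts */

/* Backoff delay in bit times after the nth consecutive collision of a frame
 * (n = 1 for the first collision); returns -1 for excessive collisions. */
long backoff_bit_times(unsigned n)
{
    if (n >= ATTEMPT_LIMIT)
        return -1;                                         /* excessive collisions */

    unsigned k = (n < BACKOFF_LIMIT) ? n : BACKOFF_LIMIT;
    unsigned long r = (unsigned long)rand() % (1ul << k);  /* uniform in 0 .. 2^k - 1 */
    return (long)(r * SLOT_TIME_BITS);
}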
receive versus transmit priority
The queue manager prioritizes receive and transmit traffic as follows (see the selection sketch after this list):
•  Highest priority is given to frames that are currently being transmitted. This ensures that transmitting frames
   do not underrun.
•  Next priority is given to frames that are received, provided the free-buffer stack is not empty. This ensures
   that received frames are not dropped unless it is impossible to receive them.
•  Lowest priority is given to frames that are queued for transmission but have not yet started to transmit.
   These frames are promoted to the highest priority only when there is spare capacity on the memory bus.
The NM port receives the lowest priority to prevent frame loss during busy periods.
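A minimal C sketch of this priority order follows, assuming illustrative names (qm_requests_t, qm_select) that do not correspond to internal TNETX4090 structures:

#include <stdbool.h>

typedef enum {
    PRIO_ACTIVE_TX = 0,    /* frame currently being transmitted (highest) */
    PRIO_RX        = 1,    /* received frame, free-buffer stack not empty */
    PRIO_QUEUED_TX = 2     /* queued for transmission, not yet started (lowest) */
} qm_priority_t;

typedef struct {
    bool active_tx_pending;    /* data needed for a frame already being transmitted */
    bool rx_pending;           /* received frame waiting to be written to memory */
    bool free_stack_empty;     /* free-buffer stack is empty */
    bool queued_tx_pending;    /* frame queued for transmission, not yet started */
} qm_requests_t;

/* Select the next memory-bus transfer; returns -1 if nothing is eligible. */
int qm_select(const qm_requests_t *r)
{
    if (r->active_tx_pending)
        return PRIO_ACTIVE_TX;                  /* avoid transmit underrun */
    if (r->rx_pending && !r->free_stack_empty)
        return PRIO_RX;                         /* avoid dropping received frames */
    if (r->queued_tx_pending)
        return PRIO_QUEUED_TX;                  /* use spare memory-bus capacity */
    return -1;
}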
The memory bus has enough bandwidth to support the two highest priorities. The untransmitted frame queues
grow when frames received on different ports require transmission on the same port(s) and when frames are
repeatedly received on ports that are at a higher speed than the ports on which they are transmitted. This is likely
to be exacerbated by the reception of multicast frames, which typically require transmission on several ports.
When the backlog grows to such an extent that the free-buffer stack is nearly empty, flow control (if it has been
enabled) is initiated to limit further frame reception.
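The flow-control trigger can be sketched as follows; the watermark name and value are assumptions for illustration only (the text states only that the free-buffer stack is nearly empty):

#include <stdbool.h>

#define NEARLY_EMPTY_WATERMARK 8u   /* illustrative "nearly empty" threshold */

typedef struct {
    unsigned free_buffers;          /* entries remaining on the free-buffer stack */
    bool     flow_control_enabled;  /* flow control enabled by configuration */
} buffer_pool_t;

/* True when flow control should be initiated to limit further frame reception. */
bool should_limit_reception(const buffer_pool_t *p)
{
    return p->flow_control_enabled &&
           p->free_buffers <= NEARLY_EMPTY_WATERMARK;
}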