Congestion, Link Efficiency, Traffic Policing and Shaping

This section covers three main quality of service concepts: congestion avoidance, traffic policing and shaping, and link efficiency mechanisms.

Congestion Avoidance:

Tail Drop
Tail drop causes problems in the network because it is not “smart” about how it drops traffic. Once the hardware and software queues fill, the router simply drops every arriving packet regardless of application, destination, or need.

TCP global synchronization and TCP starvation are two problems that result from tail drop.

  • TCP Global Synchronization — When tail drop occurs, all TCP-based applications go into slow start, bandwidth use drops, and the queues clear. Once the queues clear, flows increase their TCP send windows until packets start being dropped again. The result is waves of peak utilization followed by sudden drops, as all flows ramp up and back off in lockstep.
  • TCP Starvation — TCP behaves well in the network by backing off when packets are dropped (falling back into slow start), but UDP does not. When TCP flows slow down in response to drops, UDP flows keep sending, so the queues fill with UDP packets and TCP is starved of bandwidth.

Random Early Detection (RED)
Statistically, RED drops more packets from aggressive flows than from slower flows, and only the flows whose packets are dropped slow down, which avoids global synchronization.

RED uses the average queue depth, rather than the instantaneous depth, to decide whether to drop packets; the average changes more slowly than the actual depth, so short bursts do not immediately trigger drops.

RED has three configuration parameters: minimum threshold, maximum threshold, and mark probability denominator (MPD).

Once the average depth reaches the minimum threshold, packets begin to be dropped; once it exceeds the maximum threshold, the behavior is effectively tail drop. Everything in between is governed by the mark probability denominator.

The mark probability denominator sets the maximum percentage of packets discarded by RED. IOS calculates the maximum percentage using the formula 1/MPD. For instance, an MPD of 10 yields a calculated value of 1/10, meaning the maximum discard rate is 10 percent.

The following table is from page 425 of the QoS Exam Certification Guide and shows how the minimum threshold, maximum threshold and queue depth all interact.

Average Queue Depth Versus Thresholds               Action                                                      Name
Average < Minimum Threshold                         No packets dropped.                                         No Drop
Minimum Threshold < Average < Maximum Threshold     A percentage of packets is dropped; the percentage          Random Drop
                                                    grows linearly as the average depth grows.
Average > Maximum Threshold                         All new packets are dropped, like tail drop.                Full Drop
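
To make the random-drop region concrete, here is a hypothetical worked calculation using the precedence 0 defaults shown later (minimum 20, maximum 40, MPD 10). Between the thresholds, the drop percentage ramps linearly from 0 up to the 1/MPD maximum:

    Drop percentage = (average depth - minimum) / (maximum - minimum) * (1/MPD)
    At an average depth of 30: (30 - 20) / (40 - 20) * 10% = 5%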

Weighted Random Early Detection
WRED behaves the same as RED, except that WRED differentiates traffic by IP precedence or DSCP value. The ONT book and the QoS Exam book both cover the same WRED example, just at different levels of detail. Personally, if you understand the min and max threshold concepts from the chart above, the following charts will explain everything. When an interface starts to become congested, WRED discards lower-priority traffic with a higher probability. By default in IOS, lower-precedence flows have smaller minimum thresholds and will therefore begin dropping packets before higher-precedence flows. As the average queue depth passes a threshold, for instance 22 packets for precedence 1 traffic, packets with precedence 0 and 1 are both subject to random drops.

The tables below are taken from pages 430 and 431 of the QoS Exam Certification Guide.

This table is for IP Precedence based WRED defaults.

Precedence Minimum Threshold Maximum Threshold Mark Probability Denominator Calculated Maximum Percent Discarded
0 20 40 10 10%
1 22 40 10 10%
2 24 40 10 10%
3 26 40 10 10%
4 28 40 10 10%
5 31 40 10 10%
6 33 40 10 10%
7 35 40 10 10%
RSVP 37 40 10 10%

This table is for IOS DSCP-Based WRED defaults.

DSCP Minimum Threshold Maximum Threshold Mark Probability Denominator Calculated Maximum Percent Discarded
AF11, AF21, AF31, AF41 33 40 10 10%
AF12, AF22, AF32, AF42 28 40 10 10%
AF13, AF23, AF33, AF43 24 40 10 10%
EF 37 40 10 10%
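
Assuming a router where WRED is already enabled, one way to inspect the active profile and thresholds is the show queueing random-detect command:

R1# show queueing random-detect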

Class-Based Weighted Random Early Detection (CBWRED)
CBWRED is configured by applying WRED to CBWFQ. Remember, by default CBWFQ performs tail drop. WRED is based on IP precedence by default, as seen in the chart above, with eight pre-defined profiles. To me it is a joy to be able to look at the following configuration and understand its meaning. This is how to configure CBWRED, from page 159 of the ONT book. Notice that the thresholds largely mirror the IOS defaults from the chart above; tying it all together makes the configuration easier to understand.

class-map Business
  match ip precedence 3 4
class-map Bulk
  match ip precedence 1 2
!
policy-map Enterprise
  class Business
   bandwidth percent 30
   random-detect
   random-detect precedence 3 26 40 10
   random-detect precedence 4 28 40 10
  class Bulk
   bandwidth percent 20
   random-detect
   random-detect precedence 1 22 36 10
   random-detect precedence 2 24 36 10
  class class-default
   fair-queue
   random-detect

And the same configuration using DSCP.

class-map Business
  match ip dscp af21 af22 af23 cs2
class-map Bulk
  match ip dscp af11 af12 af13 cs1
!
policy-map Enterprise
  class Business
   bandwidth percent 30
   random-detect dscp-based
   random-detect dscp af21 32 40 10
   random-detect dscp af22 28 40 10
   random-detect dscp af23 24 40 10
   random-detect dscp cs2   22 40 10
  class Bulk
   bandwidth percent 20
   random-detect dscp-based
   random-detect dscp af11 32 36 10
   random-detect dscp af12 28 36 10
   random-detect dscp af13 24 36 10
   random-detect dscp cs1   22 36 10
  class class-default
   fair-queue
   random-detect dscp-based

To verify the configuration, use the show policy-map interface command.
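
Remember that the policy must first be attached to an interface; assuming Serial0/0 carries the traffic:

interface Serial0/0
 service-policy output Enterprise

Then verify:

R1# show policy-map interface Serial0/0

The output lists each class along with its WRED thresholds and drop counters.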

Traffic Policing and Shaping

Providers commonly police traffic while subscribers commonly shape traffic.

Policing restricts the amount of bandwidth into or out of an interface. Traffic policing discards or re-marks excess packets so that the defined overall rate is not exceeded. Policing is commonly needed when the physical clock rate exceeds the rate in the traffic contract, a situation called subrate access. Policing protects a network from being overrun by traffic, and is typically used to:

  • Limit the rate of traffic to less than what the interface physically supports (subrate access).
  • Rate limit traffic classes.
  • Re-mark traffic to a lower class.
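
As a minimal sketch (the policy name, interface, rate, and burst size are hypothetical), class-based policing uses the MQC police command; this example transmits traffic conforming to 128 kbps with a 16,000-byte burst and drops the excess:

policy-map Police-Subrate
 class class-default
  police 128000 16000 conform-action transmit exceed-action drop
!
interface Serial0/0
 service-policy input Police-Subrate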

Traffic shaping applies only to outbound traffic. Shaping buffers excess packets in a queue and sends them later at the shaping rate, which introduces variable delay and jitter. Shaping is typically used to:

  • Shape traffic to match the provider's policing rate.
  • Slow traffic to a congested destination.
  • Send separate traffic classes at different rates.
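
A corresponding minimal shaping sketch (again with hypothetical names and rate), using the MQC shape average command to shape all outbound traffic to 128 kbps:

policy-map Shape-Out
 class class-default
  shape average 128000
!
interface Serial0/0
 service-policy output Shape-Out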

Shaping Terminology from page 346 of the QoS Exam Certification Guide.

  • Tc — Committed time interval, measured in milliseconds, over which the committed burst (Bc) can be sent.
    Tc = Bc/CIR or Tc = Bc/Shaped Rate
  • Bc — Committed burst size, measured in bits. The amount of traffic that can be sent over time Tc. Commonly defined in the traffic contract.
    Bc = Tc * CIR or Bc = Tc * Shaped Rate
  • CIR — Committed information rate, in bits per second, defined in the traffic contract.
  • Shaped Rate — The configured rate, in bits per second. This can be equal to or greater than the CIR.
  • Be — Excess burst size, in bits. The number of bits greater than Bc that can be sent after a period of inactivity.

CIR (bits per second) = Bc (bits) / Tc (seconds); note that Tc must be converted from milliseconds to seconds for this formula.
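
As a hypothetical worked example of these formulas, assume a shaped rate of 64,000 bps and a Tc of 125 ms (0.125 seconds):

    Bc = Tc * CIR = 0.125 * 64,000 = 8,000 bits

The shaper may send at most 8,000 bits in each interval; eight 125 ms intervals per second at 8,000 bits each yields the 64,000 bps shaped rate.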

Link Efficiency Mechanisms

Compression is worth using only when the bandwidth gained outweighs the delay added by compression processing; otherwise you should not use it.

Layer 2 Payload Compression

  • Performed on a link-by-link basis.
  • Compresses the entire layer 2 payload.
  • Increases processing delay due to computation.
  • Decreases serialization delay because packets are smaller.
  • Increases available bandwidth because packets are smaller.
  • Can be performed in both hardware and software; software compression is not recommended.

Header Compression
Generally, the headers of packets in a single flow are identical, so instead of sending the same header over and over, the sender transmits a short index that refers to the entire header. On the receiving end, the index is looked up and the real header is rejoined with the packet before it is forwarded.

  • Performed on a link-by-link basis.
  • Header compression is not CPU intensive.
  • The payload is not compressed, only the headers.
  • Can be used with TCP or RTP.
  • Best when dealing with small packets, because the header makes up a proportionally larger share of the packet.
  • RTP header compression is good for voice because voice uses smaller packets.

Header Compression Configuration
To configure header compression, the interface command ip tcp header-compression must be configured on both ends of a link.

R1(config-if)#do sh run int s0/0
...output omitted...
interface Serial0/0
 bandwidth 800
 ip address 192.168.112.1 255.255.255.0
 ip tcp header-compression
end
R2(config-if)#do sh run int s0/0        
...output omitted...
interface Serial0/0
 bandwidth 800
 ip address 192.168.112.2 255.255.255.0
 ip tcp header-compression
 clock rate 800000
end
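
RTP header compression works the same way; assuming a link carrying voice traffic, the analogous interface command is ip rtp header-compression, again configured on both ends of the link. Either flavor can be verified with show ip tcp header-compression or show ip rtp header-compression, which display counts of compressed and uncompressed packets.

R1(config-if)#ip rtp header-compression
R1#show ip rtp header-compression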

Link Fragmentation and Interleaving (LFI)
LFI tries to lower delay by attacking the problem of serialization delay. Serialization delay is the time required to clock a frame onto a physical link, which is not trivial on slower links such as 56 kbps circuits.
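
For example, clocking a 1500-byte frame onto a 56 kbps link takes (1500 * 8) / 56,000 = 0.214 seconds, roughly 214 ms, before any queuing delay is even counted.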

LFI reduces serialization delay by making sure large packets do not delay smaller packets. It fragments large packets and interleaves smaller packets between the fragments of the large packets being sent.

The following process for how LFI works is taken from page 477 of the QoS Exam Certification Guide. Please note the distinction between the words packet and frame, because it makes a difference in this description.

  • The router fragments the packet into smaller pieces.
  • The router adds the appropriate data-link headers and trailers, including any headers specifically needed for fragmentation support.
  • The length of the resulting frames (including data-link headers/trailers) does not exceed the fragmentation size configured.
  • The router adds these frames to the appropriate queue.
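
As a minimal sketch of one common LFI implementation, multilink PPP with interleaving (the interface numbers, addressing, and 10 ms fragment delay are assumptions, and the fragment delay keyword syntax varies slightly between IOS releases):

interface Multilink1
 ip address 192.168.113.1 255.255.255.0
 ppp multilink
 ppp multilink fragment delay 10
 ppp multilink interleave
 ppp multilink group 1
!
interface Serial0/1
 no ip address
 encapsulation ppp
 ppp multilink
 ppp multilink group 1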