Category Archives: QOS / COS

QOS Policies with DSCP

https://community.cisco.com/t5/routing/dscp-and-ip-precedence/td-p/1884944
There is no actual need to re-mark between IP Precedence and DSCP: the router simply reads different fields of the same ToS byte depending on whether IP Precedence or DSCP is in use.

IP Precedence is just the three high-order bits of the ToS byte; DSCP is the six high-order bits. You can read the next doc for detailed info

http://www.cisco.com/en/US/tech/tk543/tk757/technologies_tech_note09186a00800949f2.shtml

However, it is easier to agree on a common reference with your ISP for the QoS values, whether DSCP or IP Precedence. Finally, pay attention if you use tunneling or encryption over the ISP (e.g. GRE or IPsec).

In that case, you need the relevant configuration on your edge router (the qos pre-classify command) in order to get correct QoS prioritization over the ISP.

The default DSCP is 000 000. Class selector DSCPs are values that are backward compatible with IP precedence. When converting between IP precedence and DSCP, match the three most significant bits. In other words:

IP Prec 5 (101) maps to IP DSCP 101 000
ToS Byte

https://www.cisco.com/c/en/us/support/docs/quality-of-service-qos/qos-packet-marking/10103-dscpvalues.html

https://www.bytesolutions.com/dscp-tos-cos-presidence-conversion-chart/

Jitter output

Positive and Negative Jitter
If the conditions change (for instance, there is a sudden burst on the network and queues start to build on an interface) then this delay will increase. The transit time between two consecutive packets will not be the same anymore: the second packet will have to go through a longer queue, spending more time, and generating positive jitter.

Once this burst is over, the queue will progressively reduce, reversing the situation. Out of two consecutive packets, the second one will spend less time in the queues, and will therefore generate negative jitter.

The Good News and the Bad News
That being said, it is clear that positive jitter indicates the situation is getting worse, i.e. less favorable for the underlying application. Positive jitter is bad news: an indication that delay is increasing and the network has some level of congestion. On the other hand, negative jitter is a sign that the network is getting healthier: it is less busy and the queues are shrinking, so in that respect it is good news.

With this in perspective, it is clear that the bad jitter is the positive kind, while the good jitter is the negative kind. But for some applications, any jitter large enough can cause harm.

With the following set of values, you will be on your way for a meaningful interpretation of your data:

Percentage of packets that had positive jitter: estimates how many packets are actually introducing jitter. A large ratio of packets introducing jitter may not be a problem as long as the jitter introduced per packet remains low (see the next metric).
Average jitter per packet that had positive jitter: gives you an idea of how much jitter is introduced once positive jitter is experienced. A big jitter increase per packet means large latency dynamics in your network, which is not good.
Percentage of packets that had negative jitter: an estimate of how many packets it takes the network to compensate for the jitter. If this is much higher than the percentage of packets that had positive jitter, it may be a sign that your network is having a hard time absorbing traffic bursts. Generally speaking, the two should be within the same range.
Average jitter per packet that had negative jitter.
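The four metrics above can be computed from a list of one-way delays. A minimal sketch, assuming per-packet jitter is the difference between consecutive delays, as in the IP SLA UDP jitter operation:

```python
# Sketch (assumption: jitter per packet = difference between consecutive
# one-way delays, as in the IP SLA UDP jitter operation referenced above).
def jitter_stats(delays_ms):
    """Compute the four metrics above from a list of one-way delays (ms)."""
    jitters = [b - a for a, b in zip(delays_ms, delays_ms[1:])]
    pos = [j for j in jitters if j > 0]
    neg = [j for j in jitters if j < 0]
    n = len(jitters)
    return {
        "pct_positive": 100.0 * len(pos) / n,
        "avg_positive": sum(pos) / len(pos) if pos else 0.0,
        "pct_negative": 100.0 * len(neg) / n,
        "avg_negative": sum(neg) / len(neg) if neg else 0.0,
    }

# A burst builds the queue (positive jitter), then it drains (negative jitter):
stats = jitter_stats([10, 10, 14, 18, 16, 12, 10])
print(stats)
```

Note that a packet with zero jitter counts in neither bucket, which is why the two percentages need not sum to 100.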
http://docwiki.cisco.com/wiki/IOS_IP_SLAs_UDP_Jitter_Operation_Technical_Analysis#Positive_and_Negative_Jitter

The jitter caused by traffic fluctuations tends to cancel out, for if a packet is late and produces positive jitter, the next packet will be early in relation to it and produce negative jitter. Figure 3 shows the effect of traffic fluctuation on the arrival times seen at the receiver, with the resulting positive and negative jitters. If a cumulative sum of jitters is maintained, it hovers around zero, because the positive and negative jitters tend to cancel out. If the cumulative sum does not cancel out, that means the packet transmission rate is above the maximum the channel can bear, and traffic shaping is happening at a router on the path, with the resulting queues. Therefore, by sending packets at regular intervals and tracking the interarrival times at the receiver, it is possible to measure changes in path bandwidth without using the packet pair method. (From a paper presented at SBRC 2001, 19° Simpósio Brasileiro de Redes de Computadores, Florianópolis, 21-25 May 2001.)
https://www.researchgate.net/publication/220327598_End-to-end_inverse_multiplexing_for_mobile_hosts

Conversion | DSCP to AF to IP Precedence

Correction: IP Precedence of DSCP value 34 should be 4.

Convert DSCP value to Assured Forwarding (AFxy):
Formula: X = DSCP value / 8 (integer part)
Y = remainder / 2
SOLN:
18 / 8 = 2, remainder 2
2 / 2 = 1
A: AF21

34 / 8 = 4, remainder 2
2 / 2 = 1
A: AF41
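A quick sketch of the formula above (valid AF classes use x in 1..4 and y in 1..3):

```python
# Sketch of the formula above: DSCP -> AFxy,
# where x = DSCP // 8 and y = (DSCP % 8) // 2.
def dscp_to_af(dscp):
    x, rem = divmod(dscp, 8)
    y = rem // 2
    return f"AF{x}{y}"

print(dscp_to_af(18))  # AF21
print(dscp_to_af(34))  # AF41
```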

Convert DSCP value to IP Precedence:
Method 1: convert the DSCP to binary and take the three most significant bits (the remaining low-order bits are the zeros assigned by the IETF for class selectors).

SOLN: (32 16 8 4 2 1)
18 – 0 1 0 0 1 0
Top three bits: 010
A: 2

(32 16 8 4 2 1)
34 – 1 0 0 0 1 0
Top three bits: 100
A: 4
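The binary method above boils down to a right shift by three bits:

```python
# The three most significant bits of the 6-bit DSCP are the IP Precedence,
# i.e. a right shift by 3.
def dscp_to_precedence(dscp):
    return dscp >> 3

print(dscp_to_precedence(18))  # 2 (010010 -> 010)
print(dscp_to_precedence(34))  # 4 (100010 -> 100)
```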

https://www.bytesolutions.com/dscp-tos-cos-presidence-conversion-chart/

What is Microburst?

Microburst effect: latency briefly spikes while the queue fills up, and once the buffer is full, any packets arriving within that window are dropped. Microbursts do not cause sustained high latency, but they do cause packet loss. All of this is invisible to traditional monitoring tools, which average utilization over much longer intervals.

It is not something that is easily avoidable. It is up to the end hosts to behave nicely and send smooth traffic instead of bursts. Try to avoid mismatched link speeds, such as going from Gigabit to 100 Mbit/s, and get a switch with enough buffers to handle microbursts.

You can't avoid or prevent them as such without modifying the sending host's application or network stack so it smooths out the bursts. However, you can manage microbursts by tuning the size of receive buffers/rings to absorb occasional microbursts.
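A sketch of why a microburst causes loss even when average utilization is low (buffer size, drain rate, and the traffic pattern are made-up numbers for illustration):

```python
# Illustrative sketch: a 10-packet FIFO buffer draining 1 packet per tick.
# Average offered load is ~0.5 pkt/tick (well under line rate), but a
# single 25-packet microburst overflows the buffer and drops packets.
def simulate(arrivals_per_tick, buffer_size=10, drain_per_tick=1):
    queue, dropped = 0, 0
    for arrivals in arrivals_per_tick:
        queue += arrivals
        if queue > buffer_size:        # buffer overflow: tail drop
            dropped += queue - buffer_size
            queue = buffer_size
        queue = max(0, queue - drain_per_tick)
    return dropped

# ~0.5 pkt/tick average, with one 25-packet burst in the middle:
traffic = [0, 1] * 25 + [25] + [0, 1] * 24
print(simulate(traffic))  # packets lost to the burst alone
```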

———————————————-

Causes of congestion:
1. As packets arrive at a node, they are stored in an input buffer. If packets arrive too fast, an incoming packet may find no available buffer space.
2. Even very large buffers cannot prevent congestion, because long queues increase delay, which triggers timeouts and retransmissions.
3. Slow processors may lead to congestion.
4. Low-bandwidth lines may also lead to congestion.
———————————————-

QOS, COS & TOS

QoS = Quality of Service. This is a general term for classifying/prioritizing traffic in the network (for example, prioritizing VoIP over FTP traffic).

ToS = Type of Service. This is a byte in the IPv4 header originally used for IP Precedence, i.e. categorizing traffic classes (e.g. precedence 0 = routine traffic, 5 = critical). In its more modern form, the ToS byte carries the DSCP. This is one of the tools available for QoS implementation.

CoS = Class of Service. This is a field in the 802.1Q Ethernet header, also used for categorizing traffic (0-7); however, it works at Layer 2.

https://www.linkedin.com/pulse/voice-video-switched-network-ii-qos-dany-hallak/
https://learningnetwork.cisco.com/thread/38069
https://www.cisco.com/c/en/us/support/docs/voice/voice-quality/23442-tos-cos.html
http://www.ciscopress.com/articles/article.asp?p=101170&seqNum=2
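The ToS byte layout described above can be sketched in Python (assuming the modern DSCP + ECN interpretation of the byte; 0xB8 is the ToS value commonly shown for EF / precedence 5):

```python
# Sketch: the ToS/DiffServ byte packs the DSCP in the top 6 bits and ECN
# in the bottom 2; IP Precedence is simply the top 3 bits of the same byte.
def parse_tos(tos_byte):
    return {
        "dscp": tos_byte >> 2,        # top 6 bits
        "precedence": tos_byte >> 5,  # top 3 bits
        "ecn": tos_byte & 0b11,       # bottom 2 bits
    }

print(parse_tos(0xB8))  # EF traffic: DSCP 46, precedence 5, ECN 0
```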

QOS Overview

Quality Of Service

Recommended Resources:
Qos-Enabled Networks: Tools and Foundations
End-to-End Qos Network Design

Online Resources:
BRKCRS-2501 – Campus QoS Design-Simplified
Enterprise Medianet Quality of Service Design 4.0

What is QoS?
• Quality of service is the ability to provide different priority to different applications, users, or data flows, or to guarantee a certain level of performance to a data flow.

• E.g. Different service levels for different types of “classes” of traffic flows

Why is QoS Needed?
Root Cause: Resource Contention
• Multiple flows sharing the same link
• Same or multiple applications
• Each application has its own requirements

Contention results in Queuing
• Packets may be delayed or dropped
• Effective flow throughput decreases
• Delay or Jitter may exceed thresholds

Best Solution: Avoid Contention
• Over-provision link capacity
• Not always possible (or affordable)

Next Best Solution: QoS
• Network congestion is controlled
• Delay/Loss/Jitter/Throughput are controlled
• Only alleviates temporary congestion

QoS Model
• QoS model defines contention management approach
• Two types
o Integrated Services
o Differentiated Services

What is IntServ? (Smaller Deployment)
• Connection-oriented model
• Every flow has an explicit reservation end-to-end
• Does not scale well because network must maintain too much state

IntServ use case: MPLS TE
• The main place IntServ (RSVP) is used in real-world deployments

What is DiffServ?
• Connectionless model
• Traffic is grouped into classes
• QoS behavior is defined by the traffic's class
• Called Per-Hop Behavior (PHB)

Classification & Marking
• For DiffServ to work properly, traffic must be placed into the correct classes (i.e. "classification")
• Traffic classification normally occurs at the network ingress edge (typically a manual process we must enforce)

Classification Types
• Classification & Marking can happen at multiple places
• Layer 2 Class of Service (CoS)
o 802.1q Ethernet header
• Layer 3 IP Type of Service (ToS)
o IP precedence & Differentiated Services code point (DSCP)
• Layer 4
o TCP & UDP ports
• Upper layers
o Network Based Application Recognition (NBAR)
o Deep Packet Inspection (DPI)

QoS Tools
• Used to implement QoS Models
o Many tools rely on correct QoS classification & marking.
• Different tools for
o Network Edge
o Network Core

Tools Fall into three main categories

Admission Control
o Used to enforce traffic marking or traffic rate
Two Main Types:

o Traffic Policing – used to limit inbound or outbound traffic flows
o Traffic that exceeds the rate can be dropped, marked, or re-marked
o Typically applied on the ingress edge
Example use case
• PE connects to CE w/ GigE port
• Circuit is provisioned at 250 Mbps
• PE applies an inbound policer at the port level
o If traffic exceeds 250 Mbps, it is dropped, marked, or re-marked

o Traffic Shaping – used to normalize outbound traffic flows
o Smooths out traffic bursts
o Prepares traffic for the far end's ingress policing
o Delays and queues exceeding traffic
Example use case
• CE connects to PE w/ GigE port
• Circuit is provisioned at 250 Mbps
• CE applies an outbound shaper at the port level
o If traffic exceeds 250 Mbps, it is queued for later transmission
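The policing-vs-shaping distinction above can be sketched with a toy token bucket (a conceptual model, not any vendor's implementation; rates and packet sizes are in arbitrary units):

```python
import collections

# Conceptual sketch: both a policer and a shaper check packets against a
# token bucket. The policer drops non-conforming packets; the shaper
# queues them and sends them on later ticks.
class TokenBucket:
    def __init__(self, rate, burst):
        self.rate, self.burst, self.tokens = rate, burst, burst

    def tick(self):
        # Replenish tokens once per interval, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + self.rate)

    def conform(self, size):
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False

def police(bucket, packets):
    # Non-conforming packets are simply dropped.
    return [p for p in packets if bucket.conform(p)]

def shape(bucket, packets, queue):
    # Non-conforming packets wait in the queue for a later tick.
    queue.extend(packets)
    sent = []
    while queue and bucket.conform(queue[0]):
        sent.append(queue.popleft())
    return sent

policer = TokenBucket(rate=2, burst=2)
print(police(policer, [1, 1, 1]))      # [1, 1] -- third packet dropped

shaper = TokenBucket(rate=2, burst=2)
q = collections.deque()
print(shape(shaper, [1, 1, 1], q))     # [1, 1] -- third packet queued
shaper.tick()
print(shape(shaper, [], q))            # [1]    -- sent on the next tick
```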

Congestion Management Techniques (i.e. Queueing)
• Manages congestion in the outbound direction

Queueing Types:
• First in First Out (FIFO)
• Weighted Fair Queueing (WFQ)
• Priority Queueing (PQ) / Low Latency Queuing (LLQ)

Example use case
• CE to PE link is experiencing packet loss
• Apply LLQ to give VoIP low delay
• Apply WFQ to guarantee 50% BW for SQL
• All other traffic gets best effort FIFO
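The example above can be sketched as a toy scheduler in the spirit of LLQ + WFQ (queue names, weights, and the slots-per-round count are illustrative assumptions, not how IOS implements it):

```python
from collections import deque

# Toy scheduler: the voice queue is always serviced first (strict
# priority, as in LLQ), then the remaining queues share what's left of
# the round by weight (a crude stand-in for WFQ/FIFO best effort).
def schedule(voice, sql, best_effort, weights=(2, 1), slots=4):
    """Pick which packets to send in one scheduling round of `slots` sends."""
    sent = []
    while voice and len(sent) < slots:
        sent.append(voice.popleft())          # strict priority: low delay
    for queue, weight in zip((sql, best_effort), weights):
        for _ in range(weight):
            if queue and len(sent) < slots:
                sent.append(queue.popleft())  # weighted share of the rest
    return sent

v = deque(["v1"]); s = deque(["s1", "s2", "s3"]); b = deque(["b1", "b2"])
print(schedule(v, s, b))  # ['v1', 's1', 's2', 'b1']
```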

Congestion Avoidance Techniques (try to prevent congestion before it occurs, i.e. a packet drop strategy)
Drop Strategy Types:
• Weighted Random Early Detection (WRED)
• Tail drop

Example use case
• CE to PE link is experiencing packet loss
• Apply WRED to selectively drop low priority TCP flows
• Senders go into TCP slow start
• Congestion management is offloaded to the end host

Question and Answer:
1. What is the key difference between policing and shaping? Policing drops excess traffic, while shaping queues it.