Quality of service
Quality of service (QoS) is a network's ability to support varying levels of network performance that can then be mapped to the needs of the applications supported by that network. The performance parameters we seek to control include delay across the network, variation in that delay, and the total bandwidth available to a connection or information flow.
The term QoS is most commonly used in reference to packet networks. Circuit networks have very little problem with bandwidth, delay, and delay variation. In fact, assuming a circuit has been selected with the appropriate bandwidth to begin with, it is the very nature of a circuit network to maintain a consistent bandwidth, have little variation in delay, and the least possible total delay. They are essentially “wire speed.” One might say that the circuit network, in terms of QoS, is the gold standard against which packet networks can be measured.
It is reasonable to ask why we want to work to put QoS capabilities in our packet networks if we already have a gold standard in circuit networks. The reason is fairly straightforward: sometimes we do not want, or are unwilling to pay for, the gold standard. Circuit networks will always provide the best QoS, but packet networks have the potential to provide varying levels of QoS to match the needs of different applications.
Quality of Service versus Class of Service
Class of service (CoS) defines service classes for traffic. The U.S. Postal Service implements a class of service capability: you can purchase overnight priority, second-day, first-class, or bulk-rate delivery. A network administrator might place email and file transfer in one class of service and video in another. Each CoS is assigned its own required service characteristics. In this example, the video traffic might be placed in a queue that receives priority treatment compared with the queue for email traffic.
In communications networks, there are many mechanisms by which a given packet or frame can be marked with its CoS membership. These include DiffServ, IEEE 802.1p, and the ATM VPI/VCI (because each ATM virtual circuit can carry a particular class of traffic). In Frame Relay, the Discard Eligible (DE) bit can be thought of as providing a CoS capability.
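On an IP network, DiffServ marking amounts to writing a code point into the upper six bits of the former IPv4 TOS byte. As an illustrative sketch (assuming a Linux-style sockets API; the `dscp_to_tos` helper is our own name, not a standard call), a host could mark its own traffic like this:

```python
import socket

# DSCP occupies the upper six bits of the old IPv4 TOS byte, so the value
# written via IP_TOS is the DSCP code point shifted left by two bits.
def dscp_to_tos(dscp: int) -> int:
    return dscp << 2

DSCP_EF = 46  # Expedited Forwarding, the code point commonly used for voice

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp_to_tos(DSCP_EF))
# Every datagram sent on this socket now carries DSCP 46 in its IP header.
tos = sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)  # reads back 0xB8 on Linux
sock.close()
```

Marking at the host is only a request, of course; routers along the path are free to re-mark or ignore the field according to their own policy.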
QoS, on the other hand, generally refers to some sort of end-to-end guarantee provided to a given class of network traffic. Honoring a statement such as “I will give you 3 Mbps of bandwidth with a delay of less than 25 ms and no more than 5 ms of variation in that delay for this traffic class” requires all devices along the path to communicate their ability to meet that guarantee. The only way to accomplish this is through some sort of signaling protocol, such as RSVP. Once the guarantees are in place, they are implemented via the queuing technologies on each network device. At some point, the network will not be able to meet all requests. To protect the requests already granted, some sort of admission control must be performed.
It is safe to treat QoS as a convenient shorthand for “treat different traffic differently,” but implementing this shorthand on a network requires either per-device CoS configuration or some sort of end-to-end signaled resource reservation, or both. QoS and CoS are not mutually exclusive technologies.
Four QoS Concerns
Quality of service generally works on four well-defined network factors that affect the end-to-end quality of transmitted data. If the path a packet takes from point A to point B over the network can be thought of as a series of roads between two cities, then QoS tries to manage four elements of that trip.
- Bandwidth: How much traffic can the road handle? How many lanes does the road have? What is the speed limit?
- Delay: What is the total length of the path? How long does it take to drive from one city to the other? No matter how close the cities, or how high the speed limit, there is always going to be some minimum amount of delay. The delay is going to be impacted by speed limits, how long it takes to get on the highway, time spent at stop lights and stop signs, time spent in traffic backups on the road, and the actual length of the highways themselves. Luckily, in a packet network, there is no need to stop off at rest areas to pick up coffee or visit the restroom!
- Jitter: If two cars set out to make the trip, and the first departs 30 minutes before the second, will they arrive 30 minutes apart as well? If they do not, then some jitter has occurred. Some people are content if the trip between city A and city B is relatively consistent no matter the delay, but find it frustrating if their commute varies by an hour or two each day. Some applications have similar concerns. If packets arrive spaced as they were sent, they function well; if the packets arrive spaced differently than they were sent, the application fails partially or completely.
- Packet loss: What is the percentage of lost packets on the road? Most networks have some packet loss, but how much loss is acceptable can differ from application to application.
Applications and users also expect the network to have appropriate levels of reliability (e.g., survivability, mean time to repair) and security (e.g., confidentiality, integrity, and availability of information). These latter two issues are dealt with under the larger umbrella of information security.
Addressing QoS Issues
The quality of service (QoS) issues of bandwidth, delay, jitter, and packet loss are addressed in multiple ways.
- Network Design: Long before a single packet traverses any network, the network must be able to support the desired QoS parameters required by the applications to be supported. Such a network design must include the topology of the network, the bandwidth of the links, and the configuration of the protocols to be used in the network. Failure scenarios should also be considered to ensure the network continues to provide the required QoS even when a network outage occurs. Network design is critical to all four QoS issues.
- Queue Management: As traffic arrives at any particular switching node within the network, it can be of various types requiring varying levels of QoS. When multiple such packets are scheduled to exit the same interface, some method must be established to determine the packet transmit order. This decision will greatly impact end-to-end delay and jitter. If congestion begins to manifest, some criteria must be used to decide what to discard. This will greatly impact packet loss. Sophisticated queue management strategies provide this level of control. Each network device that is supporting multiple QoS levels (e.g., router or switch) must implement this form of queue management.
- QoS Signaling: When a network application needs end-to-end assurance, there must be some way to inform each device along the path of the required QoS. A signaling protocol is used between routers and switches to transmit bandwidth, delay, jitter, and packet loss tolerance requirements between devices, as well as to reserve appropriate resources.
- CoS Signaling: No network is designed to provide individualized QoS to each and every packet. Instead, the network is designed to provide a small set of QoS capabilities, with a traffic class defined for each. It is then important to tag each packet entering the network with the appropriate service class designation so the network will know how to treat that particular packet with respect to all four QoS issues.
- Admission control: No matter how well designed the network, if the network receives more traffic than it was designed to carry as a whole, or for a specific application, QoS could be compromised for the network as a whole, or for the application to which the commitment was made. To prevent this, the edges of the network must police admission and control this traffic flow. Knowing that the network is going to police itself, it then makes sense for end-users to shape the flow of their traffic to conform to the commitment from the network.
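The edge policing and traffic shaping described in the last bullet are classically built from a token bucket: tokens accumulate at the contracted rate up to a burst allowance, and a packet is admitted only if enough tokens are on hand. A minimal sketch (class and parameter names are illustrative, not from any particular vendor's implementation):

```python
# A minimal token-bucket policer, the classic building block for edge
# admission control and policing. Names and parameters are illustrative.
class TokenBucket:
    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8.0   # refill rate in bytes per second
        self.burst = burst_bytes     # bucket depth (burst allowance)
        self.tokens = burst_bytes    # start with a full bucket
        self.last = 0.0

    def conforms(self, now: float, packet_bytes: int) -> bool:
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes  # in profile: forward the packet
            return True
        return False                     # out of profile: drop or re-mark it

# Police a 1 Mbps contract with a 1500-byte burst allowance.
tb = TokenBucket(rate_bps=1_000_000, burst_bytes=1500)
print(tb.conforms(0.0, 1500))    # True: the burst allowance covers it
print(tb.conforms(0.001, 1500))  # False: only ~125 bytes refilled in 1 ms
```

The same mechanism serves both sides of the contract: the network uses it to police, and a smart customer uses it to shape outgoing traffic so that nothing gets dropped at the edge.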
A QoS Analogy
No analogy is perfect, but perhaps a simple one might help to relate the QoS strategies used in networks to something a bit more everyday: your home and your quality of life.
- Network design: The starting place of achieving any quality of life (QoL) in your home is to pick or build the right one. Homes come in a variety of sizes and shapes. To achieve your quality of life, you first must clearly define what that is, and then pick or build a house that can serve the need. If you are a family with two young children and you enjoy entertaining guests, a one-bedroom house with a single bath and small kitchen will not do at all. You need at least four bedrooms (one for guests), a couple of bathrooms, space to entertain, and so forth. Without the right house, your QoL does not have a prayer of being realized.
- Admission control: No matter how well designed the home, if you let too many people in, your QoL will suffer. If you designed for a family of four and two house-guests, and your son brings home the football team for the weekend, life is not going to be of a high quality for that weekend. Strict admission control procedures are needed, and they must be communicated and policed. If the football team wants to experience your QoL and they are smart, they will adjust their flow to two players per weekend and, over time, they will all get a taste of good QoL without being turned away at the door.
- CoS signaling: In order for the different members of the household to receive an appropriate QoL for their station in life, it is important to be able to tell the difference between them. Luckily, in our home analogy, this is the easiest one to deal with. The children are visibly different from the parents, and the parents can tell the difference between the children and Great Aunt Martha who is visiting for the weekend. If this were a major hotel instead of a home, however, some form of tagging would be needed. Hotels often use room keys to distinguish their guests.
- Queue management: Even with the right house, you need to effectively manage queues. A house seldom has enough resources for each person to have 100 percent of the resources at their disposal 100 percent of the time. Putting four bathrooms into the house so that four people can each have one is excessive and seldom done. Instead we have to manage the queues at the two bathrooms we do have. We need to specify that the bathroom upstairs is for Mom and Dad and the one downstairs is for the kids. We need to specify that the kids have to share theirs with guests and Mom and Dad, but Mom and Dad do not have to share theirs with the kids. That way Mom and Dad get the QoL they want.
- QoS signaling: Clearly, some internal signaling will be needed to indicate the need to reserve or shift resources. When Great Aunt Martha asks to stop by for a weekend and brings two of her friends, the children are going to get a QoL signal from Mom and Dad: clean up the rooms and relocate to the couch! We are shifting resources for new traffic!
Business Incentives for QoS
Experience with modern networks supports the notion that bandwidth alone is not enough to provide quality of service on an IP network: increase the bandwidth, and users will discover the capacity and make use of it. Consider Parkinson's Law, which says the demand for a resource always scales to its availability. While increasing bandwidth can simulate QoS in the short term, it cannot provide a long-term solution. Today, corporations and carriers are realizing that designing and engineering QoS into their networks makes good business sense.
QoS primarily gives network administrators the ability to control the traffic on their network. This leads to other benefits. QoS allows the prioritization of traffic on a network. Mission-critical applications will not suffer performance degradation in the presence of other more delay-tolerant applications, such as email or employee Web browsing.
QoS enables a network to scale better as well. Instead of building out a new network infrastructure when a link becomes congested or experiences high utilization rates, QoS allows the important traffic to traverse the link first and ensures that network devices, such as routers and switches, have the resources to forward the traffic, while discarding or delaying less-sensitive packets. QoS can also allow traffic to be engineered to less-congested paths, making maximum use of the current network investment.
Applying QoS to lower-speed WAN links is an outcome of the ability to scale. Since WAN links are more expensive and generally lower speed than the LAN they are attached to, they experience congestion more often, and upgrading the access rate is an expensive option. QoS techniques allow important company information to travel over the WAN link with a higher priority than other company traffic. This, in turn, allows the existing link to meet the company needs without having to upgrade.
Finally, both service providers and corporate customers realize that QoS allows the provider to differentiate services to customers. Traditionally, once a packet from any company was inserted into the backbone, it encountered the same delay, jitter, and discard rate as any other packet. By employing QoS techniques, an ISP can offer premium “business class” Internet service for companies willing to pay more for performance. This is a real opportunity for providers to differentiate themselves in a commodity market.
The Justification for QoS
The visual depicts a business scenario that justifies the investment in QoS as applied to an ISP network. It illustrates that QoS is a more economical solution to network design than is simply throwing bandwidth at the problem. Essentially, QoS allows the ISP to sell the same bandwidth many times, increasing its profits while minimizing its investments in additional network resources. By employing QoS, an ISP can allocate 50 percent of its network bandwidth to traditional best-effort traffic. By selling that same 50 percent of bandwidth 15 times (that is, oversubscribing the bandwidth by 1500 percent), the ISP can charge each customer only 50 percent of what it normally would and still recoup its investment. This yields 7.5 times the revenue that would normally be recovered for the same bandwidth.
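The arithmetic behind that claim is worth making explicit. Selling the same capacity 15 times at half the normal price multiplies revenue by 15 × 0.5 = 7.5:

```python
# The oversubscription arithmetic from the text: sell the same 50 percent
# of capacity 15 times over, at half the usual price per customer.
normal_price = 1.0   # revenue from selling that bandwidth once at full price
resell_count = 15    # the same capacity sold 15 times (1500% oversubscription)
discount = 0.5       # each customer pays 50 percent of the normal price

revenue_multiple = resell_count * discount * normal_price
print(revenue_multiple)  # 7.5 times the revenue of selling it once
```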
The real gain, however, is realized when portions of the same network infrastructure can be sold at higher rates while the service is still guaranteed thanks to QoS. If voice traffic requires the average delay found at 10 percent utilization on the network, then either an entire, lightly loaded network can be provisioned for voice traffic, or 10 percent of the existing network can be reserved for voice. An ISP using the second strategy can charge a premium for traffic with a guaranteed delay, while transporting it on the same infrastructure as the best-effort traffic.
Contrast this with a network that needs to support the same types of traffic with no QoS. It would have to be built to satisfy the most demanding of traffic types: voice. This would mean that an ISP would have to build and pay for 100 percent of a network infrastructure and only be able to charge for 10 percent of it—or charge 10 times more than its competitors. Neither option is practical in a competitive industry.
Justification and Delay
The visual provides a delay-based perspective on why implementing QoS in a packet network makes sense. Clearly, building a single network for all communication needs is less expensive than building multiple networks, one for each need. This is simple math. And building this network as a packet network means dynamic use of the network bandwidth, something a circuit network has difficulty achieving. So, the modern converged network should be a packet network.
Without QoS, however, there is no mechanism for differentiating traffic and ensuring preferential treatment. Our old nemesis, the delay curve, becomes an issue. As the network increases its load, all traffic is treated equally, and that means all traffic experiences delay. Without QoS, the only way to ensure that voice gets the delay characteristic it needs is to ensure that all traffic gets the delay characteristic of voice. If voice requires a delay characteristic that the delay curve tells us is only achieved if network load stays below 20 percent, then the entire network must be built with five times the capacity it needs.
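The shape of that delay curve can be illustrated with the simplest queueing model, M/M/1 (an assumption for illustration; the text does not specify a model): average delay is 1/(μ − λ), which climbs slowly at light load and explodes as load approaches capacity. This is why a no-QoS network that must deliver voice-grade delay everywhere ends up enormously overbuilt:

```python
# Illustrative delay curve using the M/M/1 queueing model (an assumption,
# not from the text): average delay = 1 / (mu - lambda), in units of the
# mean service time, rising sharply as load approaches 100 percent.
def mm1_delay(load: float, service_rate: float = 1.0) -> float:
    arrival_rate = load * service_rate
    return 1.0 / (service_rate - arrival_rate)

for load in (0.2, 0.5, 0.8, 0.95):
    print(f"load {load:.0%}: delay {mm1_delay(load):.1f}x service time")
```

At 20 percent load the delay is barely above the service time itself; at 95 percent it is twenty times larger. Holding the whole network at the 20 percent point is exactly the "five times the capacity it needs" overprovisioning the text describes.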
If the network is QoS-capable, however, traffic can be prioritized. Thus we are effectively pushing the voice traffic down into the lower part of the delay curve, priority data (or the next class level) to the middle part of the curve, and best effort data to the upper end of the curve. When there is no higher priority traffic, the lower priority traffic sees enhanced service, but the higher priority traffic never sees the lower grade service. As a consequence, we can more closely size the network to its bandwidth requirements. This, in turn, translates to a cost savings.
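The prioritization described above can be sketched as a strict-priority scheduler: the lowest-numbered class always drains first, so voice never waits behind best-effort traffic, while best-effort still flows whenever the higher classes are idle. Class names and numbers here are illustrative:

```python
import heapq

# A sketch of strict-priority queueing: the lower class number always
# drains first, so voice never waits behind best-effort traffic.
VOICE, PRIORITY_DATA, BEST_EFFORT = 0, 1, 2

class PriorityScheduler:
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker that preserves FIFO order within a class

    def enqueue(self, cls: int, packet: str) -> None:
        heapq.heappush(self._heap, (cls, self._seq, packet))
        self._seq += 1

    def dequeue(self) -> str:
        return heapq.heappop(self._heap)[2]

sched = PriorityScheduler()
sched.enqueue(BEST_EFFORT, "web-1")
sched.enqueue(VOICE, "voice-1")
sched.enqueue(PRIORITY_DATA, "erp-1")
sched.enqueue(VOICE, "voice-2")
print([sched.dequeue() for _ in range(4)])
# ['voice-1', 'voice-2', 'erp-1', 'web-1']
```

Real routers temper strict priority with rate limits on the top class so that a flood of voice cannot starve the lower classes entirely, but the ordering principle is the same.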