ATM cell

From Hill2dot0

ATM networking, the switching and multiplexing infrastructure for delivering B-ISDN services, is based on small, fixed-length packets of information known as cells. The term “packet” is not particularly useful in this context since “packet-oriented” is used primarily in the context of data networking in general, and the familiar OSI RM in particular. In the OSI RM, a packet is a Layer 3 (i.e., Network Layer) PDU. ATM layers do not correspond to OSI layers, and cell contents are not strictly data in the sense of computer file content; they can also be digital voice samples, digitized video, and almost anything else carried on a network.

Why are cells used in the first place?

The answer comes from realizing that the end-to-end network delay a user experiences between clicking a mouse button and something happening is the result of many interrelated factors. The use of cells minimizes all three factors listed below and makes ATM useful in several network situations.

Network nodal processing delay

“Network node” is the general term for an intermediate system that relays bits from one user to another. Usually there is a series of network nodes, be they voice switches or Internet routers, between source and destination. Each must shuttle traffic from input port to proper output port. This does not happen instantaneously, but is the product of load on the network node at that point in time, efficiency of the port decision process, the depth of the queues in the node, and so on.

Packetizing delay

The term “packetizing delay” is unfortunate since there could be frames (i.e., OSI Layer 2 PDUs), ATM cells, or something else involved. The term “accumulation delay” is sometimes used instead. The point is that no information leaves a source or is sent into a network until some critical number of bits is accumulated. The tradeoff is between minimizing overhead processing and maximizing bandwidth utilization. Typically, packet size varies to allow implementors to make their own choices in this regard.

Serial delay

The term “serial delay” means that in almost all networks, bit transmission is serial, one bit at a time (as opposed to parallel, as with a PC’s printer). But this delay, coupled with highly variable length data units in most protocols, makes network delays variable and unpredictable.

The larger the data unit, the larger the packetizing and serial delays become in proportion to propagation delays and nodal processing delays. If smaller data units are used along with larger ones, the delays are not generally smaller, just more variable. In other words, variable-length data units result in variable delays, often called jitter. If only smaller units are used, the variability is eliminated and the packetizing and the serial delays are reduced. This is the best of all worlds: the cell environment.

Cell Structure and Size

Cell Structure

The visual shows the format of an ATM cell. The five-octet header contains connection information necessary for switching cells. This information is consistent across all cell types, regardless of the contents of the Payload field. Every network node examines the header and will change its contents as necessary.

The 48-octet payload contains information specific to a higher layer service. Usually, ATM network nodes do not examine or modify payload contents—payload is interpreted only on an end-to-end basis. The format of the information carried in the payload varies by type of service.

The cell size of 53 octets seems odd (pun intended; it’s also prime!) and, in some quarters, remains contentious. In the initial B-ISDN description in the CCITT’s 1988 Blue Book, the cell size had not been finalized. Early descriptions stated that the header would be between 3 and 8 octets, and the payload between 32 and 120 octets. By 1990, the values of 5 and 48 octets, respectively, had been adopted.

If ATM is to provide the switching infrastructure for next-generation telecommunications networks, it will have to strike a balance between the requirements of the different types of services that must be supported. Furthermore, while today’s digital carriers essentially switch one octet at a time, ATM networks will operate on cell granularity. What, then, is the optimal cell size for both delay-sensitive (e.g., voice) and protocol-intensive (e.g., data) traffic? A five-octet header was adopted because it was sufficient for the necessary address and control information, and it was a compromise between the overhead of a small header (three octets) and a large header (eight octets).

The cell’s payload size could directly affect the services that are provided. The voice quality and other delay-sensitive services are judged by the network delay, while data and most other delay-insensitive services are judged by throughput and line efficiency. From a data perspective, a larger payload results in better network efficiency, fewer cells per message, and higher throughput. From a voice perspective, small cells make it easier to meet low end-to-end delay requirements, build echo cancellers, and minimize the necessary number of buffers.

ATM Cell Header (UNI)

The visual shows the format of an ATM cell header in cells sent across the UNI. Formatting the header is one of the responsibilities of the ATM Layer. The fields of the header are described below.

  • Generic flow control (GFC): Used for network-to-user flow control procedures. This field is 4 bits long in a UNI cell and is absent in an NNI cell.
  • Virtual path identifier (VPI): Identifies the VP for cell routing. The VPI is 8 bits long in a UNI cell and 12 bits long in an NNI cell (the VPI field expands to fill the GFC bit positions in an NNI cell).
  • Virtual channel identifier (VCI): Identifies the VC for cell routing. The VCI is 16 bits long.
  • Payload type identifier (PTI): Identifies the type of cell, congestion status, and other information. The PTI is 3 bits long.
  • Cell loss priority (CLP): Identifies the cell as high or low priority. The CLP is a single bit.
  • Header error control (HEC): Used for bit-error detection, single-bit error correction (optional), and cell delineation. The HEC is 8 bits long.

On the public NNI and private NNI (PNNI), the cell header format is slightly different. NNIs have no GFC field, and the VPI/VCI fields have slightly different sizes and formats.
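The UNI bit layout described above can be sketched in code. The following packs the six header fields into five octets; the HEC generator polynomial (x^8 + x^2 + x + 1) and the 0x55 coset value are taken from ITU-T Rec. I.432, and the function names here are illustrative, not part of any standard API:

```python
def hec(header4: bytes) -> int:
    """CRC-8 over the first four header octets using x^8 + x^2 + x + 1,
    XORed with 0x55, as the ATM HEC is defined in ITU-T Rec. I.432."""
    crc = 0
    for byte in header4:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc ^ 0x55

def pack_uni_header(gfc: int, vpi: int, vci: int, pti: int, clp: int) -> bytes:
    """Pack a five-octet UNI cell header: GFC (4 bits), VPI (8),
    VCI (16), PTI (3), CLP (1), HEC (8)."""
    b0 = (gfc << 4) | (vpi >> 4)            # GFC, upper half of VPI
    b1 = ((vpi & 0x0F) << 4) | (vci >> 12)  # lower half of VPI, top of VCI
    b2 = (vci >> 4) & 0xFF                  # middle octet of VCI
    b3 = ((vci & 0x0F) << 4) | (pti << 1) | clp  # bottom of VCI, PTI, CLP
    first_four = bytes([b0, b1, b2, b3])
    return first_four + bytes([hec(first_four)])
```

For an NNI header, the same sketch would simply widen the VPI to 12 bits and drop the GFC field.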

Cell Size: Latency vs. Overhead

The visual shows the latency and the protocol overhead for a variety of payload sizes. The primary driver of the payload size is latency for voice traffic. A single PCM sample is collected every 125 µs (8,000 samples per second). The number of octets, or PCM samples, in the payload directly affects the amount of delay (latency) required to assemble a cell. Several network administrations (notably in Europe) wanted a 32-octet payload (4 ms latency) to avoid having to deploy a large number of echo cancellers. Other administrations (notably the U.S.) wanted a 64-octet payload (8 ms latency), partly because their networks could accommodate the longer latency and partly because 64 octets is the size of the smallest allowable Ethernet frame. A 48-octet payload (6 ms latency) was chosen as the compromise, which ended up pleasing no one.
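The payload-size latencies quoted above follow directly from the 125 µs PCM sampling interval; a quick sketch:

```python
SAMPLE_INTERVAL_US = 125  # one 8-bit PCM sample every 125 microseconds (8 kHz)

def packetization_delay_ms(payload_octets: int) -> float:
    """Time to fill one cell payload with samples from a single voice channel."""
    return payload_octets * SAMPLE_INTERVAL_US / 1000

for size in (32, 48, 64):
    print(size, packetization_delay_ms(size))  # 4.0, 6.0, and 8.0 ms
```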

The end-to-end delay is actually a very important issue in the design of a telephone network. If the end-to-end, round-trip delay in the network is greater than 600 ms, human conversation becomes difficult. Copper and optical fiber media propagate signals at a rate of about 8 µs/mile (5 µs/km). A network with an end-to-end distance of 2800 miles (4500 km) will induce a round-trip delay of 45 ms from propagation delay alone. In reality, the correct measure involves not only echo, but also other factors, including attenuation, use of analog versus digital facilities, use of two-wire or four-wire facilities, and the number of echo suppressors and/or cancellers. See ITU-T Rec. G.131 for more information.
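The 45 ms round-trip figure can be reproduced from the propagation rate given above (the function name is illustrative):

```python
def round_trip_delay_ms(distance_km: float, us_per_km: float = 5.0) -> float:
    """Round-trip propagation delay over copper or fiber at ~5 microseconds/km."""
    return 2 * distance_km * us_per_km / 1000

print(round_trip_delay_ms(4500))  # 45.0 ms, well under the 600 ms conversational limit
```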

Although a five-octet header represents an overhead of more than 9 percent in each 53-octet cell, some consider this of minimal consequence in a high-speed network.

The “Cell Tax”

Critics of ATM, and sometimes even supporters, often refer to the ATM “cell tax” as one reason ATM has been slow to become a popular transport method for IP packets. Just what is the dreaded “cell tax”?

The visual shows a 1500 byte (octet) IP packet. This is a fairly typical IP MTU size because many IP hosts are on Ethernet LANs, and 1500 bytes is the maximum IP packet size that will fit in a single Ethernet frame without fragmentation. (Fragmentation is processor overhead that is best avoided.) When the IP packet is sent over the Ethernet LAN, all that is added is a 14-byte Ethernet header, 12 bytes of which are the source and destination MAC addresses, and a 4-byte trailer. This total of 18 additional bytes per 1500 data bytes is a modest 1.2 percent overhead.

When the IP packet is sent using PPP over a link between routers, the overhead is even less. Now the IP packet requires only a 4-byte header and 2-byte trailer. This total of 6 bytes per 1500 data bytes is an even more modest 0.4 percent overhead.

Look what happens when an IP packet is sent over an ATM network. First, a special RFC 1577 header (or RFC 1483 header of the same size) is placed in front of the IP packet to form a frame. Then the AAL must produce 32 cells (1508/48 ≈ 31.4, rounded up to whole cells), each with a 5-byte ATM header. This is 5 x 32 = 160 bytes of additional overhead. There are even 8 more bytes for the AAL5 trailer, but this can go in the last ATM cell payload. This total of 168 bytes is the “cell tax” of 11.2 percent on each and every IP packet sent.
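The arithmetic behind the three overhead figures can be checked with a short sketch (the function names are illustrative; the 8-byte encapsulation header and 8-byte AAL5 trailer follow the description above):

```python
import math

def atm_cells(ip_bytes: int, encap_header: int = 8, aal5_trailer: int = 8) -> int:
    """Cells needed for one IP packet: the packet plus the RFC 1483/1577-style
    header and the AAL5 trailer, rounded up to whole 48-octet payloads."""
    return math.ceil((ip_bytes + encap_header + aal5_trailer) / 48)

def overhead_pct(ip_bytes: int, extra_bytes: int) -> float:
    """Overhead expressed as a percentage of the IP packet size."""
    return 100 * extra_bytes / ip_bytes

cells = atm_cells(1500)              # 32 cells
cell_tax = cells * 5 + 8             # 5-byte header per cell + AAL5 trailer
print(overhead_pct(1500, 18))        # Ethernet: 1.2 percent
print(overhead_pct(1500, 6))         # PPP: 0.4 percent
print(overhead_pct(1500, cell_tax))  # ATM: 11.2 percent
```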

Stated another way: would you like to pay a withholding tax of 0.4 percent, 1.2 percent, or 11.2 percent on your earnings this year?

“Packetizing” and Serial Delay

The visual shows how the use of ATM cells (i.e., small, fixed-length PDUs) minimizes the effects of “packetizing” and serial delays in a network. The packetizing delay could involve frames or other PDUs as well.

Consider a switch or multiplexer used as a network node. The input port and output ports might handle variable-length PDUs, such as packets (OSI Layer 3 PDUs) or frames (OSI Layer 2 PDUs). The packets or frames have an established minimum and maximum size. The packetizing process in the visual gathers information from a delay-sensitive application such as voice. The 8-bit voice samples must accumulate until a “frame’s worth” of samples have been stored in a buffer. The larger the frame, the longer the wait. Obviously, buffering adds delay to the end-to-end network delay experienced by the application and the user. However, smaller frames would add overhead and traffic loads to the network nodes and might be the worst possible thing to do for data.

Beyond the packetizing wait, there is serial delay. In this case, the frame full of delay-sensitive information might wait behind a large, delay-tolerant data frame. Since transmission is serial, this wait happens even if only the first bit of the data frame has been sent. There is no standard way to pull back a frame once serial transmission has started. This means that giving priority to the delay-sensitive traffic will not help much. The variable delays introduced by packetizing and serial delays occur in all technologies based on variable length PDUs, from X.25 to frame relay to IP routers on the Internet. This is the most serious problem facing those who use any of these networks for mixed delay-sensitive (e.g., voice and video) and delay-insensitive (e.g., most data) traffic.

The lower part of the visual shows an input port and output port using cells. The smaller payload unit has a smaller packetizing delay, and unlike frames or packets, the delay is not variable because the payload length is fixed. The same is true for the serial delay. Even if a voice cell with priority just misses a slot given to a data cell, the cell is small enough to keep jitter to a minimum. Presumably, the voice cell will be the next one sent.
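As a rough sketch of the serial-delay argument, compare the worst-case wait behind a unit that has just begun transmission on a DS-1 (1.544 Mbps) link; the link rate and frame sizes here are illustrative:

```python
def serial_delay_ms(unit_bytes: int, link_bps: float) -> float:
    """Time for one PDU to clear a serial link at the given line rate."""
    return unit_bytes * 8 / link_bps * 1000

# Worst-case wait for a priority voice PDU that just missed its slot:
print(serial_delay_ms(1518, 1_544_000))  # behind a maximum Ethernet frame: ~7.9 ms
print(serial_delay_ms(53, 1_544_000))    # behind a single ATM cell: ~0.27 ms
```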

Although not emphasized in the visual, there is a real benefit to switch and multiplexer design when using ATM. There is no longer any question about how big buffer areas should be—they are all organized into some number of cells. Moving cells around within the switch or multiplexer is easier also, as is coding, memory management, housekeeping, and so on. All these are benefits of using ATM cells as the basic network information unit. It is easier to build a house out of uniform bricks than trying to construct a home with widely varying lengths of lumber and siding.

Nodal Processing Delays

All networks consist of a series of network nodes connecting users to the network and each other. A local exchange switch for voice, frame relay switch for data, and an IP router for the Internet are all examples of network nodes. Before ATM, network nodes tended to be specialized devices built for one type of network: voice, video, data, and so forth. But whatever they are called or how they are used, all network nodes are subject to variable nodal processing delays, depending on the amount of traffic and the speed and capacity of the network node. In other words, the network node’s delay characteristics depend on its efficiency.

A good measure of a network node’s efficiency is how much work needs to be done to route an information unit from an input port to an output port. It does not matter if the unit is a packet, frame, cell, or voice sample. The more efficient the node, the lower and more stable the nodal processing delay.

Suppose a network node was designed and built to do all its input-to-output routing entirely in hardware. That is, all processing would be done at the silicon chipset level and no (or very little) general-purpose software would need to be run to control the node. Typically, this port-to-port connectivity is handled through a fabric at the hardware level, although the term “fabric” is sometimes used generically. In any case, the specialized hardware requires minimal configuration and overhead to set up and run the network node. Usually, such hardware-oriented network nodes are known as switches.

Software is available to perform these functions as well. In this case, the data unit travels from input port to network node control software running as a “node operating system.” This software module makes the output port routing decision based on topology information, priorities, traffic loads, and so on. These complex routing rules give the software-oriented network node more flexibility when compared with the more rigid and tightly programmed chipsets in hardware-oriented network nodes. Usually, such software-oriented network nodes are known as routers.

In the real world, we cannot do things entirely in hardware or software. All network nodes must use a combination of hardware and software to accomplish their tasks. The question is, How much emphasis is on hardware or software functions? How are the hardware and software mixed to create the network node?

The question today regarding ATM network nodes and the type of network nodes found on the Internet is: Is the network node a switch or a router?

A Cell Interface

An advantage of using cells for transmission is predictability. For a fixed transmission speed, the total number of cells per second can be determined. Once the total number of cells is determined, requirements for service can be calculated and assigned to users.

The visual provides the cells per second for ATM access lines using direct mapping (no additional Physical Layer overhead) at the various rates. These figures are calculated as follows.

  • DS-1 access line:
1.544 Mbps – 8 kbps overhead = 1.536 Mbps
1.536 Mbps / 8 bits per octet / 53 octets per cell = 3,622 cells per second
  • DS-3 access line:
44.736 Mbps – 527 kbps overhead = 44.209 Mbps
44.209 Mbps / 8 bits per octet / 53 octets per cell = 104,266 cells per second
  • OC-3 access line:
155.52 Mbps – 5.76 Mbps overhead = 149.76 Mbps
149.76 Mbps / 8 bits per octet / 53 octets per cell = 353,207 cells per second
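These figures can be reproduced by stripping the Physical Layer overhead from the line rate and dividing by the 424 bits (53 octets) in a cell:

```python
def cells_per_second(line_rate_bps: int, phy_overhead_bps: int) -> int:
    """Usable cell rate for direct mapping: payload bit rate over 424 bits/cell."""
    return int((line_rate_bps - phy_overhead_bps) / (53 * 8))

print(cells_per_second(1_544_000, 8_000))        # DS-1: 3,622 cells per second
print(cells_per_second(44_736_000, 527_000))     # DS-3: 104,266 cells per second
print(cells_per_second(155_520_000, 5_760_000))  # OC-3: 353,207 cells per second
```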

The bottom of the visual depicts what the cells would look like if a data camera were used to shoot a length of wire. The cells on the wire would be packed back to back in a continual progression. The ownership of the cells would be determined by the end devices. On an access link, some cells carry data, some carry voice, and some are idle. The cell rate is constant regardless of the information rate.

Total Number of Cells

Because of the fixed size of cells and the fixed data rates of network components, it is possible to calculate the total number of cells in a network. The point is not the total number of cells, but the fact that the number is fixed. Given a known value of resources (cells) for transport, a service provider can now offer a service guarantee. This guarantee can be backed up by traffic engineering and enforced by traffic management.

The concept is very similar to that used on the voice network today. Given a finite number of resources (voice paths), you can calculate the probability of not being able to place a call through the network. Those users who cannot place a call receive busy tone.
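The voice-network analogy can be made concrete: given a finite number of trunks and a known offered load, the classic Erlang B recursion (a standard traffic-engineering formula, not something specific to this text) gives the probability that a new call receives busy tone:

```python
def erlang_b(offered_erlangs: float, trunks: int) -> float:
    """Blocking probability for traffic offered to a trunk group,
    computed with the numerically stable Erlang B recursion."""
    b = 1.0
    for n in range(1, trunks + 1):
        b = offered_erlangs * b / (n + offered_erlangs * b)
    return b

# e.g., 2 erlangs offered to 2 trunks: about 40 percent of attempts are blocked
print(erlang_b(2, 2))  # ≈ 0.4
```

A cell network's admission control plays the same role: it computes whether the fixed pool of cells can absorb a new connection's load, and refuses the connection if not.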

In a cell network, users attempting to establish a connection that cannot handle the additional load receive the cell-equivalent of a “busy tone.” In other packet-based networks, a connection is accepted by the network, and if resources are limited, the traffic is delayed until the resources are free. Since cell networks offer “busy tone,” existing users are not affected by limited resources; only those attempting new connections are affected.