LAN switch

A LAN switch cannot simply begin copying bits indiscriminately from one port to another. The LAN switch is a Data Link Layer device; it is a full participant in the media access control (MAC) scheme being used on each port. Each port is its own collision domain.

Any LAN adapter wishing to transmit a frame onto an Ethernet contends for the medium, gains control, and transmits the frame. These rules apply to the ports of a LAN switch as well. However, the way frames are received by the ports of a LAN switch differs from how they are received by other LAN adapters. Most LAN adapters attached to the LAN listen selectively, copying only those frames addressed to them. A port on a LAN switch is less discriminating: it takes a copy of every frame it sees, and is therefore said to operate in promiscuous mode. Because LAN switches are Data Link Layer devices, they know nothing about the contents of the frames they filter or forward (i.e., the information carried in the Data field). A LAN switch therefore knows nothing about packets or messages. All it knows is that one LAN adapter has something to send to another LAN adapter, and the LAN switch does its best to send the frame on its way.
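
To make those forwarding decisions, a transparent switch learns which port each LAN adapter sits behind by recording the source address of every frame it copies, and it then forwards a frame only toward the port where the destination address was last seen (flooding it out all other ports when the destination is unknown or is a broadcast). The following Python sketch illustrates that learn-and-forward logic in highly simplified form; the class and names are invented for illustration and do not represent any particular product.

  # Minimal sketch of transparent MAC learning and forwarding.
  # Frames are represented simply by their source and destination addresses.
  BROADCAST = "ff:ff:ff:ff:ff:ff"

  class LanSwitch:
      def __init__(self, ports):
          self.ports = ports          # e.g., [1, 2, 3, 4]
          self.mac_table = {}         # MAC address -> port it was last seen on

      def receive(self, in_port, src, dst):
          # Learn: remember which port the source address lives on.
          self.mac_table[src] = in_port
          # Forward: a known unicast goes to one port; an unknown or
          # broadcast destination is flooded to every other port.
          if dst != BROADCAST and dst in self.mac_table:
              out_ports = [self.mac_table[dst]]
          else:
              out_ports = [p for p in self.ports if p != in_port]
          # Never send a frame back out the port it arrived on.
          return [p for p in out_ports if p != in_port]

  sw = LanSwitch(ports=[1, 2, 3, 4])
  print(sw.receive(1, "aa:aa:aa:aa:aa:01", "aa:aa:aa:aa:aa:02"))  # unknown: flooded to [2, 3, 4]
  print(sw.receive(2, "aa:aa:aa:aa:aa:02", "aa:aa:aa:aa:aa:01"))  # learned: forwarded to [1]

Real switches implement this logic in hardware and age out stale table entries, but the per-frame decision being made is the same.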

Unlike the hub, these devices do not indiscriminately repeat all received bits onto all ports. Rather, they read the Layer 2 frame and make intelligent choices about where and when to forward it. The LAN switch was an evolutionary leap from its slower cousin, the bridge, which provided the same basic function. The LAN switch, however, was faster, provided significantly higher port densities, and enabled a host of capabilities its predecessor could not have incorporated. The LAN switch has been so successful that, by 2005, it had almost completely displaced the hub in the market. There is simply no market for a device as simple as the hub when a switch costs pennies more per port and provides so much more functionality.

LAN Switching Modes

One of the greatest claims of LAN switches is their speed. Many of these products achieve latencies that are near wire-speed. In other words, the time between when the first bit arrives at the LAN switch, and when the first bit is forwarded onto the destination link, is only slightly more than the time it would take for that same bit to travel down a relatively short length of cable.

The reason is that LAN switches typically support a cut-through feature. Older conventional bridges were store-and-forward devices: the entire frame must be received and buffered before it can be processed and forwarded. In an Ethernet world, even if there is no other traffic in the network, store-and-forward means that a 1500-octet frame experiences at least 1.2 milliseconds of delay passing through a switching point (the time it takes to transmit or receive a 1500-octet frame at 10 Mbps). On a wire, a bit could travel 363 kilometers in that time!

A LAN switch implementing a cut-through feature need only delay the frame long enough to read the destination address. Because the destination address occupies the first six octets of the frame, this information is the first to arrive. In a cut-through 10 Mbps Ethernet LAN switch, a frame would be delayed a mere 0.0048 milliseconds (4.8 microseconds). A bit can travel only approximately 1460 meters in that time.
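
The arithmetic behind both figures is simple serialization delay: the number of octets the switch must wait for, times eight bits per octet, divided by the 10 Mbps line rate. The short Python sketch below reproduces the calculation; the distance figures assume propagation at roughly the speed of light in a vacuum, which appears to be close to the assumption behind the 363 kilometer and 1460 meter figures quoted above (the exact distances depend on the propagation velocity assumed).

  # Serialization delay: time to clock some number of octets onto a 10 Mbps link.
  LINK_RATE_BPS = 10_000_000        # 10 Mbps Ethernet
  PROPAGATION_MPS = 3.0e8           # assumed propagation speed (~speed of light)

  def serialization_delay(octets, rate_bps=LINK_RATE_BPS):
      return octets * 8 / rate_bps  # seconds

  # Store-and-forward: the entire 1500-octet frame must arrive first.
  sf = serialization_delay(1500)
  print(f"store-and-forward: {sf * 1e3:.1f} ms, ~{sf * PROPAGATION_MPS / 1000:.0f} km of wire")

  # Cut-through: only the 6-octet destination address is needed.
  ct = serialization_delay(6)
  print(f"cut-through: {ct * 1e6:.1f} us, ~{ct * PROPAGATION_MPS:.0f} m of wire")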

However, cut-through comes at a cost. A conventional bridge, having received the entire frame, could process the frame check sequence (FCS) at the end of the frame and detect bit errors. Passing on a frame that contains bit errors is a waste of bandwidth, so the bridge would discard it. A LAN switch with a cut-through feature has already started forwarding the frame when the FCS arrives and cannot choose to discard it. In the early days of LAN switching, this led to a great debate over whether LAN switches should be cut-through or store-and-forward. The former camp pointed to the incredibly low latencies and argued that the bandwidth wasted on errored frames was minimal. The latter camp noted the devastation a malfunctioning LAN adapter could wreak on a network.

As with most things related to networks, the result was a new class of products that could do both. Normally, these devices operate as cut-through switches. However, each frame is error checked as it is forwarded, and the frequency with which errored frames are detected is recorded. If the error rate passes a configurable threshold, the LAN switch reverts to store-and-forward mode until the rate drops back below the threshold. Most of these devices can make this change on a per-port basis.
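
A minimal sketch of that adaptive, error-sensing behavior is shown below. The window size, threshold, and names are invented for illustration; real products expose their own configuration parameters.

  # Sketch of a port that adapts between cut-through and store-and-forward
  # based on the frame error rate observed over a window of recent frames.
  class AdaptivePort:
      def __init__(self, error_threshold=0.05, window=1000):
          self.error_threshold = error_threshold  # tolerated errored-frame fraction
          self.window = window                    # frames per measurement window
          self.frames = 0
          self.errors = 0
          self.mode = "cut-through"

      def record_frame(self, fcs_ok):
          self.frames += 1
          if not fcs_ok:
              self.errors += 1
          if self.frames >= self.window:
              rate = self.errors / self.frames
              # Revert to store-and-forward while errors are high; return to
              # cut-through once the error rate drops back below the threshold.
              self.mode = ("store-and-forward" if rate > self.error_threshold
                           else "cut-through")
              self.frames = self.errors = 0

  port = AdaptivePort()
  for _ in range(1000):
      port.record_frame(fcs_ok=False)   # a badly misbehaving LAN adapter
  print(port.mode)                      # store-and-forward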

Any LAN switch, even when operating in cut-through mode, is forced to operate in store-and-forward mode if the port to which the frame is to be forwarded is currently in use.

Three Physical Forms of Ethernet Switches

Standalone or dedicated

These switches come as fixed-configuration devices and tend to be relatively small (2–24 ports). They have their own power supply, usually implement 10/100 Mbps ports, and can have one higher-speed port that can be used as an inter-switch link (or uplink). They are intended for use in small offices, branch offices, homes, and small workgroup environments.

Chassis or modular

These switches emerged as networks began to scale. It is inefficient to buy several standalone switches and tie them together through their uplink ports to a central switch: the network sees multiple devices to be managed, ports can fail, and cables can short. A modular or chassis switch is essentially a shelf or chassis that can accept multiple cards or blades. In most cases, each blade is itself a small switch capable of switching between its own ports. The blades are interconnected across a high-speed backplane that is used to pass frames from blade to blade. These switches are usually managed platforms, support multiple port rates and media, can be configured with many different combinations of cards, support redundant power supplies and redundant core processors, and can scale to hundreds of ports. In general, blades with lower-speed ports (e.g., 10/100 Mbps) support 16 to 96 ports. For blades with higher-speed ports, the port count decreases (e.g., GigE blades typically have 2–16 ports, and 10GigE blades typically have 1–2 ports).

Stackable

These switches split the difference between standalone and chassis-based systems. There is no need to buy a large (and potentially expensive) chassis. After purchasing the base unit, which typically houses the power supply, management function, and a small number of ports, additional modules can be purchased and “stacked.” The units are typically interconnected via an external high-speed backplane implemented with cables or specialized connectors. Today, most stackables offer most of the functions found in chassis-based systems. They are ideal for small networks that are likely to grow over time.

Congestion in LAN Switches

LAN switches are particularly susceptible to congestion due to the nature of the switch and the manner in which many LAN environments operate. Whether operating in cut-through or store-and-forward mode, LAN switches are high-speed devices with extremely low latencies. As such, congestion conditions can arise very quickly. This is especially true when multiple end stations are accessing the same target device, a common situation in many networks.

Consider a 24-port LAN switch with 22 workstations, 2 servers, and all ports operating at the same speed. If 6 users upload files to one of the servers simultaneously, the switch receives 6 traffic streams destined for the same port (i.e., the server's port). Assuming all ports operate at the same transmission rate and the frames are approximately the same size, only 1 of the 6 frames is forwarded immediately; the other 5 are buffered.

If the network protocols support windowing for flow control (e.g., the Transmission Control Protocol), the senders continue to transmit frames even though the first frame has not yet been acknowledged. The second round of frames is also buffered because the switch is still forwarding the first round. If this continues for too long, the switch becomes congested.
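
The imbalance is easy to quantify: with every port running at the same rate, the server's port can drain only one frame in the time it takes the six senders to deliver six, so the backlog grows by roughly five frames per frame time. The toy calculation below illustrates this under the assumptions of the example above, plus two of my own (10 Mbps ports and 1500-octet frames).

  # Toy model of the congestion example: 6 clients sending to one server port,
  # all ports at the same rate and all frames the same size.
  senders = 6
  frame_bits = 1500 * 8
  port_rate_bps = 10_000_000
  frame_time = frame_bits / port_rate_bps   # time to send or receive one frame

  backlog = 0
  for round_number in range(1, 6):
      backlog += senders    # 6 frames arrive per frame time
      backlog -= 1          # only 1 frame drains toward the server per frame time
      print(f"after round {round_number}: {backlog} frames buffered, "
            f"newest frame waits ~{backlog * frame_time * 1e3:.1f} ms")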

There are several approaches to solving the congestion problem. One is to implement a switch with different port transmission rates. For example, an Ethernet switch could be equipped with two GigE ports and 22 10/100 Ethernet ports. Servers attached to the GigE ports are less likely to congest. This architecture is commonly found for server and inter-switch links. It is not a panacea, however; it opens the door to congestion in the opposite direction (i.e., when the system with the high-speed attachment begins to stream frames to a system with a slower link).

The simplest way to manage congestion is to discard excess traffic. The danger of this approach is that it can cause end systems to retransmit even more frames, worsening the congestion. Some switches use sophisticated discard management strategies, including predictive algorithms, to selectively discard the frames that will minimize retransmissions and the impact on end applications.

Another approach is to increase buffer capacity in the LAN switch or to manage buffer capacity with a sophisticated algorithm. Congestion tends to be momentary; if the switch has sufficiently large buffers, it can survive periods of congestion without dropping frames. Flexible allocation of memory to ports allows congested ports to get more buffer space when they need it. Larger buffers also have a downside: the more the switch backlogs in a buffer, the greater the delay each frame experiences. Add too much delay and the upper layers can fail.
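
As a rough sense of that trade-off (a back-of-the-envelope calculation, not a figure from any particular product), the queuing delay a backlog adds is simply the number of backlogged bits divided by the rate of the output port:

  # Queuing delay added by a backlog of buffered data on an output port:
  # every bit ahead of a frame must be serialized before the frame can leave.
  def queuing_delay_ms(buffered_bytes, port_rate_bps):
      return buffered_bytes * 8 / port_rate_bps * 1e3

  for kb in (64, 256, 1024):
      print(f"{kb:4d} KB backlog: {queuing_delay_ms(kb * 1024, 10_000_000):6.1f} ms at 10 Mbps, "
            f"{queuing_delay_ms(kb * 1024, 100_000_000):5.1f} ms at 100 Mbps")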

For half-duplex ports, a third alternative is available: back pressure. Back pressure is nothing more than a simulated collision. If the switch is congested, it generates a jamming signal when a new transmission is received. This fools the transmitting LAN adapter into believing that a collision has occurred, so it stops, waits out its backoff interval, and tries again. This gives the switch time to forward some of the frames out of the congested buffer.

Back pressure does not work on a full-duplex link, as there are no collisions. For this case, the IEEE has defined a PAUSE capability that can be negotiated as part of autonegotiation. A receiver can send a PAUSE message (part of the MAC Control scheme), which tells the transmitter to stop transmitting temporarily (i.e., flow control).
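
For reference, the PAUSE message defined in IEEE 802.3x is an ordinary MAC Control frame: a reserved multicast destination address, the MAC Control EtherType (0x8808), the PAUSE opcode (0x0001), and a pause time expressed in quanta of 512 bit times. The sketch below assembles those fields; a real adapter builds and honors PAUSE frames in hardware, so this is purely illustrative.

  import struct

  # Assemble the fields of an IEEE 802.3x PAUSE frame (padding and FCS omitted).
  PAUSE_DST = bytes.fromhex("0180c2000001")   # reserved multicast destination
  MAC_CONTROL_ETHERTYPE = 0x8808
  PAUSE_OPCODE = 0x0001

  def build_pause_frame(src_mac, pause_quanta):
      # pause_quanta: requested pause time in units of 512 bit times.
      header = PAUSE_DST + src_mac + struct.pack("!H", MAC_CONTROL_ETHERTYPE)
      control = struct.pack("!HH", PAUSE_OPCODE, pause_quanta)
      return header + control

  frame = build_pause_frame(bytes.fromhex("001122334455"), pause_quanta=0xFFFF)
  print(frame.hex())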

LAN Switch Applications

A Premises-Based Scenario

The visual depicts a common scenario incorporating a LAN switch. A departmental network is supporting one server, one router, and multiple clients. The router is the path to the Internet and to the corporate intranet. Initially deployed using hubs arranged hierarchically, the network is overburdened and in need of a performance solution.

The solution opted for in this example is to replace the top-level hub with a LAN switch. Thus, a single collision domain becomes multiple collision domains, and the backbone of this small internetwork becomes the LAN switch. Furthermore, the server and the router are given their own dedicated ports and are operating in full-duplex mode to enhance performance. We can see the performance results possible with this configuration by examining some hypothetical traffic statistics.

Let us assume that the network depicted on the visual supports a total of 90 clients implementing both peer-to-peer (i.e., Microsoft Network) and server-based (i.e., network client software) transmissions. We further assume symmetric traffic generation (i.e., for every transmission in one direction, there is one of equal size in the opposite direction). This assumption is typically false for a specific query/response pair, but can be true as an overall network average. Finally, we assume that the clients generate an average of 10 frames per second each. These frames are an average of 800 bytes (6400 bits) long and are distributed as follows: 50 percent are destined for the server, 20 percent are destined for the Internet or intranet (i.e., the router), 20 percent are peer-to-peer between clients, and 10 percent are broadcast or multicast frames. Assuming every frame destined off net or for the server generates a response of equal size, the LAN has the following load:

  • 64 kbps per client (5.76 Mbps from 90 clients)
  • 2.88 Mbps in server responses
  • 1.152 Mbps in responses from the Internet or intranet

This 10 Mbps Ethernet is carrying a total of 9.792 Mbps of traffic from 92 attached devices, and is catastrophically overburdened. If the network is reconfigured as depicted (at a cost of one LAN switch), and the distribution of devices across the collision domains achieves the 80/20 rule for peer-to-peer traffic, then each of these shared LANs now has the following load characteristics:

  • 64 kbps from each attached client (1.92 Mbps from 30 clients)
  • 960 kbps in server responses
  • 384 kbps in responses from the Internet or intranet
  • 384 kbps in broadcasts/multicasts from clients on other segments
  • 384–768 kbps in peer-to-peer transmissions originating on other segments

The three collision domains where clients are connected are now each supporting 4.032–4.416 Mbps of traffic. The server still receives and transmits 2.88 Mbps, but does so on a dedicated, full-duplex Fast Ethernet link (i.e., 100 Mbps in each direction). The router still receives and transmits 1.152 Mbps, but also does so on a dedicated, full-duplex link. Further performance improvements can be achieved on the shared segments by deploying more of them and reducing the client count per collision domain, assuming a division of devices can be derived that keeps a high percentage of the peer-to-peer traffic on the local segment.
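
The figures above follow directly from the stated assumptions, and the short sketch below reproduces them. The 384–768 kbps of peer-to-peer spillover from other segments is left as the quoted range, because its exact value depends on how strictly the 80/20 split holds.

  # Load calculation for the premises scenario, from the stated assumptions.
  clients = 90
  frames_per_second = 10
  frame_bits = 800 * 8                         # 800-byte average frame

  per_client_bps = frames_per_second * frame_bits        # 64 kbps per client
  offered = clients * per_client_bps                     # 5.76 Mbps from 90 clients
  server_responses = 0.50 * offered                      # 2.88 Mbps
  router_responses = 0.20 * offered                      # 1.152 Mbps
  print(f"shared hub LAN: {(offered + server_responses + router_responses) / 1e6:.3f} Mbps")

  # After the switch is installed: 30 clients per shared segment.
  seg_clients = 30
  seg_offered = seg_clients * per_client_bps             # 1.92 Mbps
  seg_server = 0.50 * seg_offered                        # 960 kbps in server responses
  seg_router = 0.20 * seg_offered                        # 384 kbps from Internet/intranet
  seg_broadcast = 0.10 * (clients - seg_clients) * per_client_bps   # 384 kbps
  subtotal = seg_offered + seg_server + seg_router + seg_broadcast
  print(f"per segment, before peer-to-peer spillover: {subtotal / 1e6:.3f} Mbps")

Adding the quoted 384–768 kbps of off-segment peer-to-peer traffic to that per-segment subtotal yields the 4.032–4.416 Mbps range above.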

A SAN Scenario

Ethernet switches are becoming common in the storage area network (SAN). Although SANs have been the province of such technologies as Fibre Channel, FICON, and ESCON since their inception, the emergence of quality of service (QoS) capabilities in Ethernet switches has opened the door to the use of these switches for SANs as well.

The switches used in this context are typically GigE switches with QoS capabilities, and the penetration of this technology into the SAN market is only just beginning. But with major players like EMC moving aggressively in this direction, and the price points of Ethernet switches vs. Fibre Channel switches, this is a trend expected to continue and accelerate.

Whatever Ethernet touches, it consumes.

A MAN Scenario

Ethernet switches are being used in the central office (CO) as more and more carriers roll out metro-scale switched Ethernet services. The switches in the central office are high-end, enterprise-scale switches such as the Cisco Catalyst 6500 series or the Extreme Networks BlackDiamond 10808.

Because a switched Ethernet network is connectionless (datagram-oriented) by nature, these switches and networks rely on VLAN technology to keep customer traffic streams separate, and they provide services as standardized by the Metro Ethernet Forum (MEF). The switches also support quality of service (QoS) capabilities.
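
As a concrete illustration of the VLAN mechanism these services rely on, the sketch below inserts an IEEE 802.1Q tag, which carries both the VLAN ID used to keep traffic streams separate and a 3-bit priority field used for QoS, into an untagged Ethernet frame. The helper function is illustrative only, not drawn from any product or library.

  import struct

  # Insert an IEEE 802.1Q VLAN tag into an untagged Ethernet frame.
  TPID = 0x8100                                  # tag protocol identifier

  def add_vlan_tag(frame, vlan_id, priority=0):
      # Tag control info: 3-bit priority, 1-bit DEI (0), 12-bit VLAN ID.
      tci = (priority << 13) | (vlan_id & 0x0FFF)
      tag = struct.pack("!HH", TPID, tci)
      # The tag is inserted right after the destination and source addresses.
      return frame[:12] + tag + frame[12:]

  untagged = bytes.fromhex("ffffffffffff" "001122334455" "0800") + b"payload"
  print(add_vlan_tag(untagged, vlan_id=100, priority=5).hex())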

These services are very popular with customers. Ethernet is a familiar technology to them. The economies of scale of Ethernet are significant, which makes the services relatively low cost (some current offerings have price points as low as $3–4 per megabit per month for GigE services), and the customer equipment is low cost as well. Examples include Verizon’s Switched Ethernet Service (SES), SBC’s OptiMan, and BellSouth’s Native Mode LAN Interconnection (NMLI). Many smaller carriers offer switched Ethernet services in competition with the major carriers' offerings in major metropolitan locations.

A WAN Scenario

With the emergence of more and more point-to-point Ethernet private line services, a new model for WAN interconnectivity is possible. In the scenario depicted on the visual, if the WAN circuits were conventional leased lines (e.g., DS-1, DS-3, SONET), the three branch offices would have to implement routers for the sole purpose of interconnecting the local network to the WAN link.

With Ethernet circuits in the WAN, the branch office switch can play a new role: the access point to the WAN. This eliminates a piece of equipment that must be maintained, simplifies the Layer 3 addressing issues, and provides a single point of management for the branch office.