Transmission Control Protocol
The Transmission Control Protocol (TCP) is designed to operate over a wide variety of network topologies and to provide connection-oriented service with guaranteed, in-order delivery. TCP provides reliable interprocess communication to higher layers over a possibly unreliable subnetwork.
- 1 TCP services
- 2 Uses of TCP
- 3 The TCP Header
- 4 TCP Connections
- 5 PodSnacks
TCP Services
Upper layers provide a data stream for transmission across the network (i.e., TCP inserts no record markers). The stream is a sequence of octets (or bytes). The destination TCP entity passes the stream to the destination application in the same sequence. In effect, TCP provides an ordered pipe that allows one application entity to pass information to another.
TCP defines a three-phase communication protocol. The first phase involves the establishment of a connection with the destination application. This is analogous to the process of placing a telephone call (although there is no actual circuit placed inside the network). We must first establish a connection to the destination host. Once the connection has been established, data can be transferred. In our phone call analogy this is equivalent to the act of having a conversation. When data transfer is complete the connection must be terminated (i.e., the phone must be returned to its cradle). This frees network resources for other connections.
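The three phases map directly onto the Berkeley sockets API. As a minimal sketch, the loopback echo exchange below exercises all three: the port is chosen by the operating system and the echo behavior is purely illustrative, not part of TCP itself.

```python
import socket
import threading

def start_echo_server() -> int:
    """One-shot echo server on an ephemeral loopback port (passive open)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)                      # passive open: wait for connections

    def serve() -> None:
        conn, _ = srv.accept()         # handshake is completed by the kernel
        conn.sendall(conn.recv(4096))  # phase 2: echo the data back
        conn.close()                   # phase 3: orderly release (FIN)
        srv.close()

    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()[1]

def echo_once(port: int, payload: bytes) -> bytes:
    # Phase 1: active open; connect() drives the three-way handshake.
    with socket.create_connection(("127.0.0.1", port), timeout=5) as sock:
        sock.sendall(payload)          # phase 2: data transfer
        reply = sock.recv(4096)
    # Phase 3: leaving the with-block closes the socket, sending a FIN.
    return reply
```

Note that the application never sees the handshake or the FIN exchange; the kernel's TCP implementation performs them on its behalf.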
TCP implements a buffered transmission stream. When a connection is established two buffers are allocated at both ends of the connection. One is a transmit buffer and the other is a receive buffer. When the application submits data to be transmitted by TCP, TCP buffers it in the transmit buffer until it has accumulated some pre-established minimum amount or until a timer expires, whichever comes first. When TCP is ready to transmit, it assembles a unit of transfer called a segment and sends it across the network using the services of the Internet protocol (IP). When the data arrives at the other end of the connection, it is buffered in the receive buffer. The application can remove it from this receive buffer as needed, which is not necessarily in the same size units that were transmitted.
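The transmit-buffer behavior just described can be modeled in a few lines. The 500-octet threshold below is illustrative (it matches the example connection used later in this section), not a value TCP mandates:

```python
class TransmitBuffer:
    """Toy model: data accumulates until a minimum threshold is reached,
    then a segment is emitted; a timer flush sends whatever remains."""

    def __init__(self, threshold: int = 500):
        self.threshold = threshold
        self.buffer = bytearray()

    def submit(self, data: bytes) -> list[bytes]:
        """Buffer application data; return any segments now ready to send."""
        self.buffer.extend(data)
        segments = []
        while len(self.buffer) >= self.threshold:
            segments.append(bytes(self.buffer[:self.threshold]))
            del self.buffer[:self.threshold]
        return segments

    def flush(self) -> list[bytes]:
        """Called when the buffer timer expires: send whatever is queued."""
        if not self.buffer:
            return []
        segment = bytes(self.buffer)
        self.buffer.clear()
        return [segment]
```

Submitting 50, then 100, then 400 octets produces no segments, then no segments, then one 500-octet segment with 50 octets left buffered, exactly the pattern of the data-transfer walkthrough later in this section.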
TCP treats the stream as an unstructured sequence of octets. Any implied meaning or order must be understood between two communicating entities.
Connections provided by TCP allow concurrent transfer of data in both directions, forming a full-duplex stream. From the perspective of two communicating entities, the full-duplex stream appears as two parallel connections with no interaction between them. This gives peer TCP entities the ability to piggyback control information concerning one direction of communication on segments traveling in the opposite direction.
Uses of TCP
Most Internet applications use TCP services. Interactive applications such as Web browsing, email, instant messaging (IM), and the remote access protocols (e.g., Telnet and Secure Shell) all run over TCP, as do bulk transfer protocols such as the File Transfer Protocol (FTP), the Network News Transfer Protocol (NNTP), and the HyperText Transfer Protocol (HTTP). From a typical home or work PC, most traffic is carried in TCP segments.
The TCP Header
TCP treats the data it receives from the Application Services Layer as though it were a continuous stream of bytes. TCP divides this byte stream into segments for transmission via the network to a peer TCP entity in the destination host. Besides transferring data, segments can also establish, maintain, or terminate a connection.
The header has a number of basic fields, followed by space for options. The basic fields, together with the Options field, must total a multiple of four octets because the Data Offset field, which specifies the header length, records it in 32-bit (4-octet) words. As in IP, a Pad field is provided to round the header to the next four-octet multiple. The basic fields occupy 20 octets; the Options and Pad fields may or may not be present.
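To make the Data Offset arithmetic concrete, the sketch below packs a minimal 20-octet header, assuming the standard field layout (ports, sequence and acknowledgment numbers, offset/flags, window, checksum, urgent pointer). The field values are arbitrary illustrations and the checksum is left at zero for brevity:

```python
import struct

def pack_tcp_header(src_port: int, dst_port: int, seq: int,
                    ack: int, flags: int, window: int) -> bytes:
    # Header length in 32-bit words: 5 words * 4 octets = 20 octets,
    # i.e., the basic fields with no Options or Pad present.
    data_offset = 5
    # The 16-bit field holds offset (4 bits), reserved (6 bits), flags (6 bits).
    offset_and_flags = (data_offset << 12) | flags
    return struct.pack(
        "!HHIIHHHH",          # network byte order, 20 octets total
        src_port, dst_port,
        seq, ack,
        offset_and_flags,
        window,
        0,                     # checksum (omitted in this sketch)
        0,                     # urgent pointer
    )

header = pack_tcp_header(12345, 80, 55, 401, 0x10, 1000)  # ACK flag set
```

If options were present, data_offset would grow accordingly (e.g., a 4-octet MSS option makes it 6), which is why the Options plus Pad fields must land on a 4-octet boundary.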
At call establishment, TCP uses the Options field to negotiate the maximum segment size (MSS) for that connection. It is important to choose this value carefully: a maximum size that is too large or too small can adversely affect the efficiency of the network. MSS is, in fact, one of the few defined TCP options; if it is absent, an MSS of 536 octets is assumed. (Several other options are under discussion, but not widely implemented.)
For example, in an Ethernet environment (1,500-octet MTU), an MSS greater than 1,460 octets (1,500 minus 20 octets each of IP and TCP header) means that the network will need to fragment the segment, increasing the probability of communication failure and the need for retransmission. Alternatively, an MSS of 40 will produce small IP packets that are never fragmented. However, every IP datagram will then contain 40 octets of header (i.e., 20 for TCP, 20 for IP) and 40 octets of data, yielding only 50% network efficiency.
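The efficiency figure is just the ratio of payload to total octets carried, assuming 20 octets each of TCP and IP header and no fragmentation:

```python
def efficiency(mss: int, header_octets: int = 40) -> float:
    """Fraction of each datagram that is application data."""
    return mss / (mss + header_octets)

# An MSS of 40 gives 40 / (40 + 40) = 0.5, the 50% figure above;
# an MSS of 1460 gives 1460 / 1500, roughly 97%.
```

This is why larger segments are preferred whenever the path MTU allows them.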
While the TCP header does not define a Segment Length field (and hence does not explicitly limit the size of a segment), there is an implicit maximum: TCP segments are carried inside IP packets, whose total length cannot exceed 65,535 octets.
The six bits immediately following the Data Offset field are reserved and not currently used.
TCP Connections
The visual depicts how TCP establishes connections. Note that TCP uses a three-way handshake to reduce the probability that delayed duplicate segments will disrupt a connection.
Before any TCP-based application can accept a connection, a passive open is executed by the server process. This simply alerts the local TCP process that a particular application is available and waiting for connections. TCP will automatically reject any connection request to a port number that has not been passively opened. Notice that the passive open is a local operation (in this case, at Host B) and does not result in any network traffic.
When a client process (at Host A) wants to open a connection to a server, it executes an active open. Unlike a passive open, the active open results in the assignment of a TCP port, and a connection establishment process is initiated. Host A sends a segment with the SYN bit (in the Control Flags field) set and the sequence number it wants to start with. The initial sequence number (ISN) is derived from the host’s system clock and the last sequence number used. Use of the system clock value reduces the probability that a system crash will cause communication problems at the destination host when the source comes back online and tries to reestablish its connections.
Host B responds with a segment that has both the SYN and ACK bits set. The SYN bit indicates the presence of an ISN on the reverse path; the ACK bit acknowledges receipt of the initiator’s synchronization request. Note that the acknowledgment number is one larger than the ISN: even though the SYN segment carries no data, it consumes one sequence number, which prevents confusion in the event of segment loss during the connection establishment process. Thus, even though Host A chose an ISN of 54, the first segment of data sent by Host A will actually use sequence number 55.
The final communication is from Host A. It consists of a segment with the ACK bit set to acknowledge Host B’s synchronization request. Again, note the incremented sequence number. Once the three-way communication is complete, the connection is established. After this point all segments will contain a set ACK bit and an unset SYN bit.
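The handshake above can be sketched as three (flags, sequence, acknowledgment) tuples. The client ISN of 54 comes from the scenario; the server ISN of 400 is an assumption chosen to be consistent with the acknowledgment number 401 that appears in the data-transfer walkthrough below:

```python
def three_way_handshake(client_isn: int, server_isn: int) -> list[tuple]:
    """Return the three segments of connection establishment as
    (flags, sequence_number, acknowledgment_number) tuples."""
    syn = ("SYN", client_isn, None)                    # Host A -> Host B
    syn_ack = ("SYN+ACK", server_isn, client_isn + 1)  # Host B -> Host A
    ack = ("ACK", client_isn + 1, server_isn + 1)      # Host A -> Host B
    return [syn, syn_ack, ack]
```

Each SYN consumes one sequence number, so both acknowledgment numbers are one greater than the ISN they acknowledge, and both sides begin data transfer at ISN + 1.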
A TCP connection is peer-to-peer, full duplex communication. There is no primary/secondary relationship between the stations.
(Note: In all scenarios shown here, the presence of an ack value implies that the ACK bit has been set; if no ack value is shown, it means that the ACK bit has been cleared.)
The visual depicts an exchange of data between two hosts using an established TCP connection. Let’s examine the process.
The first three events are requests from the client application on Host A to transmit data to Host B. For this connection, the maximum data portion of a segment is 500 octets. The first SEND request contains 50 octets of data and the second contains 100 octets. The TCP process at Host A buffers both of these blocks because there is not yet enough data to justify a segment (the same reasoning that underlies Nagle’s algorithm). The next SEND is a request to transmit 400 octets. The Host A TCP process now has 550 octets of data, enough to generate and transmit a segment. The segment carries sequence number 55 and also contains an acknowledgment verifying that sequence number 401 is the first expected in the reverse direction.
Host B, upon receiving the segment, generates its own segment with acknowledgment number 555, which indicates that octets 55 through 554 arrived safely and octet 555 is expected next. Shortly after, the server application accesses and retrieves the 500 octets from the receive buffer.
In short order, both the client and server application processes request a transmission. The client requests that an additional 350 octets be transmitted. The server requests that 600 octets be transmitted. The 350 octets submitted by the client results in a total of 400 octets buffered for transmission. The 600 octets submitted by the server result in the transmission of a 500 octet segment with sequence number 401.
After Host B’s transmission arrives at Host A, a segment is transmitted in the reverse direction. The transmission could have been prompted by the expiration of the buffer timer (i.e., the transmit buffer has had untransmitted data for too long) or by the expiration of the acknowledgment timer (i.e., Host A cannot wait any longer to acknowledge the received segment). In either case, the final 400 octets are transmitted and the acknowledgment to Host B’s transmission is carried in the segment.
The same process occurs when this segment arrives at Host B. Note the progression of the sequence numbers and acknowledgments on the accompanying visual. The final transmission originates from Host A and is strictly an acknowledgment of Host B’s final segment.
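The acknowledgment arithmetic in this walkthrough follows one rule: the acknowledgment number is the sequence number of the next octet expected, i.e., the first octet of the received segment plus its payload length.

```python
def next_ack(seq: int, payload_len: int) -> int:
    """Cumulative acknowledgment: next octet expected from the peer."""
    return seq + payload_len
```

So Host A's 500-octet segment starting at sequence 55 is acknowledged with 555, and Host B's 500-octet segment starting at 401 is acknowledged with 901.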
Data Transfer with Loss
The visual models a data transfer in which a segment is lost. The request by the Host A application to transfer 1500 octets of data results in the generation of three segments with 500 octets each. In this scenario, the second of these segments is lost in the network. Host B, meanwhile, acknowledges the receipt of Host A’s first segment while sending data of its own back to Host A. When Host B receives the third segment, it buffers it, but does not acknowledge it because there are 500 octets missing. If Host B had acknowledged the third segment with sequence number 2455, it would imply that all octets through 2454 had arrived safely, which is clearly not the case.
After a period of time governed by a retransmission timer (the timer that expires when an expected acknowledgment fails to arrive), Host A retransmits the second segment. When Host B receives the retransmitted segment, it can acknowledge both the second and third segments by using acknowledgment number 2455. If Host B’s segment does not arrive at Host A before the retransmission timer on the third segment expires, Host A can retransmit the third segment. In the example on the visual, this event does not occur; if it did, Host B would simply ignore the duplicate segment.
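The receiver's side of this loss scenario can be modeled with cumulative acknowledgment plus out-of-order buffering. The sequence numbers below are inferred from the acknowledgment number 2455 in the scenario (three 500-octet segments, so the first begins at 955):

```python
class Receiver:
    """Toy cumulative-ack receiver with out-of-order buffering."""

    def __init__(self, expected: int):
        self.expected = expected   # next sequence number wanted
        self.out_of_order = {}     # seq -> length, segments held back

    def receive(self, seq: int, length: int) -> int:
        """Process a segment; return the cumulative ack number."""
        if seq == self.expected:
            self.expected += length
            # Pull in any buffered segments that are now contiguous.
            while self.expected in self.out_of_order:
                self.expected += self.out_of_order.pop(self.expected)
        elif seq > self.expected:
            self.out_of_order[seq] = length  # buffer, but do not ack yet
        # Duplicates (seq < expected) are simply ignored.
        return self.expected
```

The third segment arriving before the retransmitted second one leaves the ack number unchanged; the retransmission then advances it past both segments at once.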
This series of events is also essentially what occurs when an acknowledgment is lost. In some cases, the acknowledgment for a subsequent segment will make up the loss. If it does not arrive in time, however, the source will retransmit the segment. This might prompt a repeat of the acknowledgment, but the receiver will discard the duplicate segment.
The visual depicts the operation of the sliding window protocol on a TCP connection. In this scenario, the receive buffer at both ends of the connection is set to 1000 octets. When the TCP process at Host A receives a request from the local application to transmit 1500 octets, it immediately transmits two 500 octet segments. At this point it has filled the window and is obliged to stop.
When Host B receives the first segment, it acknowledges the segment and reduces the window to 500 octets. This is because the 500 octets just received have been placed in the receive buffer, but have not yet been removed by the local application. Therefore, there are only 500 octets remaining in the buffer, so Host B reduces the window. When this acknowledgment arrives at Host A, the TCP process at Host A can free the transmit buffer space occupied by the acknowledged segment, but cannot transmit any more data.
Meanwhile, the application at Host B accesses the receive buffer and removes 300 octets. The buffer now has 800 octets free. When the next segment arrives it is placed in the buffer, leaving 300 octets free; Host B’s acknowledgment informs Host A of this fact. Since Host A has no unacknowledged data in the network, it can go ahead and send another 300 octets of data.
Meanwhile, the application at Host B retrieves another 600 octets from the receive buffer, leaving 900 octets free. When the next 300-octet segment arrives from Host A, the free buffer space is reduced to 600 octets, and the acknowledgment indicates this. Host A now has permission to transmit the remainder of the original 1500 octets. One potential problem with this approach is “silly window syndrome,” in which a slow-reading receiver advertises a series of tiny window openings and the sender responds with correspondingly tiny, inefficient segments.
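The window bookkeeping in this walkthrough is simply the free space remaining in the 1000-octet receive buffer, advertised back in each acknowledgment:

```python
class ReceiveWindow:
    """Receive-buffer accounting: the advertised window is the free space."""

    def __init__(self, size: int = 1000):
        self.size = size
        self.buffered = 0                  # octets awaiting the application

    def segment_arrives(self, length: int) -> int:
        self.buffered += length            # data placed in the buffer
        return self.window()

    def application_reads(self, length: int) -> int:
        self.buffered -= length            # data removed by the application
        return self.window()

    def window(self) -> int:
        return self.size - self.buffered   # value advertised in each ack
```

Replaying the scenario (segment of 500, read of 300, segment of 500, read of 600, segment of 300) reproduces the advertised windows of 500, 800, 300, 900, and 600 octets described above.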
In addition to flow control by the receiver, the transmitter exerts flow control on itself. Whenever an acknowledgment times out, the transmitter assumes the problem is congestion and responds in two ways: it reduces its own window size and increases its acknowledgment timeout value, which together reduce the amount of traffic the transmitter introduces into the network and increase its tolerance for network delay. When acknowledgments are received, the window is increased (up to the limit established by the receiver) and the timeout reduced. To keep the connection stable and prevent wild oscillations in transmission rate, TCP slows itself down more quickly than it speeds itself back up. Many well-documented algorithms govern this process, but they are beyond the scope of this course.
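The asymmetry described above (shrink fast on trouble, grow slowly on success) can be sketched as follows. The constants are purely illustrative; they are not the values mandated by any specific congestion-control algorithm:

```python
def on_timeout(cwnd: int, rto: float) -> tuple[int, float]:
    """Timeout: assume congestion; halve the window, double the timeout."""
    return max(cwnd // 2, 1), rto * 2

def on_ack(cwnd: int, rto: float, receiver_window: int) -> tuple[int, float]:
    """Acknowledgment: grow the window by one segment (capped by the
    receiver's advertised window) and gently relax the timeout."""
    return min(cwnd + 1, receiver_window), max(rto / 1.25, 0.2)
```

Because a timeout halves the window while an acknowledgment adds only one segment, recovery from a congestion event is gradual, which is what damps the oscillations mentioned above.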
The procedure for closing a connection is a modified version of the three-way handshake used to open a connection. Recall that the TCP connection is full duplex and, in essence, comprises two independent data streams, one in each direction.
When an application informs the TCP process that it has no more data to transmit and wants to close the connection, the TCP process sends a segment with the FIN bit set. The receiving TCP entity immediately acknowledges receipt of this segment and informs the local application that no more data is expected; TCP no longer accepts data for transmission in this direction. Note that, like a SYN, a FIN consumes one sequence number, so the acknowledgment number is incremented even though the FIN segment carries no data.
In the scenario shown here, the client has closed its direction of transmission on the TCP virtual circuit. The server now has the option of transmitting more information and then closing the connection, or closing the connection immediately. Closure of the other data stream is accomplished by issuing a segment with the FIN bit set, which is immediately acknowledged. The connection is now closed in both directions in an orderly fashion.
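The orderly close of both directions can be sketched as four segments, mirroring the handshake notation used earlier. The sequence numbers are hypothetical placeholders; in the immediate-close case shown here, the server's FIN and its acknowledgment of the client's FIN could also be combined into one segment:

```python
def close_connection(client_fin_seq: int, server_fin_seq: int) -> list[tuple]:
    """Return the segments of a two-sided orderly close as
    (flags, sequence_number, acknowledgment_number) tuples."""
    return [
        ("FIN", client_fin_seq, None),      # client half-closes its direction
        ("ACK", None, client_fin_seq + 1),  # server acks; client side is closed
        ("FIN", server_fin_seq, None),      # server closes its direction
        ("ACK", None, server_fin_seq + 1),  # client acks; connection is closed
    ]
```

As with the SYN, each FIN consumes one sequence number, so each acknowledgment is one greater than the FIN it confirms.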
The connection can also be closed abruptly in both directions with the RST bit. This abrupt closure can result in loss of data, so it should be used only in emergencies.
PodSnacks
Transmission Control Protocol (MP3): http://podcast.hill-vt.com/podsnacks/2007q1/tcp.mp3