Developed in the 1940s, the first computers were mainframes and were largely confined to research and development. By the late 1950s, however, computers had begun to appear in some government and corporate environments. Highly specialized, large, and expensive, these centralized systems occupied entire rooms and required significant environmental control.
Users physically brought programs and data to the mainframe, usually on punched cards or paper tapes. Input/output (I/O) devices read the jobs into the computer, where they were collected into groups, or batches. The central processing unit (CPU) then processed each batch of jobs sequentially to completion, hence the term "batch processing." Turnaround times were typically quite long (e.g., overnight). These early systems were not interconnected. The processor only had to communicate with peripheral I/O devices (e.g., card readers and printers) over short distances at relatively low speeds.
In the 1960s and 1970s the nature of processing changed from batch processing to interactive timesharing. Users were directly connected to the computer via "dumb" terminals, which were I/O devices with no processing power. Each user perceived dedicated computer resources, but in reality they were sharing the processing power of the mainframe with other users. The network associated with systems in this era was minimal and focused on connecting terminals to the mainframe, a truly hierarchical model. Link speeds of 300 bits per second (bps), or roughly 30 characters per second (asynchronous framing required about 10 bits per character, including start and stop bits), were acceptable because few humans could type or read faster than that.
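To make the arithmetic concrete, here is a minimal sketch, assuming the common asynchronous serial framing of one start bit, eight data bits, and one stop bit (10 bits per character in total); the function name is illustrative, not from any particular library:

```python
# Effective character rate of a serial terminal link.
# Assumption: asynchronous framing of 1 start bit + 8 data bits + 1 stop bit,
# i.e., 10 transmitted bits per character.
def chars_per_second(link_bps: int, bits_per_char: int = 10) -> float:
    """Return how many characters per second the link can carry."""
    return link_bps / bits_per_char

print(chars_per_second(300))  # a 300 bps dumb-terminal link -> 30.0
```

At 10 bits per character, 300 bps works out to the 30 characters per second quoted above, comfortably faster than most typists.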
As the number of terminals increased, devices called “terminal servers” or “cluster controllers” were developed to concentrate terminals so they could share a common transmission facility to the host. These multiplexers were installed wherever a group or a cluster of terminals was to be deployed, such as in a branch office of a large corporation.
In 1974, IBM developed a networking blueprint called Systems Network Architecture (SNA). SNA is a set of rules or protocols that describe how IBM mainframes and terminal devices communicate. The architecture is hierarchical. The mainframe runs software that connects users to different applications. Users access the host via local or remote terminals connected via terminal servers, which IBM calls cluster controllers (CC).
Cluster controllers are rarely connected to the host directly. Instead, they connect to another computer, called the front end processor (FEP) or communications controller (COMC), which is in turn connected to the host. The COMC offloads communications functions from the mainframe: it "owns" multiple cluster controllers, oversees their communication, and multiplexes their traffic onto a common link to the host. Two or more mainframes and their COMCs can also be connected to each other, in which case the COMC must perform routing functions.

Although the mainframe industry has lost market share to vendors of smaller systems, large and expensive mainframe systems are still with us today and are not likely to disappear soon. The mainframe is now just one component in the corporate computing structure and is usually referred to as a mainframe server or an enterprise server.
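The hierarchy described above can be sketched as a toy object model. This is not real SNA, just an illustration of the topology: terminals attach to cluster controllers, cluster controllers attach to a COMC, and the COMC multiplexes all their traffic onto a single link to the host. All class and method names here are invented for the example.

```python
class Host:
    """The mainframe: the root of the hierarchy."""
    def __init__(self, name: str):
        self.name = name
        self.received = []  # frames arriving on the single host link

    def deliver(self, frame):
        self.received.append(frame)


class COMC:
    """Front end processor: owns cluster controllers and multiplexes
    their traffic onto one common link to the host."""
    def __init__(self, host: Host):
        self.host = host
        self.controllers = []

    def attach(self, cc: "ClusterController"):
        self.controllers.append(cc)
        cc.comc = self

    def uplink(self, frame):
        # One shared link to the host carries every controller's traffic.
        self.host.deliver(frame)


class ClusterController:
    """Concentrates a cluster of dumb terminals onto one line to the COMC."""
    def __init__(self, name: str):
        self.name = name
        self.comc = None

    def send(self, terminal: str, data: str):
        # Tag each frame with its origin so the host can tell users apart.
        self.comc.uplink((self.name, terminal, data))


host = Host("mainframe")
fep = COMC(host)
branch = ClusterController("branch-office")
fep.attach(branch)
branch.send("tty1", "LOGON TSO")
print(host.received)  # [('branch-office', 'tty1', 'LOGON TSO')]
```

The key design point the sketch mirrors is that no terminal talks to the host directly: every frame is funneled through its cluster controller and then through the COMC's single uplink, which is exactly the concentration and multiplexing role the text describes.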