Internet History: ARPANET - Subnet


With the ARPANET, Robert Taylor and Larry Roberts set out to link together many different research institutions, each with its own computer, for whose software and hardware it alone was responsible. The software and hardware of the network itself, however, lay in a hazy middle ground and belonged to none of these places. Between 1967 and 1968 Roberts, head of the networking project at the Information Processing Techniques Office (IPTO), had to decide who would build and maintain the network, and where the boundary between the network and the institutions should lie.

Skeptics

The problem of how to structure the network was at least as much political as technical. The scientific directors of the ARPA research centers were, on the whole, unenthusiastic about the ARPANET idea. Some made it clear that they had no desire to join the network at any point; few showed any enthusiasm. Each center would have to make a serious effort to let others use its very expensive and very scarce computer. Granting such access had obvious drawbacks (giving up part of a valuable resource), while its potential benefits remained vague and uncertain.

The same skepticism about resource sharing had sunk a UCLA networking project a few years earlier. In this case, however, ARPA had far more leverage, since it paid directly for all of these valuable computing resources and continued to hold the purse strings of the associated research programs. And although no direct threats were made, no "or else" was ever voiced, the situation was perfectly clear: one way or another, ARPA was going to build its network to unite the machines that, in practice, still belonged to it.

Matters came to a head at a meeting of the scientific directors in Ann Arbor, Michigan, in the spring of 1967. Roberts presented his plan for a network connecting the various computers at each of the centers. He announced that each director would equip his local computer with special networking software, which it would use to call the other computers over the telephone network (this was before Roberts had learned of packet switching). The response was discord and dismay. Among those least inclined to go along were the largest centers, which already ran major IPTO-sponsored projects, first among them MIT. MIT's researchers, flush with the money they received to develop the Project MAC time-sharing system and the artificial intelligence laboratory, saw no advantage in sharing their hard-earned resources with riffraff from the West.

And, whatever their standing, the centers each cherished their own ideas. Each had its own unique software and hardware, and it was hard to see how they could establish even a basic connection with one another, let alone actually work together. Simply writing and running the networking programs for their machines would eat up a significant share of their time and computing resources.

Ironically, and fittingly enough, Roberts' solution to these social and technical problems came from Wes Clark, a man who disliked both time-sharing and networks. Clark, a proponent of the quixotic idea of giving every person a personal computer, had no intention of sharing computing resources with anyone, and he kept his own campus, Washington University in St. Louis, off the ARPANET for many years to come. So it is perhaps no surprise that it was he who devised the network design that would place no significant load on each center's computing resources and would not require each of them to spend effort on writing special software.

Clark proposed placing a minicomputer at each center to handle all the functions directly related to the network. Each center only had to figure out how to connect to its local helper (later called an interface message processor, or IMP), which would then forward the message along the appropriate route so that it reached the corresponding IMP at the receiving site. In essence, he proposed that ARPA hand each center an extra computer for free, one that would shoulder most of the networking work. At a time when computers were still rare and very expensive this was an audacious proposal, but just then minicomputers were appearing that cost only a few tens of thousands of dollars instead of several hundred thousand, and in the end the proposal proved feasible in principle (each IMP ultimately cost about $45,000).

The IMP approach, besides easing the scientific directors' concerns about the network's load on their computing power, also solved another, political problem for ARPA. Unlike the agency's other projects of the time, the network was not confined to a single research center where it could be run by a single boss. Nor was ARPA itself able to build and manage a large-scale technical project directly; it would have to hire outside companies to do so. The presence of the IMP created a clean division of responsibility between the network, managed by an outside contractor, and the locally managed computer. The contractor would control the IMPs and everything in between, while the centers remained responsible for the hardware and software on their own computers.

IMP

Roberts then had to choose that contractor. Licklider's old-fashioned approach of coaxing a proposal out of a favored researcher would not do here: the project had to be put out for competitive bidding like any other government contract.

It was not until July 1968 that Roberts was able to settle the final details of the request for bids, roughly half a year after the last technical piece of the puzzle had fallen into place with the discussion of packet switching at the Gatlinburg conference. The two largest computer manufacturers, Control Data Corporation (CDC) and International Business Machines (IBM), declined immediately, since neither had a low-cost minicomputer suitable for the IMP role.

Honeywell DDP-516

Among the remaining bidders, most chose Honeywell's new DDP-516, though a few leaned toward the Digital PDP-8. The Honeywell option was particularly attractive because it had an I/O interface designed specifically for real-time systems, for applications such as industrial plant control. Communications, of course, demanded the same kind of responsiveness: if the computer missed an incoming message while busy with other work, there was no second chance to catch it.

By the end of the year, after seriously considering Raytheon, Roberts gave the job to the growing Cambridge firm founded by Bolt, Beranek and Newman. The family tree of interactive computing had by then become thoroughly intertwined, and Roberts could well have been accused of nepotism for choosing BBN. Licklider had brought interactive computing to BBN before becoming IPTO's first director, planting the seeds of his intergalactic network and mentoring people like Roberts. Without Lick's influence, ARPA and BBN would have had neither the interest in nor the ability to take on the ARPANET project. Moreover, a key part of the team BBN assembled to build the IMP-based network came directly or indirectly from Lincoln Labs: Frank Heart (the team leader), Dave Walden, Will Crowther and Severo Ornstein. It was at Lincoln Labs that Roberts himself had worked while in graduate school, and it was there that Lick's chance encounter with Wes Clark had sparked his interest in interactive computers.

But however much this may look like a conspiracy, the BBN team was in fact as well suited to real-time operation as the Honeywell 516 itself. At Lincoln they had worked on computers connected to radar systems, another example of an application in which data will not wait until the computer is ready. Heart, for example, had worked on the Whirlwind computer as a student in the 1950s, joined the SAGE project, and spent a total of fifteen years at Lincoln Labs. Ornstein had worked on the SAGE cross-telling protocol, which handed radar tracking data from one computer to another, and later on Wes Clark's LINC, a computer designed to help scientists work with data interactively in the lab. Crowther, now best known as the author of the text game Colossal Cave Adventure, spent ten years building real-time systems, including the Lincoln Experimental Terminal, a mobile satellite communications station with a small computer that steered the antenna and processed the incoming signals.

The IMP team at BBN. Frank Heart is the older man at the center; Ornstein stands at the far right, next to Crowther.

The IMP was responsible for understanding and managing the routing and delivery of messages from one computer to another. A host computer could hand its local IMP up to about 8,000 bits at a time, along with the destination address. The IMP sliced the message into smaller packets, which were transmitted independently to the destination IMP over 50-kbps lines leased from AT&T. The receiving IMP reassembled the message and delivered it to its own computer. Each IMP kept a table recording which of its neighbors offered the fastest route to every possible destination. The table was updated dynamically from information received from those neighbors, including reports that a neighbor was unreachable (in which case the delay in that direction was treated as infinite). To meet the speed and throughput requirements that Roberts had set for all this processing, Heart's team crafted code that rose to the level of a work of art: the entire operating program for the IMP took only about 12,000 bytes, and the part that handled the routing tables only a small fraction of that.
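The routing behavior described here is essentially what later came to be called distance-vector routing. Below is a loose, modern Python sketch of the idea, not the actual IMP code (which was hand-tuned for the Honeywell 516); the function name and the node names are invented for illustration.

import math

def update_routing_table(routing, neighbor, neighbor_delays, link_delay):
    # routing: dict mapping destination -> (best_known_delay, next_hop_neighbor)
    # neighbor: name of the neighbor that sent this delay report
    # neighbor_delays: dict mapping destination -> delay reported by that neighbor
    #                  (math.inf means the neighbor cannot reach the destination)
    # link_delay: estimated delay of our own line to that neighbor
    for dest, reported in neighbor_delays.items():
        total = link_delay + reported        # cost of reaching dest via this neighbor
        best, next_hop = routing.get(dest, (math.inf, None))
        if total < best or next_hop == neighbor:
            # Take the new path if it is better, or refresh our estimate if this
            # neighbor was already the next hop (its delay may have worsened,
            # possibly to "infinite", i.e. unreachable).
            routing[dest] = (total, neighbor)
    return routing

# Toy example: the IMP at UCLA processes a delay report from its neighbor at SRI.
routing = {"UCSB": (3.0, "UCSB"), "UTAH": (9.0, "SRI")}
report = {"UTAH": 4.0, "UCSB": math.inf}
update_routing_table(routing, "SRI", report, link_delay=2.0)
print(routing)   # UTAH is now reached via SRI with an estimated delay of 6.0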

The team also took several precautions, given that it was impractical to assign a support team to each IMP in the field.

First, they equipped each machine with facilities for remote monitoring and control. In addition to restarting automatically after every power outage, the IMPs were programmed to be able to restart their neighbors by sending them fresh copies of the operating software. To help with debugging and analysis, an IMP could, on command, begin taking snapshots of its current state at regular intervals. Each IMP also attached tracing information to every packet it handled, which made it possible to keep more detailed operating logs. With all these capabilities, many problems could be resolved directly from the BBN office, which served as a control center from which the status of the entire network could be seen.

Second, they asked Honeywell for a military version of the 516, fitted with a rugged enclosure that protected it from vibration and other hazards. BBN intended it mostly as a "keep out" sign for curious graduate students, but nothing marked the boundary between the local computers and the BBN-managed subnet more clearly than that armored shell.

The first of these hardened cabinets, roughly the size of a refrigerator, arrived on site at the University of California, Los Angeles (UCLA) on August 30, 1969, just eight months after BBN received its contract.

Hosts

Roberts decided to start the network with four hosts: in addition to UCLA, an IMP would be installed a little way up the coast at the University of California, Santa Barbara (UCSB), another at the Stanford Research Institute (SRI) in northern California, and the last at the University of Utah. All were second-tier West Coast institutions trying to make a name for themselves in scientific computing. The family ties kept working, too: two of the resident scientific leaders, Len Kleinrock at UCLA and Ivan Sutherland at the University of Utah, were old colleagues of Roberts from Lincoln Labs.

Roberts gave two of the hosts additional network-related roles. Back at the 1967 meeting of the directors, Doug Engelbart of SRI had volunteered to set up a Network Information Center. Using SRI's sophisticated information-retrieval system, he set out to build a sort of telephone directory for the ARPANET: an organized collection of information about all the resources available at the various nodes, accessible to every participant in the network. Given Kleinrock's expertise in analyzing network traffic, Roberts designated UCLA as the Network Measurement Center (NMC). For Kleinrock and UCLA, the ARPANET was to be not just a practical tool but an experiment, from which data could be extracted and generalized so that the lessons learned could improve the design of the network and its successors.

But more important to the development of the ARPANET than either of these appointments was a more informal, diffuse community of graduate students called the Network Working Group (NWG). The IMP subnet let any host on the network reliably deliver a message to any other; the NWG's task was to devise a common language, or set of languages, that the hosts could use to communicate. These were called the "host protocols." The name "protocol," borrowed from diplomacy, had first been applied to networks in 1965 by Roberts and Tom Marill to describe both the data format and the algorithmic steps governing how two computers communicate.

The NWG, under the informal but de facto leadership of UCLA's Steve Crocker, began meeting regularly in the spring of 1969, about six months before the first IMP arrived. Born and raised in the Los Angeles area, Crocker had attended Van Nuys High School in the same cohort as two of his future NWG colleagues, Vint Cerf and Jon Postel. To record the outcome of the group's meetings, Crocker devised one of the cornerstones of ARPANET culture (and of the future Internet): the Request for Comments (RFC). His RFC 1, published on April 7, 1969, and distributed to all the future ARPANET sites by ordinary postal mail, gathered the group's early discussions about the design of the host protocol software. In RFC 3, Crocker went on to define, very loosely, the process by which all future RFCs would be drafted:

Notes are encouraged to be timely rather than polished. Philosophical positions without examples or other specifics, specific suggestions or implementation techniques without introductory or background explication, and explicit questions without any attempt at answers are all acceptable. The minimum length for an NWG note is one sentence. We hope to ease the exchange and discussion of informal ideas.

Like the request for quotation (RFQ), the standard way of soliciting bids on government contracts, the RFC invited a response, but unlike the RFQ it also invited dialogue. Anyone in the distributed NWG community could submit an RFC and use the opportunity to debate, question, or criticize an earlier proposal. Of course, as in any community, some opinions carried more weight than others, and in the early days the views of Crocker and his core circle of associates carried a great deal. In July 1971, Crocker left UCLA, while still a graduate student, to take a position as a program manager at IPTO. With ARPA's key research grants at his disposal, he wielded, wittingly or not, undeniable influence.

Jon Postel, Steve Crocker and Vint Cerf, schoolmates and NWG colleagues, in later years

The NWG's original plan called for two protocols. Remote login (telnet) let one computer act as a terminal connected to another computer's operating system, extending the interactive time-sharing environment of any system on the ARPANET across thousands of kilometers to anyone on the network. The file transfer protocol (FTP) let one computer move a file, such as a useful program or a data set, to or from another system's storage. At Roberts' urging, however, the NWG added a third, basic protocol beneath these two, one that established the fundamental connection between two hosts: the Network Control Program (NCP). The network now had three layers of abstraction: the packet subnet managed by the IMPs at the bottom, host-to-host communication provided by NCP in the middle, and the application protocols (FTP and telnet) on top.
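To make the layering concrete, here is a toy Python sketch of how a piece of application data might conceptually pass down through the three layers. The framing and field names are invented for illustration and do not reflect the real telnet, NCP, or IMP message formats.

def application_layer(text):
    # e.g. a command line typed into a remote-login (telnet-style) session
    return text.encode("ascii")

def host_layer(payload, link_id):
    # NCP-style host-to-host framing: tag the payload with a connection (link) id
    return bytes([link_id]) + payload

def subnet_layer(message, max_packet_bytes=8):
    # IMP-style fragmentation: split one host message into small packets
    return [message[i:i + max_packet_bytes]
            for i in range(0, len(message), max_packet_bytes)]

packets = subnet_layer(host_layer(application_layer("login kleinrock"), link_id=1))
print(packets)   # a list of small packets ready to cross the subnet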

Failure?

It was not until August 1971 that NCP was fully specified and implemented across the network, which by then comprised fifteen nodes. Implementations of the telnet protocol soon followed, and the first stable definition of FTP appeared a year later, in the summer of 1972. Judged at that point, a couple of years after its launch, the ARPANET could be considered a failure when measured against the dream of resource sharing that Licklider had envisioned and that his protégé, Robert Taylor, had set out to realize.

For a start, it was simply hard to find out what resources even existed on the network. The Network Information Center relied on voluntary participation: each node was expected to supply up-to-date information about the availability of its data and programs. Although everyone would have benefited from such contributions, an individual node had little incentive to advertise its resources and provide access to them, let alone keep documentation current or offer advice. The NIC therefore never became the network's directory; probably its most important function in those early years was the electronic publication of the growing set of RFCs.

Even if, say, Alice at UCLA knew of a useful resource at MIT, a more serious obstacle remained. Telnet would get Alice to the MIT login screen, but no further. For Alice to actually run any program at MIT, she first had to arrange offline for MIT to set up an account for her on its computer, which usually meant filling out paper forms at both institutions and negotiating a funding agreement to pay for the use of MIT's computing resources. And because of hardware and system-software incompatibilities between nodes, transferring files was often of little use, since you could not run programs from remote computers on your own machine.

Ironically, the most significant success of resource sharing came not in interactive time-sharing, for which the ARPANET had been built, but in old-fashioned, non-interactive data processing. UCLA put its idle IBM 360/91 batch machine on the network and offered telephone consultations to support remote users, earning significant revenue for its computer center. The ARPA-sponsored ILLIAC IV supercomputer at the University of Illinois and the Datacomputer at the Computer Corporation of America in Cambridge also found remote clients over the ARPANET.

But none of these projects came close to using the network to its full capacity. In the fall of 1971, with fifteen hosts online, the network as a whole was carrying an average of about 45 million bits per node per day, or roughly 520 bits per second, over leased AT&T lines with a capacity of 50,000 bits per second. Moreover, most of this traffic was test traffic generated by the Network Measurement Center at UCLA. Aside from the enthusiasm of a few early adopters (such as Steve Carr, who used the University of Utah's PDP-10 daily from Palo Alto), not much was happening on the ARPANET. From a modern point of view, perhaps the most interesting event was the launch of the Project Gutenberg digital library in December 1971 by Michael Hart, a student at the University of Illinois.
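As a quick sanity check of these figures (assuming the traffic statistic is read as bits per node per day), the implied per-node rate and line utilization can be computed directly:

# Back-of-the-envelope check of the traffic figures quoted above.
bits_per_node_per_day = 45_000_000
seconds_per_day = 24 * 60 * 60

average_bps = bits_per_node_per_day / seconds_per_day    # ~520 bit/s per node
line_capacity_bps = 50_000                               # the leased AT&T lines

print(round(average_bps))                                # -> 521 bit/s
print(round(average_bps / line_capacity_bps * 100, 1))   # -> about 1.0 (% of capacity)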

But the ARPANET was soon rescued from this slide into irrelevance by a third application protocol, a little thing called email.

Further reading

β€’ Janet Abbate, Inventing the Internet (1999)
β€’ Katie Hafner and Matthew Lyon, Where Wizards Stay Up Late: The Origins of the Internet (1996)

Source: habr.com
