The Rise of the Internet Part 1: Exponential Growth

In 1990, John Quarterman, a networking consultant and UNIX expert, published a comprehensive overview of the state of computer networking at the time. In a short section on the future of computing, he predicted the emergence of a single global network for "e-mail, conferencing, file transfer, remote login - just as there is a worldwide telephone network and worldwide mail today." He did not, however, assign any special role to the Internet. He suggested that this worldwide network would "most likely be operated by government communications services," except in the US, "where it would be operated by regional divisions of the Bell Operating Companies and long-distance carriers."

The purpose of this article is to explain how the Internet, with its sudden, explosive exponential growth, so rudely upended these perfectly natural assumptions.

Handover

The first critical event leading to the emergence of the modern Internet came in the early 1980s, when the Defense Communications Agency (DCA) [now DISA] decided to split the ARPANET in two. DCA had taken over control of the network in 1975. By then it was clear that it made little sense for ARPA's Information Processing Techniques Office (IPTO), an organization devoted to researching forward-looking ideas, to run a network that was being used not for communications research but for everyday communication. ARPA had unsuccessfully tried to offload control of the network onto the privately held AT&T. DCA, responsible for military communications systems, seemed like the next best option.

For the first few years under the new arrangement, ARPANET thrived in a state of blissful neglect. By the early 1980s, however, the Department of Defense's aging communications infrastructure was in desperate need of an upgrade. The proposed replacement project, AUTODIN II, for which DCA had selected Western Union as contractor, looked to be failing. The heads of DCA then put Colonel Heidi Heiden in charge of choosing an alternative. He proposed using the packet-switching technology DCA already had in hand, in the form of ARPANET, as the basis for the new defense data network.

There was an obvious problem, however, with carrying military data over the ARPANET: the network was full of long-haired computer scientists, some of whom actively opposed computer security or secrecy - Richard Stallman and his fellow hackers at the MIT AI Lab, for example. Heiden proposed splitting the network in two: the ARPA-funded research scientists would stay on ARPANET, while the computers serving defense work would be spun off into a new network called MILNET. This mitosis had two important consequences. First, the separation of the military and non-military parts of the network was the first step toward transferring the Internet to civilian, and later private, management. Second, it proved the viability of the Internet's seminal technology, the TCP/IP protocols, first invented five years earlier. DCA required all ARPANET nodes to migrate from the old protocols to TCP/IP by early 1983. Few networks used TCP/IP at the time, but afterward the two networks of the proto-Internet were connected, allowing message traffic to link research and military sites as needed. To ensure TCP/IP's longevity on military networks, Heiden set up a $20 million fund to support computer manufacturers who would write TCP/IP implementations for their systems.

This first step in the Internet's gradual transition from military to private control also gives us a good opportunity to say goodbye to ARPA and IPTO. Their funding and influence, under Joseph Carl Robnett Licklider, Ivan Sutherland, and Robert Taylor, had led directly or indirectly to all of the early developments in interactive computing and computer networking. But with the creation of the TCP/IP standard in the mid-1970s, they played a key role in the history of computing for the last time.

The next major computing project sponsored by DARPA would be the autonomous vehicle competitions of 2004-2005. The most famous project before that would be the billion-dollar Strategic Computing Initiative of the 1980s, an AI-driven program that spawned some useful military applications but had essentially no effect on civil society.

The decisive catalyst of the organization's loss of influence was the Vietnam War. Most academic researchers had believed they were fighting the good fight and defending democracy when Cold War-era research was funded by the military. But those who came of age in the 1950s and 1960s lost faith in the military and its aims after it became mired in Vietnam. Among the first was Taylor himself, who left IPTO in 1969, taking his ideas and connections to Xerox PARC. The Democratic-controlled Congress, concerned about the corrosive effect of military money on basic scientific research, passed amendments requiring defense money to be spent strictly on military research. ARPA reflected this change in funding culture in 1972 by renaming itself DARPA, the Defense Advanced Research Projects Agency.

The baton therefore passed to the civilian National Science Foundation (NSF). By 1980, with a budget of $20 million, NSF was responsible for funding about half of the federal computer research programs in the US. And most of those funds would soon be directed toward a new national computer network, NSFNET.

NSFNET

In the early 1980s, Larry Smarr, a physicist at the University of Illinois, visited the Max Planck Institute in Munich, where European researchers had access to a Cray supercomputer. Frustrated by the lack of comparable resources for US scientists, he proposed that NSF fund the creation of several supercomputing centers around the country. NSF responded to Smarr and other researchers with similar complaints by creating a new office for advanced scientific computing in 1984, which led to the funding of five such centers with a five-year budget of $42 million. They stretched from Cornell University in the Northeast to San Diego in the Southwest. In between, the University of Illinois, where Smarr worked, got its own center, the National Center for Supercomputing Applications (NCSA).

The centers' ability to broaden access to computing power was limited, however. Using their computers would be difficult for anyone not living near one of the five sites, and would require funding for semester- or summer-long research trips. So NSF decided to build a computer network as well. History repeated itself: Taylor had promoted the creation of ARPANET in the late 1960s precisely to give the research community access to powerful computing resources. NSF would provide a backbone linking the key supercomputing centers across the continent, and then connect regional networks that would give other universities and research laboratories access to those centers. NSF would take advantage of the Internet protocols championed by Heiden, delegating responsibility for building the regional networks to the local scientific communities.

NSF initially outsourced the creation and operation of the network to NCSA at the University of Illinois, as the source of the original proposal for a national supercomputing program. NCSA in turn leased the same 56 kbps links that ARPANET had used since 1969 and launched the network in 1986. These lines quickly became clogged with traffic (details of this process can be found in David Mills's paper "The NSFNET Backbone Network"). The history of ARPANET repeated itself once more: it quickly became obvious that the network's main purpose would be not scientists' access to computing power but the exchange of messages among the people who had that access. The authors of ARPANET can be forgiven for not knowing this would happen - but how could the same mistake be repeated almost twenty years later? One possible explanation is that it is much easier to justify a seven-figure grant for access to eight-figure computing power than to justify spending such sums on apparently frivolous goals like the ability to exchange email. This is not to say that NSF deliberately misled anyone. But just as the anthropic principle holds that the physical constants of the universe are what they are because otherwise we would not exist to observe them, I would not be writing about a publicly funded computer network if its existence had not rested on somewhat fictitious justifications.

Convinced that the network itself was at least as valuable as the supercomputers that justified its existence, NSF sought outside help to upgrade the network's backbone with T1 links (1.5 Mbps). The T1 standard had been established by AT&T in the 1960s and was designed to carry up to 24 telephone calls, each encoded as a 64 kbps digital stream.
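
As a rough sanity check on those figures, here is a minimal sketch of the arithmetic. The 8 kbps of framing overhead is a well-known property of T1 framing rather than something stated above, so treat it as an added assumption.

```python
# T1 capacity, back of the envelope: 24 channels of 64 kbps each,
# plus 8 kbps of framing overhead (one framing bit per 193-bit frame
# at 8000 frames per second), gives the nominal T1 line rate that
# gets rounded to "1.5 Mbps".
channels = 24
channel_rate_kbps = 64
framing_kbps = 8

line_rate_kbps = channels * channel_rate_kbps + framing_kbps
print(f"T1 line rate: {line_rate_kbps} kbps = {line_rate_kbps / 1000:.3f} Mbps")
# -> T1 line rate: 1544 kbps = 1.544 Mbps
```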

The contract was won by Merit Network, Inc., in partnership with MCI and IBM, which received a $58 million grant from NSF over the first five years to build and maintain the network. MCI provided the communications infrastructure; IBM provided the computing power and software for the routers. Merit, a non-profit that ran the computer network linking the campuses of Michigan's universities, brought experience in operating a scientific computer network and gave the whole partnership a university flavor that made it easier for NSF and the scientists who used NSFNET to accept. Nevertheless, the handover from NCSA to Merit was a clear first step toward privatization.

MERIT originally stood for the Michigan Educational Research Information Triad. The State of Michigan added $5 million of its own to help its home-state network make the T1 upgrade.

More than ten regional networks hung off the Merit backbone, from NYSERNet, a research and education network in New York connected at Cornell University in Ithaca, to CERFNet, a California research and education federation network connected in San Diego. Each of these regional networks connected in turn to countless local campus networks, as hundreds of Unix machines hummed away in college labs and faculty offices. This federated network of networks became the seed crystal of the modern Internet. ARPANET had connected only well-funded computer science researchers at elite academic institutions; by 1990, almost any university student or instructor could get online. By bouncing packets from node to node - over a local Ethernet, then into a regional network, then across long distances at the speed of light over the NSFNET backbone - they could exchange email or carry on Usenet conversations with colleagues across the country.

Once far more scientific organizations could be reached via NSFNET than via ARPANET, DCA decommissioned the obsolete network in 1990, removing the Department of Defense from the development of civilian networking entirely.

Takeoff

Throughout this period, the number of computers connected to NSFNET and its associated networks - all of which we can now call the Internet - roughly doubled every year: 28,000 in December 1987, 56,000 in October 1988, 159,000 in October 1989, and so on. This trend continued until the mid-1990s, after which growth slowed somewhat. How, one might wonder, given this trend, could Quarterman fail to see that the Internet was destined to rule the world? If the recent epidemic has taught us anything, it is that exponential growth is very hard for humans to imagine, because it corresponds to nothing we encounter in everyday life.
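
To make the arithmetic of that doubling concrete, here is a minimal illustration. It assumes, purely for the sketch, a clean doubling every year from the December 1987 figure cited above; actual year-to-year growth rates varied.

```python
# Compound a clean annual doubling from the 28,000 hosts of December 1987.
# Real growth was not this tidy, but the order of magnitude reached by the
# mid-1990s - millions of hosts - is roughly what the measured counts show.
hosts = 28_000  # December 1987
for year in range(1988, 1996):
    hosts *= 2
    print(f"{year}: ~{hosts:,} hosts")
```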

Of course, the name and concept of the Internet predate NSFNET. The Internet protocol was invented in 1974, and even before NSFNET there were networks communicating over IP. We have already mentioned ARPANET and MILNET. However, I could not find any mention of "the internet" - a single, worldwide network of networks - prior to the advent of the three-tiered NSFNET.

The number of networks within the Internet grew at a similar rate, from 170 in July 1988 to 3,500 in the fall of 1991. Since the scientific community knows no borders, many of them were abroad, beginning with connections to France and Canada established in 1988. By 1995, the Internet could be reached from nearly 100 countries, from Algeria to Vietnam. And although the number of machines and networks is much easier to count than the number of actual users, a reasonable estimate put the latter at 10-20 million by the end of 1994. In the absence of detailed data on who used the Internet, why, and when, it is rather difficult to substantiate this or any other historical explanation for such incredible growth. A small collection of stories and anecdotes can hardly explain how 350,000 computers connected to the Internet between January 1991 and January 1992, then another 600,000 the following year, and roughly a million more the year after that.

Nevertheless, I will venture into this epistemically shaky territory and suggest that the three overlapping waves of users responsible for the Internet's explosive growth, each with its own reasons for connecting, were all driven by the unforgiving logic of Metcalfe's law, which says that the value (and hence the attractive force) of a network increases as the square of the number of its participants.
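
As a minimal numerical illustration of that law: the usual way of motivating the "square" is to count the possible pairwise connections among n participants, n*(n-1)/2. The participant counts below are arbitrary examples, not figures from the text.

```python
# Possible pairwise connections in a network of n participants grows as
# n * (n - 1) / 2, i.e. roughly as n squared.
def pairwise_links(n: int) -> int:
    return n * (n - 1) // 2

for n in (10, 100, 1_000, 10_000):
    print(f"{n:>6} participants -> {pairwise_links(n):>12,} possible links")
# Doubling the participants roughly quadruples the possible connections,
# which is why each wave of new users made the network that much more
# attractive to the next.
```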

The scientists came first. NSF deliberately spread computing access across as many universities as possible. After that, every scientist wanted to join because everyone else was already there. If you could not be reached by email, if you could not see and join the latest discussions on Usenet, you risked missing the announcement of an important conference, the chance to find a mentor, cutting-edge research before it was published, and more. Feeling the pressure to join the scholarly conversation online, universities quickly connected to regional networks that could link them to the NSFNET backbone. NEARNET, for example, which covered the six New England states, had acquired more than 200 members by the early 1990s.

At the same time, access began to seep down from faculty and graduate students to the much larger student body. By 1993, approximately 70% of Harvard freshmen had email addresses. By then the Internet at Harvard had physically reached every corner of the university and its associated institutions. The university went to considerable expense to run Ethernet not just to every academic building but to every student dormitory. Surely it was not long before some student became the first to stumble into their room after a rowdy night, slump into a chair, and laboriously tap out an email they would regret sending the next morning - whether a declaration of love or a furious rebuke to an enemy.

The next wave, around 1990, brought commercial users. That year, 1,151 .com domains were registered. The first commercial participants were the research departments of technology companies (Bell Labs, Xerox, IBM, and so on). They used the network for essentially scientific purposes; their executives' business communication traveled over other networks. By 1994, however, with more than 60,000 names in the .com domain, making money on the Internet had begun in earnest.

By the end of the 1980s, computers were becoming part of the daily work and home lives of US citizens, and the importance of a digital presence for any serious business was becoming apparent. Email offered a way to communicate easily and extremely quickly with colleagues, customers, and suppliers. Mailing lists and Usenet offered both new ways to keep up with one's professional community and new forms of very cheap advertising aimed at a wide range of users. Through the Internet one could reach a huge variety of free databases - legal, medical, financial, and political. New employees who had come to love the Internet in their connected dormitories as students brought that enthusiasm with them to their employers. It offered access to a far larger pool of users than any individual commercial service (Metcalfe's law again). After paying for a month of Internet access, almost everything else came free, in contrast to the hefty hourly or per-message fees that CompuServe and similar services charged. Early Internet merchants included mail-order companies such as The Corner Store of Litchfield, Connecticut, which advertised in Usenet groups, and The Online Bookstore, an e-book seller founded by a former Little, Brown and Company editor more than a decade before the Kindle.

Then came the third wave of growth, bringing in ordinary consumers, who began going online in large numbers in the mid-1990s. By this time Metcalfe's law was working in top gear. Increasingly, being online meant being on the Internet. Consumers could not afford to run T1-class leased lines to their homes, so they almost always reached the Internet over a dial-up modem. We have already seen part of this story, in which commercial BBSs gradually evolved into ISPs. The change benefited both the users (whose digital pond suddenly grew into an ocean) and the BBS operators, who moved into the much simpler business of brokering between the telephone system and a T1-grade connection to the Internet backbone, without the need to maintain their own services and content.

The larger online services evolved along the same lines. By 1993, all the nationwide services in the US - Prodigy, CompuServe, GEnie, and the fledgling America Online (AOL) - offered their combined 3.5 million users the ability to send email to Internet addresses. Only the lagging Delphi (with 100,000 subscribers) offered full access to the Internet. Over the next few years, however, the value of Internet access, which kept growing at an exponential rate, rapidly outweighed the value of the commercial services' own forums, games, shops, and other content. 1996 was a watershed year: by October, 73% of Internet users were using the WWW, up sharply from the year before. A new term, "portal," was coined to describe the vestigial remnant of a service that AOL, Prodigy, and the other companies still provided, and which people now paid for mostly just to get onto the Internet.

Secret ingredient

So we now have a rough idea of how the Internet grew at such an explosive pace, but not much of an idea of why it happened. Why did it become so dominant when so many other services had tried to grow into the same role during the preceding era of fragmentation?

Government subsidies played their part, of course. Beyond funding the backbone, when NSF decided to invest seriously in networking independently of its supercomputing program, it did not skimp. The conceptual leaders of the NSFNET program, Steve Wolff and Jane Caviness, decided to build not merely a network of supercomputers but a new information infrastructure for American colleges and universities. So they established the Connections program, which covered part of the cost of getting universities onto the network in exchange for their providing network access to as many people as possible on their campuses. This accelerated the spread of the Internet both directly and indirectly - indirectly because many of the regional networks spawned commercial ventures that used the same subsidized infrastructure to sell Internet access to commercial organizations.

But Minitel had subsidies too. What most distinguished the Internet was its layered, decentralized structure and its inherent flexibility. IP allowed networks with completely different physical properties to share a single addressing system, and TCP ensured that packets reached their recipients. That was all. Keeping the basic scheme of the network simple made it possible to build almost any application on top of it. Importantly, any user could contribute new functionality if they could persuade others to run their program. For example, file transfer via FTP was one of the most popular uses of the Internet in its early years, but it was impossible to find servers offering the files you wanted except by word of mouth. So enterprising users built various protocols for cataloging and maintaining lists of FTP servers - Gopher, Archie, and Veronica, for example.
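
To make that layering concrete, here is a minimal sketch of what "building on top of TCP/IP" looks like from an application's point of view. The host name, port, and one-line "LIST" command below are hypothetical, not part of any real service; the point is only that an application names an address and a port, sends bytes, and lets TCP/IP worry about whatever networks lie in between - which is essentially how FTP, Gopher, and their kin were built.

```python
import socket

HOST = "files.example.org"  # hypothetical server name
PORT = 7000                 # hypothetical port for a toy line-based protocol

# Open a TCP connection; IP routing and reliable, ordered delivery are
# handled below this line of code, by the network stack.
with socket.create_connection((HOST, PORT), timeout=10) as sock:
    sock.sendall(b"LIST\r\n")        # ask the server for its catalog
    reply = sock.recv(4096)          # read the server's response bytes
    print(reply.decode("ascii", errors="replace"))
```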

In theory, the OSI network model offered the same flexibility, along with the official blessing of international organizations and telecom giants as the internetworking standard. In practice, however, the field belonged to TCP/IP, whose decisive advantage was working code, running first on thousands and then on millions of machines.

Pushing control of the application layer out to the very edges of the network had another important consequence: it meant that large organizations, used to managing their own affairs, could feel comfortable there. Organizations could run their own email servers and send and receive mail without all of its contents sitting on someone else's computer. They could register their own domain names and set up their own websites, reachable by anyone on the Internet yet remaining entirely under their own control.

The most striking example of this layered, decentralized design was, of course, the World Wide Web. For two decades, systems from the time-sharing computers of the 1960s to services like CompuServe and Minitel had revolved around a small set of basic communication services: email, forums, and chat. The web was something fundamentally new. Its early days, when it consisted entirely of unique, hand-crafted pages, bore little resemblance to what it is today. But hopping from link to link already had a strange allure, and it gave businesses a way to provide extremely cheap advertising and customer support. None of the Internet's architects had planned for the web. It was the brainchild of Tim Berners-Lee, a British engineer at the European Organization for Nuclear Research (CERN), who created it in 1990 as a convenient way to share information among the laboratory's researchers. But it sat comfortably on top of TCP/IP and repurposed the domain name system, designed for other ends, for its now-ubiquitous URLs. Anyone with Internet access could put up a website, and by the mid-90s it seemed as if everyone had: town halls, local newspapers, small businesses, and hobbyists of all stripes.

Privatization

In this account of the rise of the Internet I have omitted several important events, and you may be left with some questions. For example, how exactly did businesses and consumers gain access to the Internet, which was originally centered on NSFNET, a network funded by the US government and ostensibly intended to serve the research community? To answer that question, the next article will return to some important events I have so far passed over - events that gradually but inevitably turned the state-run scientific Internet into a private and commercial one.

What else to read

  • Janet Abbate, Inventing the Internet (1999)
  • Karen D. Frazer, NSFNET: A Partnership for High-Speed Networking, Final Report (1996)
  • John S. Quarterman, The Matrix (1990)
  • Peter H. Salus, Casting the Net (1995)

Source: habr.com
