The History of the Internet: Discovering Interactivity

The very first electronic computers were unique devices built for research purposes. But once they hit the market, organizations quickly incorporated them into their existing data culture, one in which all data and processing were represented in stacks of punched cards.

Herman Hollerith developed the first tabulator capable of reading and counting data from holes in paper cards for the US Census in the late 19th century. By the middle of the next century, a colorful menagerie of that machine's descendants had penetrated large enterprises and government organizations around the world. Their common language was a card with many columns, where each column (usually) represented a single digit, punched into one of ten positions representing the numbers 0 through 9.
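
To make that encoding concrete, here is a minimal Python sketch of how a digit is recovered from the punched positions in one column. It is purely illustrative (the helper name and the three-column fragment are my assumptions, not any historical machine's logic):

```python
def decode_column(punched_rows):
    """Return the digit a column encodes: a hole in row n means the digit n."""
    if len(punched_rows) != 1:
        return "?"  # a blank or multiply punched column carries no simple digit
    (row,) = punched_rows
    return str(row)

# A toy three-column card fragment encoding the number 407:
card_fragment = [{4}, {0}, {7}]
print("".join(decode_column(col) for col in card_fragment))  # -> "407"
```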

No complicated devices were required to punch the input data into the cards, and the process could be spread across the many offices of the organization that generated the data. When data needed to be processed, for example to compute the revenue figures for a quarterly sales report, the relevant cards could be brought into the data center and queued for processing by suitable machines, which produced a set of output data on cards or printed it on paper. Around the central processing machines, the tabulators and calculators, clustered peripheral devices for punching, copying, sorting, and interpreting cards.

[Image: The IBM 285 tabulator, a popular punched card device in the 1930s and '40s.]

By the second half of the 1950s, virtually all computers operated in this "batch processing" scheme. From the point of view of a typical end user in a sales department, little had changed. You brought in a stack of punched cards for processing and received a printout, or another stack of punched cards, as the result. Along the way the cards were transformed from holes in paper into electronic signals and back again, but you did not care much. IBM had dominated the punched card industry, and it remained one of the dominant forces in electronic computing, thanks in large part to its established customer relationships and its wide range of peripheral equipment. It simply replaced its customers' mechanical tabulators and calculators with faster, more flexible data-processing machines.

[Image: An IBM 704 punched card processing installation. In the foreground, an operator works at the card reader.]

This punched card processing system worked fine for decades and showed no signs of decline; quite the contrary. And yet, in the late 1950s, a fringe subculture of computer researchers began to argue that this whole workflow needed to change; the computer, they insisted, was best used interactively. Instead of leaving it a job and coming back later for the results, the user should communicate directly with the machine and call on its capabilities on demand. In Capital, Marx described how industrial machines, which people merely operate, displaced tools that people controlled directly. Computers, however, began their existence as machines. Only later did some of their users turn them into tools.

And this rethinking did not take place in data centers like those of the US Census Bureau, the insurance company MetLife, or the United States Steel Corporation (all of which were among the first buyers of UNIVAC, one of the first commercially available computers). An organization that regarded the weekly payroll as the most efficient and reliable use of its machine was hardly going to let someone disrupt that processing by playing around with the computer. The value of being able to sit down at a console and simply try something out was far clearer to scientists and engineers, who wanted to study a problem, attack it from various angles until its weak spot revealed itself, and switch quickly between thinking and doing.

So such ideas originated among researchers. But the money to pay for such wasteful use of a computer could not come from their department heads. The new subculture (one might even say a cult) of interactive computing grew out of a productive partnership between the military and elite universities in the United States. This mutually beneficial cooperation began during World War II. Atomic weapons, radar, and other wonder weapons had taught the military leadership that the seemingly obscure pursuits of scientists could be of incredible importance to the armed forces. This comfortable relationship lasted for roughly a generation before falling apart in the political upheavals of another war, in Vietnam. But in this period American scientists had access to huge amounts of money, were left almost entirely alone, and could pursue almost anything that could even remotely be connected to national defense.

The justification for interactive computers began with the bomb.

Whirlwind and SAGE

On August 29, 1949, a Soviet research team successfully conducted the country's first nuclear weapons test at the Semipalatinsk test site. Three days later, a US reconnaissance aircraft flying over the North Pacific detected traces of radioactive material from that test in the atmosphere. The USSR had the bomb, and its American rivals knew it. Tensions between the two superpowers had persisted for more than a year, ever since the USSR had cut off the land routes to the Western-controlled parts of Berlin in response to plans to restore Germany to economic strength.

The blockade ended in the spring of 1949, checkmated by the massive airlift the West mounted to supply the city from the air. Tensions eased somewhat. But American generals could not ignore the existence of a potentially hostile force with access to nuclear weapons, especially given the ever-increasing size and range of strategic bombers. The United States had chains of radar stations for detecting aircraft, built along the Atlantic and Pacific coasts during World War II. But these used outdated technology, did not cover the northern approaches through Canada, and were not tied together by any central system for coordinating air defense.

To rectify the situation, the Air Force (an independent branch of the US military since 1947) convened the Air Defense Systems Engineering Committee (ADSEC). It is remembered by history as the "Valley Committee," after its chairman, George Valley. He was an MIT physicist and a veteran of the Rad Lab, the wartime radar research group that after the war became the Research Laboratory of Electronics (RLE). The committee studied the problem for a year, and Valley issued its final report in October 1950.

One might expect such a report to be a dull bureaucratic hodgepodge, ending with a cautiously worded, conservative proposal. Instead, it turned out to be an interesting piece of creative reasoning containing a radical and risky plan of action. This is the evident influence of another MIT professor, Norbert Wiener, who argued that the study of living beings and machines could be combined into a single discipline, cybernetics. Valley and his co-authors started from the premise that the air defense system is a living organism, not metaphorically but in fact. The radar stations serve as its sense organs; the interceptors and missiles are the effectors with which it acts on the world. Both work under the control of a director, who uses the information from the senses to decide on the necessary actions. They further argued that a director composed entirely of humans would be unable to stop hundreds of incoming aircraft across millions of square kilometers within a few minutes, so as many of the director's functions as possible had to be automated.

Their most unusual conclusion was that the best way to automate the director would be through digital electronic computers that could take over part of the human decision-making: analyzing incoming threats, directing weapons against those threats (calculating intercept courses and relaying them to fighters), and perhaps even devising the optimal form of response. It was not at all obvious then that computers were suited to such a purpose. There were exactly three working electronic computers in the entire United States at the time, and none of them came close to meeting the reliability requirements of a military system on which millions of lives would depend. They were merely very fast, programmable number crunchers.
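
To give a flavor of the kind of decision the report proposed automating, here is a small Python sketch of an intercept-course calculation. It is an illustrative toy, not anything from SAGE: the flat 2-D geometry, the function name, and the sample numbers are all my assumptions. Given a target's position and velocity and an interceptor's speed, it solves for the earliest straight-line meeting point:

```python
import math

def intercept_course(target_pos, target_vel, own_pos, own_speed):
    """Earliest time and heading at which a constant-speed interceptor, flying
    straight from own_pos, meets a target moving at constant velocity.
    Returns (heading_degrees, time) or None if no intercept is possible."""
    dx, dy = target_pos[0] - own_pos[0], target_pos[1] - own_pos[1]
    vx, vy = target_vel
    # Meeting condition |d + v*t| = s*t gives (v.v - s^2)*t^2 + 2*(d.v)*t + d.d = 0
    a = vx * vx + vy * vy - own_speed ** 2
    b = 2 * (dx * vx + dy * vy)
    c = dx * dx + dy * dy
    if abs(a) < 1e-9:                     # equal speeds: the equation is linear
        t = -c / b if b < 0 else None
    else:
        disc = b * b - 4 * a * c
        if disc < 0:
            return None                   # interceptor too slow to ever catch up
        roots = [(-b - math.sqrt(disc)) / (2 * a),
                 (-b + math.sqrt(disc)) / (2 * a)]
        positive = [r for r in roots if r > 0]
        t = min(positive) if positive else None
    if t is None:
        return None
    mx, my = dx + vx * t, dy + vy * t     # meeting point, relative to interceptor
    return math.degrees(math.atan2(my, mx)), t

# Bomber 100 km due north, flying east at 800 km/h; fighter flies at 1200 km/h:
print(intercept_course((0, 100), (800, 0), (0, 0), 1200))
# -> approximately (48.2 degrees, 0.112 hours)
```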

However, Valley had reason to believe that a real-time digital computer was possible, because he knew about the Whirlwind project. It had begun during the war in the MIT Servomechanisms Laboratory under the direction of a young graduate student, Jay Forrester. Its original goal was a general-purpose flight simulator that could be reconfigured to support new aircraft models without being rebuilt from scratch each time. A colleague convinced Forrester that the simulator should use digital electronics to process the pilot's inputs and drive the instrument displays. Gradually, the attempt to build a high-speed digital computer outgrew and eclipsed the original goal. The flight simulator was forgotten, the war that had prompted it was long over, and the committee of inspectors from the Office of Naval Research (ONR) grew steadily more disillusioned with the project as its budget swelled and its completion date kept slipping. In 1950, the ONR cut Forrester's budget drastically for the following year, intending to shut the project down entirely after that.

For George Valley, however, Whirlwind was a revelation. The actual Whirlwind computer was still far from operational. But here was the promise of a computer that would not be just a mind without a body: a computer with sense organs and effectors. An organism. Forrester was already contemplating plans to expand the project into the nation's central military command-and-control system. To the computer experts at the ONR, who considered computers useful only for solving mathematical problems, this vision looked grandiose and absurd. But it was exactly the idea Valley had been looking for, and he arrived just in time to save Whirlwind from oblivion.

Despite (or perhaps because of) its great ambitions, Valley's report convinced the Air Force, which launched an extensive new research and development program: first to work out how to build an air defense system around digital computers, and then to actually build it. The Air Force began collaborating with MIT on the core research, a natural choice given the presence of Whirlwind and RLE at the Institute, and a history of successful air defense cooperation stretching back through the Rad Lab and World War II. They called the new initiative Project Lincoln, and built a new Lincoln Laboratory at Hanscom Field, 25 kilometers northwest of Cambridge.

The Air Force named the computerized air defense project SAGE, a typically odd military acronym meaning "Semi-Automatic Ground Environment." Whirlwind was to be the test machine that proved the viability of the concept before full-scale production and deployment of the hardware, a responsibility handed to IBM. The production version of the Whirlwind computer that IBM was to build received the far less memorable name AN/FSQ-7 ("Army-Navy Fixed Special eQuipment"; SAGE looks fairly accurate by comparison).

By the time the Air Force drew up the full plans for the SAGE system in 1954, it comprised assorted radar installations, air bases, and air defense weapons, all controlled from twenty-three control centers: massive bunkers designed to withstand bombardment. To fill these centers, IBM would need to supply forty-six computers rather than twenty-three, at a cost to the military of many billions of dollars. The reason was that the company still used vacuum tubes in the logic circuits, and they burned out like incandescent bulbs. Any one of the tens of thousands of tubes in a running computer could fail at any moment. It would obviously be unacceptable to leave an entire sector of the country's airspace unprotected while technicians made repairs, so a spare machine had to be kept on hand.
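
A back-of-the-envelope calculation shows why. The numbers below are illustrative assumptions, not actual SAGE specifications, but they show how even very reliable individual tubes add up to frequent machine failures:

```python
# Illustrative failure arithmetic for a tube machine (all numbers assumed).
tube_count = 50_000          # tubes in one machine (assumption)
tube_mtbf_hours = 500_000    # mean time between failures of one tube (assumption)

# If failures are independent, the machine's expected time between tube
# failures shrinks in proportion to the number of tubes:
machine_mtbf = tube_mtbf_hours / tube_count
print(f"about one tube failure every {machine_mtbf:.0f} hours")  # -> 10 hours
```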

[Image: The SAGE control center at Grand Forks Air Force Base in North Dakota, where two AN/FSQ-7 computers were located.]

Each control center housed dozens of operators seated in front of cathode-ray screens, each monitoring a portion of the airspace sector.

The computer tracked any potential aerial threats and drew them as tracks on the screen. The operator could use a light gun to call up additional information on a track and issue commands to the defense system, which the computer turned into a printed message for an available missile battery or air force base.

The Virus of Interactivity

Given the nature of the SAGE system (direct, real-time interaction between human operators and a digital computer via CRTs, light guns, and consoles), it is not surprising that Lincoln Lab nurtured the first cohort of champions of interactive computing. The lab's entire computing culture existed in an isolated bubble, cut off from the batch processing norms taking shape in the commercial world. Researchers used Whirlwind and its descendants by reserving blocks of time during which they had exclusive access to the computer. They grew accustomed to using their hands, eyes, and ears to communicate with it directly, through switches, keyboards, brightly lit screens, and even a loudspeaker, with no paper intermediaries.

This strange little subculture spread to the outside world like a virus, through direct physical contact. And if we treat it as a virus, then patient zero was a young man named Wesley Clark. Clark had left graduate school in physics at Berkeley in 1949 to become a technician at a nuclear weapons plant. He did not like the job. After reading several articles in computer journals, he began looking for a way into what seemed a new and exciting field full of untapped potential. He learned from an advertisement that Lincoln Laboratory was recruiting computer specialists, and in 1951 he moved to the East Coast to take a job under Forrester, who by then headed the Digital Computer Laboratory.

[Image: Wesley Clark demonstrating his LINC biomedical computer, 1962.]

Clark joined the Advanced Development Group, a subsection of the laboratory that epitomized the relaxed state of military-university collaboration of the day. Although technically part of the Lincoln Lab universe, the team existed in a bubble within a bubble, insulated from the day-to-day demands of the SAGE project and free to pursue any computing work that could somehow be tied to air defense. Their main goal in the early 1950s was to build the Memory Test Computer (MTC), intended to demonstrate the viability of a new, highly efficient, and reliable way of storing digital information: magnetic-core memory, which would replace the temperamental CRT-based memory used in Whirlwind.

Since the MTC had no users other than its creators, Clark had full access to the computer for many hours each day. Thanks to his colleague Belmont Farley, who was in contact with a group of biophysicists at RLE in Cambridge, Clark became interested in the then-fashionable cybernetic mixture of physics, physiology, and information theory. Clark and Farley spent long hours on the MTC building software models of neural networks to study the properties of self-organizing systems. From these experiments Clark began to derive certain axiomatic principles of computing from which he never deviated. In particular, he came to believe that "user experience is the most important design factor."
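
The flavor of those experiments can be suggested with a toy sketch. This is not Farley and Clark's actual model (their network structure and update rules are not reproduced here); it is only an assumed, minimal illustration of the general idea they explored: a network of randomly connected threshold units whose connections strengthen when units fire together.

```python
import random

random.seed(1)
N = 16  # a small network of threshold units (size chosen arbitrarily)
weights = [[random.uniform(-0.5, 0.5) for _ in range(N)] for _ in range(N)]
state = [random.choice([0, 1]) for _ in range(N)]

for step in range(20):
    # Each unit fires if its weighted input from the others crosses a threshold.
    new_state = [
        1 if sum(weights[i][j] * state[j] for j in range(N) if j != i) > 0.5 else 0
        for i in range(N)
    ]
    # Hebbian-style update: strengthen links between co-active units.
    for i in range(N):
        for j in range(N):
            if i != j and new_state[i] and state[j]:
                weights[i][j] += 0.05
    state = new_state

print(sum(state), "of", N, "units active after 20 steps")
```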

In 1955, Clark teamed up with Ken Olsen, one of the MTC's developers, to plan a new computer that could pave the way for the next generation of military control systems. Using a very large magnetic-core memory for storage and transistors for logic, it could be made far more compact, reliable, and powerful than Whirlwind. Initially they proposed a design they called the TX-1 (Transistorized and eXperimental computer, much clearer than AN/FSQ-7). But the leadership of Lincoln Laboratory rejected the project as too expensive and risky. Transistors had been on the market for only a few years, and very few computers had been built on transistor logic. So Clark and Olsen came back with a smaller version of the machine, the TX-0, which was approved.

[Image: The TX-0.]

The TX-0's usefulness as a tool for military control systems, though the pretext for its creation, interested Clark far less than the chance to advance his ideas about computer design. In his view, interactivity had ceased to be a mere fact of life at Lincoln Labs and had become the new norm: the right way to build and use computers, especially for scientific work. He gave MIT's biophysicists access to the TX-0, even though their work had nothing to do with air defense, and let them use the machine's visual display to analyze electroencephalograms from sleep studies. And no one objected.

The TX-0 was successful enough that in 1956 Lincoln Laboratory approved the TX-2, a full-scale transistorized computer with an enormous memory of two million bits. The project would take two years to complete. After that, the virus would escape the laboratory. Once the TX-2 was finished, the labs would no longer need the early prototype, so they agreed to lend the TX-0 to RLE in Cambridge. It was installed on the second floor, above the batch computing center. And it immediately infected students and professors across the MIT campus, who began vying for time slots in which they could take full control of the computer.

It was already clear that it was almost impossible to write a computer program correctly on the first try. Worse, researchers studying a new problem often did not know at first what correct behavior even looked like. And to get results back from the computing center, you had to wait hours, or even until the next day. For the dozens of new programmers on campus, being able to walk up the stairs, find a bug and fix it immediately, then try a new approach and see improved results right away, was a revelation. Some used their time on the TX-0 for serious science or engineering projects, but the joy of interactivity also attracted more playful souls. One student wrote a text-editing program he called an "expensive typewriter." Another followed suit and wrote an "expensive desk calculator," which he used to do his numerical analysis homework.

[Image: Ivan Sutherland demonstrates his Sketchpad software on the TX-2.]

Meanwhile, Ken Olsen and another TX-0 engineer, Harlan Anderson, impatient with the slow progress of the TX-2 project, decided to bring a small-scale interactive computer to market for scientists and engineers. They left the lab to found the Digital Equipment Corporation, setting up shop in a former textile mill on the Assabet River, ten miles west of Lincoln. Their first computer, the PDP-1 (released in 1961), was essentially a clone of the TX-0.

The TX-0 and the Digital Equipment Corporation began spreading the good news of the new way to use computers beyond Lincoln Lab. And yet, so far, the interactivity virus remained geographically confined to eastern Massachusetts. That, too, would soon change.

What else to read:

  • Lars Heide, Punched-Card Systems and the Early Information Explosion, 1880-1945 (2009)
  • Joseph November, Biomedical Computing (2012)
  • Kent C. Redmond and Thomas M. Smith, From Whirlwind to MITRE (2000)
  • M. Mitchell Waldrop, The Dream Machine (2001)
