Calling into deep space: how NASA accelerates interplanetary communication

“There is almost nowhere left to improve radio-frequency technology. The simple solutions are running out.”

On November 26, 2018, at 22:53 Moscow time, NASA succeeded again: the InSight probe landed safely on the surface of Mars after the entry, descent, and landing maneuvers later dubbed "six and a half minutes of horror." An apt description, because NASA engineers could not know right away whether the probe had landed successfully: the one-way communication delay between Earth and Mars at the time was about 8.1 minutes. During that window, InSight could not rely on its newer, more powerful antennas; everything depended on old-fashioned UHF communications, a method long used in everything from TV broadcasts and walkie-talkies to Bluetooth devices.
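The roughly eight-minute delay is just light-travel time over the Earth-Mars distance. A minimal sketch of that arithmetic, assuming a distance of about 0.97 AU at landing time (a figure chosen to be consistent with the quoted delay, not taken from the article):

```python
# One-way signal delay between Earth and Mars.
C = 299_792_458            # speed of light, m/s
AU = 149_597_870_700       # astronomical unit, m

def one_way_delay_minutes(distance_m: float) -> float:
    """Return the one-way light-travel time in minutes."""
    return distance_m / C / 60

# Earth-Mars distance assumed to be ~0.97 AU on landing day,
# consistent with the ~8.1-minute delay quoted above.
delay = one_way_delay_minutes(0.97 * AU)
print(f"{delay:.1f} minutes")  # ≈ 8.1 minutes
```

By the time any signal from the lander reached Earth, the "six and a half minutes of horror" were already over one way or the other.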

As a result, critical data on InSight's status was transmitted on radio waves at 401.586 MHz to two CubeSats, WALL-E and EVE, which then relayed the data at 8 kbps to 70-meter antennas on Earth. The CubeSats were launched on the same rocket as InSight and accompanied it on its journey to Mars in order to observe the landing and transmit the data home immediately. Other Martian orbiters, such as the Mars Reconnaissance Orbiter (MRO), were poorly positioned and could not at first provide real-time relay with the lander. Not that the entire landing depended on two experimental suitcase-sized CubeSats, but the MRO would only have been able to relay InSight's data after an even longer wait.

The InSight landing actually put NASA's entire communications architecture, the Mars Network, to the test. The signal from the InSight lander, relayed via orbiting satellites, would have reached Earth anyway, even if those satellites had failed. WALL-E and EVE were there for instant information transfer, and they delivered. If those CubeSats had not worked for some reason, the MRO was ready to take their place. Each of them acted as a node on an Internet-like network, routing data packets through different terminals built from different equipment. Today the most efficient of them is the MRO, capable of transmitting data at up to 6 Mbps (the current record for interplanetary missions). However, NASA has had to operate at far slower speeds in the past, and will need far faster data transfer in the future.

Like your ISP, NASA lets Internet users check its communications with spacecraft in real time.

Deep Space Network

As NASA's presence in space has grown, improved communication systems have steadily appeared, covering more and more territory: first low Earth orbit, then geosynchronous orbit and the Moon, and eventually deeper space. It all began with makeshift radio receivers that US military bases in Nigeria, Singapore, and California used to receive telemetry from Explorer 1, the first satellite the Americans successfully launched, in 1958. Slowly but surely, that foundation has evolved into today's advanced messaging systems.

Douglas Abraham, head of strategic and systems forecasting at NASA's Interplanetary Network Directorate, highlights three independently developed networks for messaging in space. The Near Earth Network operates with spacecraft in low Earth orbit. “It's a set of antennas, mostly 9 m to 12 m. There are a few large ones, 15 m to 18 m,” says Abraham. Then, above Earth's geosynchronous orbit, there are several Tracking and Data Relay Satellites (TDRS). “They can look down at satellites in low Earth orbit and communicate with them, and then relay that information via TDRS to the ground,” explains Abraham. “This satellite data transmission system is called the NASA Space Network.”

But even TDRS was not enough to communicate with spacecraft that went far beyond the orbit of the Moon, to other planets. “So we had to create a network covering the entire solar system. And this is the Deep Space Network, DSN,” says Abraham. The Mars Network is an extension of the DSN.

Given its reach and its ambitions, the DSN is the most complex of the three systems. In essence, it is a collection of large antennas, 34 to 70 m in diameter. Each of the three DSN sites has several 34 m antennas and one 70 m antenna. One site is located at Goldstone (California), another near Madrid (Spain), and the third in Canberra (Australia). These sites sit approximately 120 degrees apart around the globe and provide 24/7 coverage for all spacecraft outside geosynchronous orbit.
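The 120-degree spacing is what makes the round-the-clock coverage work: a distant spacecraft is above some site's horizon at all times, because no direction in the sky is ever more than 60 degrees in longitude from the nearest site. A toy check of that geometry, with the three sites idealized as evenly spaced longitudes:

```python
# Why three sites ~120 degrees apart give 24/7 coverage: for a distant
# spacecraft, every possible direction is within 60 degrees of some
# site - comfortably inside a station's visibility window.
SITES = [0, 120, 240]  # Goldstone, Madrid, Canberra (idealized longitudes)

def angular_gap(a: float, b: float) -> float:
    """Smallest angle between two longitudes, in degrees."""
    d = abs(a - b) % 360
    return min(d, 360 - d)

# For each direction, find the nearest site; the worst case is 60 deg.
worst = max(min(angular_gap(theta, s) for s in SITES) for theta in range(360))
print(worst)  # 60
```

The real sites are not exactly 120 degrees apart and sit at different latitudes, so this is only the idea, not the actual visibility calculation.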

The 34 m antennas are the DSN's core equipment and come in two varieties: older high-efficiency antennas and relatively new beam-waveguide antennas. The difference is that a beam-waveguide antenna has five precision RF mirrors that reflect signals down a tube into an underground control room, where the electronics analyzing those signals are better protected from all sources of interference. The 34-meter antennas, working individually or in groups of two or three dishes, can provide most of the communication NASA needs. But for special cases, when the distances become too great even for several 34 m antennas, DSN operators bring in the 70 m monsters.

“They play an important role in several cases,” Abraham says of the large antennas. The first is when a spacecraft is so far from Earth that establishing communication with a smaller dish is impossible. “Good examples would be the New Horizons mission, which has already flown far beyond Pluto, or the Voyager spacecraft, which is outside the solar system. Only the 70-meter antennas can get through to them and deliver their data to Earth,” explains Abraham.

The 70-meter dishes are also used when a spacecraft cannot use its high-gain antenna, whether because of a planned critical event such as orbit insertion, or because something goes badly wrong. The 70-meter antenna was used, for example, to safely return Apollo 13 to Earth. It also received Neil Armstrong's famous line, "That's one small step for man, one giant leap for mankind." And even today, the DSN remains the most advanced and sensitive communication system in the world. “But for many reasons, it has already reached its limit,” warns Abraham. “There is almost nowhere left to improve radio-frequency technology. The simple solutions are running out.”

Three ground stations 120 degrees apart

DSN dishes in Canberra

DSN complex in Madrid

DSN in Goldstone

Control room at the Jet Propulsion Laboratory

Radio and what comes after it

This story is not new. The history of deep-space communications is one of a constant struggle to push frequencies higher and wavelengths shorter. Explorer 1 used frequencies of 108 MHz. NASA then introduced larger antennas with higher gain that supported L-band frequencies, from 1 to 2 GHz. Then came S-band, from 2 to 4 GHz, and then the agency switched to X-band, at 7 to 11.2 GHz.

Today, space communications systems are changing once again: they are moving to the 26-40 GHz Ka-band. “The reason for this trend is that the shorter the wavelength and the higher the frequency, the higher the data rate you can achieve,” says Abraham.

There are reasons for optimism, given that NASA's communication speeds have historically improved quickly. A 2014 research paper from the Jet Propulsion Laboratory offers the following throughput comparison: transferring a typical iPhone photo from Jupiter to Earth with Explorer 1's communications technology would take 460 times the current age of the Universe. Pioneers 2 and 4 from the 1960s would have needed 633,000 years. Mariner 9, from 1971, would have managed it in a matter of hours. Today it would take the MRO about three minutes.
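The arithmetic behind comparisons like these is simply time = size / rate. A sketch with assumed numbers (a ~4 MB photo, MRO's 6 Mbps peak, and InSight's 8 kbps UHF relay rate from earlier in the article; the JPL paper's own inputs are not reproduced here):

```python
# Rough transfer-time math: time = size_in_bits / rate.
def transfer_time_s(size_bytes: float, rate_bps: float) -> float:
    return size_bytes * 8 / rate_bps

photo = 4e6  # ~4 MB, a typical smartphone photo (assumed size)

# At MRO-like megabit rates the photo moves in seconds...
print(f"{transfer_time_s(photo, 6e6):.1f} s at 6 Mbps")    # ~5.3 s
# ...while at the 8 kbps CubeSat relay rate it takes over an hour.
print(f"{transfer_time_s(photo, 8e3):.0f} s at 8 kbps")    # 4000 s
```

Scale the rate down to Explorer-1-era hardware at Jupiter distances and the same photo takes geological time, which is the paper's point.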

The only problem, of course, is that the amount of data spacecraft collect is growing just as fast as, if not faster than, transmission capacity. Over 40 years of operation, Voyagers 1 and 2 produced 5 TB of information. The NISAR Earth-science satellite, scheduled for launch in 2020, will produce 85 TB of data per month. And while Earth satellites can manage such volumes, moving them between planets is a different story entirely. Even the relatively fast MRO would need about 20 years to transmit 85 TB of data to Earth.
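The 20-year figure checks out if you assume the MRO sustains roughly 1 Mbps on average (its 6 Mbps is a peak rate, not a continuous one; the averaging assumption is mine, not the article's):

```python
# Sanity check: how long does 85 TB take at an assumed ~1 Mbps
# average sustained Mars-to-Earth rate?
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def years_to_transmit(terabytes: float, rate_bps: float) -> float:
    bits = terabytes * 1e12 * 8
    return bits / rate_bps / SECONDS_PER_YEAR

print(f"{years_to_transmit(85, 1e6):.1f} years")  # ≈ 21.5 years
```

So one month of NISAR output would tie up an MRO-class link for two decades.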

“Estimated data rates for Mars exploration in the late 2020s and early 2030s will be 150 Mbps or higher, so let's do the math,” says Abraham. “If an MRO-class spacecraft at maximum Earth-Mars distance can send about 1 Mbps to a 70-meter antenna on Earth, then an array of 150 70-meter antennas would be needed to sustain a 150 Mbps link. Yes, of course, we can come up with clever ways to shave that absurd number down a little, but the problem obviously exists: organizing interplanetary communication at 150 Mbps is extremely difficult. On top of that, we are running out of allowed spectrum.”
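Abraham's back-of-the-envelope scaling is linear: with ~1 Mbps per 70 m dish at maximum range, the dish count simply tracks the target rate. A minimal sketch of that estimate:

```python
# The antenna math Abraham describes: dishes needed scales linearly
# with the target data rate, given a fixed per-dish capacity.
import math

def antennas_needed(target_bps: float, per_antenna_bps: float = 1e6) -> int:
    """Per-antenna default of 1 Mbps: one 70 m dish at max Mars range."""
    return math.ceil(target_bps / per_antenna_bps)

print(antennas_needed(150e6))  # 150 dishes for a 150 Mbps link
```

This ignores arraying losses and the cleverer tricks Abraham alludes to; it is only the "absurd number" before any of them are applied.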

As Abraham demonstrates, a single 25 Mbps mission on the S or X band would occupy the entire available spectrum. There is more room in Ka-band, but just two 150 Mbps Mars satellites would fill all of it. Simply put, the interplanetary internet will need more than radio to operate: it will rely on lasers.

The advent of optical communications

Lasers sound futuristic, but the idea of optical communications goes back to a patent Alexander Graham Bell filed in the 1880s. Bell developed a system in which sunlight, focused into a very narrow beam, was directed at a reflective diaphragm that vibrated with sound. The vibrations caused variations in the light passing through a lens onto a crude photodetector, and the changing resistance of the photodetector varied the current flowing through the telephone.

The system was unstable, the volume was very low, and Bell eventually abandoned the idea. But nearly 100 years later, armed with lasers and fiber optics, NASA engineers returned to that old concept.

“We knew the limitations of radio-frequency systems, so in the late 1970s and early 1980s, the Jet Propulsion Laboratory began discussing the possibility of transmitting messages from deep space with lasers,” says Abraham. To better understand what was and was not possible in deep-space optical communications, the Lab commissioned a four-year study, the Deep Space Relay Satellite System (DSRSS), in the late 1980s. The study had to answer critical questions: what about weather and visibility (after all, radio waves pass easily through clouds, while lasers cannot)? What if the Sun-Earth-probe angle becomes too acute? Can a detector on Earth distinguish a weak optical signal from sunlight? And finally, how much would it all cost, and would it be worth it? “We are still working out the answers to these questions,” Abraham admits. “But the answers increasingly confirm the feasibility of optical data transmission.”

The DSRSS study concluded that a point above the Earth's atmosphere would suit optical and radio communications best. It claimed that an optical communications system mounted on an orbital station would outperform any terrestrial architecture, including the iconic 70-meter antennas. The plan was to deploy a 10-meter dish in near-Earth orbit and then raise it to geosynchronous orbit. However, the cost of such a system, consisting of the satellite with its dish, a launch rocket, and five user terminals, was prohibitive. And the study did not even include the cost of the backup system that would have to take over in the event of a satellite failure.

As an alternative to this system, the Lab looked at the ground-based architecture described in the Ground Based Advanced Technology Study (GBATS), carried out at the Lab around the same time as DSRSS. The people working on GBATS came up with two alternative proposals. The first was to install six stations with 10-meter primary antennas and smaller backup antennas, spaced 60 degrees apart around the equator. The stations would be built on mountain peaks where at least 66% of the days in a year are clear. That way, two or three stations would always be visible to any spacecraft, each under different weather. The second option was nine stations, grouped in threes, spaced 120 degrees apart. The stations within each group would sit 200 km apart, close enough to remain in line of sight of each other but in different weather cells.

Both GBATS architectures were cheaper than the space-based approach, but they had their own problems. First, since the signals have to pass through Earth's atmosphere, daytime reception would be far worse than nighttime reception because of the brightened sky. Second, despite the clever arrangement, ground-based optical stations still depend on the weather: a spacecraft aiming its laser at a ground station would eventually have to adapt to bad conditions and re-establish communication with another station not obscured by clouds.

However, problems aside, the DSRSS and GBATS projects laid the theoretical foundation for deep-space optical systems and for the modern work of NASA's engineers. All that remained was to build such a system and demonstrate that it worked. Luckily, that was only a few years away.

Project implementation

By then, optical data transmission in space had already been demonstrated. The first test came in 1992, when the Galileo probe, on its way to Jupiter, turned its high-resolution camera toward Earth and successfully received a set of laser pulses sent from the 60 cm telescope at Table Mountain Observatory and from the 1.5 m telescope at the USAF Starfire Optical Range in New Mexico. At that moment, Galileo was 1.4 million km from Earth, but both laser beams hit its camera.

The Japanese and European space agencies have also established optical links between ground stations and satellites in Earth orbit. They then managed a 50 Mbps link between two satellites. A few years ago, a German team established a coherent 5.6 Gbps bi-directional optical link between the NFIRE satellite in Earth orbit and a ground station in Tenerife, Spain. But all of these cases involved near-Earth orbit.

The very first optical link between a ground station and a spacecraft in orbit beyond Earth was established in January 2013. A 152 x 200 pixel black-and-white image of the Mona Lisa was transmitted from the Next Generation Satellite Laser Ranging Station at NASA's Goddard Space Flight Center to the Lunar Reconnaissance Orbiter (LRO) at 300 bps. The link was one-way: LRO sent the image it received from Earth back over conventional radio. The image needed a little software error correction, but even without that encoding it was easy to recognize. And by then, the launch of a more powerful system to the Moon was already planned.

From the Lunar Reconnaissance Orbiter project in 2013: to clean up transmission errors introduced by the Earth's atmosphere (left), scientists at the Goddard Space Flight Center applied Reed-Solomon error correction (right), the same coding widely used in CDs and DVDs. Typical errors include missing pixels (white) and false signals (black). The white bar indicates a slight pause in transmission.

The Lunar Atmosphere and Dust Environment Explorer (LADEE) entered lunar orbit on October 6, 2013, and just a week later fired up its pulsed laser for data transmission. This time, NASA attempted two-way communication: 20 Mbps on the uplink and a record 622 Mbps on the downlink. The only problem was the mission's short lifetime. LRO's optical link had operated for only a few minutes; LADEE's laser was in contact for a total of 16 hours over 30 days. That situation should change with the launch of the Laser Communications Relay Demonstration (LCRD), scheduled for June 2019. Its task is to show how future space communication systems will work.

LCRD is being developed at NASA's Jet Propulsion Laboratory in collaboration with MIT's Lincoln Laboratory. It will have two optical terminals: one for communication in low Earth orbit, the other for deep space. The first will use differential phase-shift keying (DPSK). The transmitter will send laser pulses at a rate of 2.88 GHz, and each bit will be encoded in the phase difference between successive pulses. DPSK can operate at 2.88 Gbps, but it demands a lot of power: detectors can only distinguish pulse differences in high-energy signals, so DPSK works well for near-Earth links but is not the best method for deep space, where energy is at a premium. A signal sent from Mars loses energy on its way to Earth, so to demonstrate optical communication with deep space, LCRD will use a more efficient technology: pulse-position modulation.
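The key property of DPSK described above, that each bit lives in the phase change between consecutive pulses rather than in any absolute phase, can be shown in a toy sketch (pure bit-level logic, not a signal-processing model):

```python
# Toy differential phase-shift keying (DPSK): a '1' bit flips the
# carrier phase by pi, a '0' bit keeps it. The receiver recovers bits
# by comparing each pulse only with the previous one.
import math

def dpsk_modulate(bits):
    """Return a list of pulse phases (radians)."""
    phases, phase = [], 0.0
    for b in bits:
        if b:                                  # '1' toggles the phase
            phase = (phase + math.pi) % (2 * math.pi)
        phases.append(phase)
    return phases

def dpsk_demodulate(phases):
    """Recover bits from phase differences between neighbouring pulses."""
    bits, prev = [], 0.0                       # assumed 0-phase reference
    for p in phases:
        diff = (p - prev) % (2 * math.pi)
        bits.append(1 if abs(diff - math.pi) < 1e-9 else 0)
        prev = p
    return bits

msg = [1, 0, 1, 1, 0]
assert dpsk_demodulate(dpsk_modulate(msg)) == msg
```

Because only differences matter, the receiver never needs a stable absolute phase reference, which is part of what makes DPSK practical at multi-gigahertz pulse rates.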

Calling into deep space: how NASA accelerates interplanetary communication
NASA engineers prepare LADEE for testing

Calling into deep space: how NASA accelerates interplanetary communication
In 2017, engineers tested flight modems in a thermal vacuum chamber

“Essentially, it's photon counting,” explains Abraham. “The short period allocated for communication is divided into several time slots. To get the data, you just check whether photons hit the detector in each slot. That's how data is encoded in PPM.” It is like Morse code at extreme speed: either there is a flash at a given moment or there is not, and the message is encoded by the sequence of flashes. “While this is much slower than DPSK, we can still establish optical communications at tens or hundreds of Mbps over the distance to Mars,” adds Abraham.
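The photon-counting scheme Abraham describes can be sketched in a few lines: the slot index in which the pulse lands *is* the data, so a single detected pulse in a 16-slot frame carries four bits. The 16-slot order here is an illustrative choice, not LCRD's actual parameter:

```python
# Toy pulse-position modulation (PPM): one laser pulse per frame, and
# its position among the time slots encodes the symbol.
SLOTS = 16  # 16 slots per frame -> log2(16) = 4 bits per detected pulse

def ppm_encode(value: int) -> list:
    """Encode a 4-bit value as a single pulse in a 16-slot frame."""
    frame = [0] * SLOTS
    frame[value] = 1           # fire the laser only in slot `value`
    return frame

def ppm_decode(frame: list) -> int:
    """The detector just reports which slot saw photons."""
    return frame.index(1)

assert ppm_decode(ppm_encode(13)) == 13
```

This is why PPM is so energy-efficient for deep space: one pulse's worth of energy delivers several bits, at the cost of spending most of the frame time sending nothing.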

Of course, the LCRD project is not just these two terminals; it should also work as an Internet node in space. On the ground, three stations will operate with LCRD: one at White Sands in New Mexico, one at Table Mountain in California, and one on the island of Hawaii or Maui. The idea is to test switching from one ground station to another when the weather turns bad at one of them. The mission will also test LCRD's operation as a relay: an optical signal from one station will go up to the satellite and then be passed down to another station, all over optical links.

If the data cannot be relayed immediately, LCRD will store it and forward it when possible. If the data is urgent, or there is not enough storage on board, LCRD will send it at once via its Ka-band antenna. So the precursor of future relay satellites, LCRD, will be a hybrid radio-optical system. This is exactly the kind of unit NASA needs to place in orbit around Mars to build an interplanetary network supporting human exploration of deep space in the 2030s.

Bringing Mars online

Over the past year, Abraham's team has written two papers describing the future of deep-space communications, to be presented at the SpaceOps conference in France in May 2019. One describes deep-space communications in general; the other (“The Mars interplanetary network for the era of human exploration: potential problems and solutions”) offers a detailed description of infrastructure capable of providing an Internet-like service to astronauts on the Red Planet.

The estimated data rates peak at around 215 Mbps for download and 28 Mbps for upload. The Martian Internet would consist of three networks: a WiFi network covering the exploration area on the surface; a planetary network carrying data from the surface to Earth; and the terrestrial network, a deep-space communications network with three sites responsible for receiving that data and sending responses back to Mars.

“There are many problems in developing such an infrastructure. It must be reliable and stable even at the maximum Earth-Mars distance of 2.67 AU, during periods of superior solar conjunction, when Mars hides behind the Sun,” says Abraham. Such a conjunction occurs every two years and completely severs communication with Mars. “Today we can't cope with it. All the landers and orbiters at Mars simply lose contact with Earth for about two weeks. With optical links, the loss of communication due to solar conjunction would be even longer, 10 to 15 weeks.” For robots, such gaps are not particularly frightening. The isolation causes them no problems: they do not get bored or lonely, and they do not need to see their loved ones. But for humans it is a different matter entirely.

“Therefore, we are notionally allowing for two relay orbiters placed in a circular equatorial orbit 17,300 km above the surface of Mars,” continues Abraham. According to the study, they would weigh 1,500 kg each, carry a set of X-band, Ka-band, and optical terminals, and be powered by 20-30 kW solar panels. They would support the Delay-Tolerant Networking protocol, essentially a TCP/IP-like stack designed to cope with the long delays that interplanetary networks will inevitably experience. The orbiters in the network must be able to communicate with astronauts and vehicles on the surface, with ground stations, and with each other.
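The defining behaviour of a delay-tolerant network is store-and-forward: when the next hop is unreachable (say, during conjunction), a node keeps the bundle and retries later instead of dropping it, as TCP would after a timeout. A toy sketch of that idea, with all class and method names invented for illustration (this is not the Bundle Protocol API):

```python
# Toy store-and-forward node: bundles queue up while the link is down
# and are forwarded when a contact window opens.
from collections import deque

class DtnNode:
    def __init__(self, name):
        self.name = name
        self.queue = deque()     # bundles awaiting a contact window
        self.link_up = False     # e.g. False during solar conjunction

    def send(self, bundle, next_hop):
        self.queue.append((bundle, next_hop))
        self.flush()

    def flush(self):
        """Forward queued bundles only while a contact is available."""
        while self.link_up and self.queue:
            bundle, hop = self.queue.popleft()
            hop.receive(bundle)

    def receive(self, bundle):
        print(f"{self.name} received: {bundle}")

orbiter = DtnNode("Mars orbiter")
dsn_site = DtnNode("DSN site")
orbiter.send("surface telemetry", dsn_site)  # link down: bundle is stored
assert len(orbiter.queue) == 1
orbiter.link_up = True                       # contact window opens
orbiter.flush()                              # bundle is now delivered
assert len(orbiter.queue) == 0
```

The real protocol adds custody transfer, routing over scheduled contacts, and persistent storage, but the store-don't-drop principle is the same.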

“This cross-link is very important because it reduces the number of antennas needed to receive data at 250 Mbps,” says Abraham. His team estimates that an array of six 34-meter antennas would be needed to receive 250 Mbps from one of the relay orbiters. That would mean NASA building three additional antennas at each deep-space communications site, and these take years to build and are extremely expensive. “But we think the two orbiters could split the data between them and transmit simultaneously at 125 Mbps each, one sending one half of a data packet and the other the other half,” says Abraham. Even today, the 34-meter deep-space antennas can receive from four different spacecraft at once, so three antennas would suffice for the task. “Receiving two 125 Mbps transmissions from the same patch of sky takes the same number of antennas as receiving one,” explains Abraham. “More antennas are needed only if you need to communicate at a higher speed.”

To deal with the solar-conjunction problem, Abraham's team proposed launching a relay satellite to the L4 or L5 point of the Sun-Mars orbit. During conjunction, it could then relay data around the Sun instead of sending signals through it. Unfortunately, during that period the rate would drop to 100 kbps. Simply put, it would work, but it would suck.

In the meantime, future astronauts on Mars will have to wait just over three minutes for a photo of a kitten to arrive, not counting the light-travel delays, which can reach 40 minutes. Fortunately, by the time humanity's ambitions take us even farther than the Red Planet, the interplanetary internet will already be working quite well most of the time.

Source: habr.com
