Starlink is a big deal

This article is part of a series on education in the field of space technology.

Starlink, SpaceX's plan to deliver the Internet through tens of thousands of satellites, is the hottest topic in the space press, with articles about the latest milestones appearing weekly. The general outline of the scheme is clear enough, but a sufficiently motivated person (say, yours truly) can dig a great deal of detail out of SpaceX's filings with the Federal Communications Commission. Even so, plenty of misconceptions about this new technology persist, even among well-informed observers. It is not uncommon to see articles comparing Starlink to OneWeb and Kuiper (among others) as if they were competing on equal terms. Other authors, clearly concerned for the good of the planet, sound the alarm about space debris, space law, standards, and the safety of astronomy. I hope that after reading this rather long article, the reader will have a better understanding and feel for the idea behind Starlink.


My previous article unexpectedly struck a chord with my few readers. In it, I explained how Starship would put SpaceX far ahead of the field for a long time while also providing a mechanism for a new wave of space exploration. The implication is that the traditional satellite industry cannot absorb the capacity SpaceX keeps adding as it drives down costs across the Falcon family, and that leaves SpaceX in an awkward position. On the one hand, it built a market worth, at best, a few billion dollars a year. On the other, it has developed an insatiable appetite for money to build an enormous rocket that, for now, has almost no one to carry to Mars and no prospect of near-term profit.

The solution to this twin problem is Starlink. By building and launching its own satellites, SpaceX can create and define a new market for efficient, democratized access to space-based communications, secure the funding to finish its rocket before that rocket drowns the company, and push its economic value toward the trillions. Do not underestimate the scale of Elon's ambitions. There are not that many industries where trillions of dollars are in play: energy, high-speed transport, communications, IT, healthcare, agriculture, government, defense. Despite popular misconceptions, space mining, mining water on the Moon, and space-based solar power are not viable businesses. Elon has already invaded the energy industry with Tesla, but only telecommunications offers a reliable, high-capacity market for satellites and rocket launches.


Elon Musk first turned his eyes toward space when he wanted to spend $80 million on a mission to grow plants on a Martian lander. Building a city on Mars would probably cost 100 times more than that, so Starlink is Musk's main bet for securing the sea of sponsorship money that a self-sustaining city on Mars will need.

Why?

I had been planning this article for a long time, but the picture only came together last week, when SpaceX President Gwynne Shotwell gave a remarkable interview to Ron Baron, which Michael Sheetz of CNBC covered in a great Twitter thread and which several articles were devoted to. That interview showed just how different SpaceX's approach to satellite communications is from everyone else's.

The Starlink concept was born in 2012, when SpaceX realized that its customers, mostly satellite operators, had enormous reserves of money. Launch providers keep raising prices for deploying satellites, yet somehow they are cut out of the rest of the business - how come? Elon dreamed of building a satellite constellation for the Internet and, unable to resist a nearly impossible problem, set the process in motion. Starlink's development has not been without difficulties, but by the end of this article, my reader, you may be surprised at how small those difficulties really are, given the scope of the idea.

Is such a huge constellation really necessary for the Internet? And why now?

Within my own memory, the Internet has evolved from a purely academic toy into revolutionary, first-class infrastructure. That story deserves a long article of its own, but here I will simply assume that, globally, demand for the Internet, and the revenue it generates, will keep growing at roughly 25% per year.

Today, almost all of us get our Internet from a small number of geographically isolated monopolies. In the US, AT&T, Time Warner, Comcast, and a handful of smaller players have divided up the territory so as to avoid competition, charge exorbitant prices for service, and bask in near-universal hatred.

ISPs have a good reason for their non-competitive behavior, beyond all-consuming greed. Building Internet infrastructure, microwave cell towers and fiber optics, is very, very expensive. It is easy to forget how physical the seemingly wonderful Internet really is. My grandmother first went to work during World War II as a signaler, back when the telegraph still competed with carrier pigeons for the leading strategic role! For most of us the information superhighway is something ephemeral and intangible, but bits travel through the physical world, with its borders, rivers, mountains, oceans, storms, natural disasters, and other obstacles. Back in 1996, as the first trans-oceanic fiber-optic lines were being laid, Neal Stephenson wrote a sprawling essay of "hacker tourism" about them. In his trademark sharp style, he vividly describes the sheer cost and complexity of laying those lines, along which the damned cat pictures now race anyway. So much cable was laid in the years that followed that the deployment costs were staggering.

At one point I worked in an optics lab, and (if memory serves) we broke the record of the day with a multiplexed transmission rate of 500 Gb/s. Electronic limitations meant each fiber could only be loaded to about 0.1% of its theoretical bandwidth. Fifteen years later, we are approaching the threshold beyond which pushing more data through a fiber would simply melt it, and we are already very close.

So the data flow needs to be lifted above the sinful earth and into space, where a satellite circles the globe some 30,000 times in five years. It seems like an obvious solution, so why hasn't anyone taken it before?

The Iridium satellite constellation, developed by Motorola (remember them?) in the 1990s, became the first global low-orbit communications network (as temptingly described in this book). By the time it was deployed, cell phones had become so cheap that satellite phones never caught on, leaving niche uses like routing small data packets from asset trackers as its main business. Iridium had 66 satellites (plus a few spares) in 6 orbital planes, the minimum set needed to cover the entire planet.

If 66 satellites were enough for Iridium, why does SpaceX need tens of thousands? Why is it so different?

SpaceX entered this business from the opposite end: it started with launches. It pioneered launch vehicle recovery and thereby captured the market for cheap launches. Undercutting SpaceX on price will not make anyone much money, so the only way for SpaceX to profit from its excess capacity is to become its own customer. SpaceX's cost to launch its own satellites is about one tenth (per kilogram) of what Iridium paid, so it can address a much wider market.

Starlink's worldwide coverage will provide access to high-quality Internet anywhere in the world. For the first time, Internet availability will depend not on how close a country or city is to a fiber-optic line, but on how clear the sky is overhead. Users around the world will get access to a global, unshackled Internet, regardless of their own variously bad and/or dishonest government-backed monopolies. Starlink's ability to break those monopolies could catalyze positive change of incredible magnitude and finally unite billions of people into the global cybernetic community of the future.

A small lyrical digression: what does this even mean?

For people growing up today in an era of ubiquitous connectivity, the Internet is like the air we breathe. It simply is. But that undersells its incredible power to bring positive change, and we are already in the middle of that change. With the Internet, people can hold their leaders to account, talk with people on the other side of the world, share ideas, and invent new things. The Internet unites humanity. The history of progress is the history of evolving ways to share data. First through speech and epic poetry. Then through writing, which gives voice to the dead so they can address the living, lets data be stored, and makes asynchronous communication possible. The printing press put the production of news on an industrial footing. Electronic communication accelerated the movement of data around the world. Personal note-taking devices have gradually grown more sophisticated, evolving from notebooks to cell phones, each one an Internet-connected computer stuffed with sensors and getting better every day at anticipating our needs.

A person who uses writing and a computer in the process of thinking has a better chance of overcoming the limitations of an imperfectly evolved brain. Even more encouraging, cell phones are both powerful storage devices and a mechanism for exchanging ideas. Where people once shared thoughts through speech and sketched them into notebooks, today it is normal for the notebooks themselves to share the ideas people have generated; the traditional scheme has been inverted. The logical continuation of this process is some form of collective metacognition, through personal devices still more tightly integrated with our brains and with each other. And while we may be nostalgic for a lost connection to nature and solitude, it is worth remembering that technology, and technology alone, is responsible for the lion's share of our liberation from the "natural" cycle of ignorance, avoidable premature death, violence, hunger, and tooth decay.

How?

Let's talk about the business model and architecture of the Starlink project.

For Starlink to be a profitable enterprise, the inflow of funds must exceed the cost of construction and operation. Traditionally, this business has meant enormous up-front capital costs and sophisticated, specialized financing and insurance mechanisms just to get a satellite launched. A geostationary communications satellite can cost $500 million and take five years to build and launch, so companies in this field operate the way builders of jumbo jets or container ships do: huge outlays, an inflow of funds that barely covers financing costs, and a relatively small operating budget. The failure of the original Iridium, by contrast, came about because Motorola forced the operating company to pay a crushing license fee, bankrupting the enterprise within months.

To run such a business, traditional satellite companies have had to serve niche customers and charge high prices for data. Airlines, remote outposts, ships, war zones, and critical infrastructure sites pay around $5 per MB, orders of magnitude more than ordinary ADSL, despite the latency and relatively low bandwidth of satellite links.

Starlink plans to compete with terrestrial service providers, which means it will have to deliver data far more cheaply, ideally charging closer to a dollar per gigabyte than dollars per megabyte. Is that possible? Or rather, since it evidently is possible, the better question is: how is it possible?

The first ingredient of the new dish is cheap launch. Today a Falcon 9 sells a roughly 24-tonne launch for about $60 million, or about $2,500 per kg. Internally, however, the costs are much lower. Starlink satellites fly on reusable boosters, so the marginal cost of a launch is a new second stage (somewhere around $1 million), fairings ($4 million), and ground support (~$1 million). Total: on the order of $100,000 per satellite, more than 100 times cheaper than launching a conventional communications satellite.
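To make that arithmetic explicit, here is a quick back-of-envelope sketch in Python using the rough figures above; the 60-satellites-per-Falcon figure comes from later in the article, and all of the dollar values are loose estimates rather than anything SpaceX has published.

```python
# Rough marginal launch cost per Starlink satellite (all inputs are loose estimates).
second_stage = 1e6     # new second stage, $
fairings     = 4e6     # fairing set, $
ground_ops   = 1e6     # ground support per flight, $
sats_per_falcon = 60   # Starlink satellites per Falcon 9 flight

marginal_per_flight = second_stage + fairings + ground_ops
marginal_per_sat    = marginal_per_flight / sats_per_falcon
list_price_per_sat  = 60e6 / sats_per_falcon   # what an outside customer would pay

print(f"marginal cost per flight:    ${marginal_per_flight/1e6:.0f}M")
print(f"marginal cost per satellite: ${marginal_per_sat/1e3:.0f}k")
print(f"list price per satellite:    ${list_price_per_sat/1e6:.1f}M")
```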

Most Starlink satellites, however, will be launched on Starship. Indeed, the evolution of Starlink, as seen in its updated FCC filings, gives some idea of how the project's internal architecture has shifted as the Starship concept matured. The constellation size in successive filings has grown from 1,584 satellites to 2,825, then to 7,518, and finally to around 30,000, and by rough reckoning the figure may climb higher still. The minimum constellation for the first phase of service is about 360 satellites in 6 orbital planes, while full coverage out to 53 degrees from the equator requires 24 planes of 66 satellites, 1,584 in total. That is roughly two dozen Falcon launches, for a few hundred million dollars in internal costs. Starship, by contrast, is designed to loft on the order of 400 satellites at a time for about the same price per flight. Starlink satellites have to be replaced roughly every 5 years, so a 30,000-satellite constellation means building and launching about 6,000 satellites a year, or around 15 Starship flights: a launch bill in the low hundreds of millions of dollars per year, or a few tens of thousands of dollars per satellite. Each Falcon-launched satellite weighs roughly 260 kg; satellites lofted on Starship could be somewhat larger, carry third-party hardware, and still fit within the vehicle's capacity.
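A sketch of the replacement cadence implied by those round numbers; the 400-satellites-per-Starship figure is an assumption on my part, not an official payload spec.

```python
# Fleet replacement arithmetic for a 30,000-satellite constellation with a 5-year life.
constellation  = 30_000
lifetime_years = 5
per_falcon     = 60    # satellites per Falcon 9 flight
per_starship   = 400   # assumed satellites per Starship flight

sats_per_year = constellation / lifetime_years
print(f"replacement rate: {sats_per_year:.0f} satellites/year (~{sats_per_year/365:.0f}/day)")
print(f"Falcon flights/year needed:   {sats_per_year/per_falcon:.0f}")
print(f"Starship flights/year needed: {sats_per_year/per_starship:.0f}")
```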

What do the satellites cost? Among their brethren, Starlink satellites are somewhat unusual: they are assembled, stored, and launched flat, which makes them exceptionally easy to mass-produce. Experience suggests that production cost should roughly match launch cost; if the two differ widely, resources are being misallocated, since squeezing the larger of the two further buys relatively little overall. Is something like $100,000 per satellite really achievable for a first batch of a few hundred? In other words, is a Starlink satellite really no more complicated a device than a car?

To answer that properly, you need to understand why an orbiting communications satellite costs 1,000 times more even though it is not 1,000 times more complicated. Put bluntly: why is space hardware so expensive? There are many reasons, but the most compelling one here is this: if launching a satellite (pre-Falcon) costs more than $100 million, it has to be guaranteed to work for many years just to earn back any profit at all. Achieving that level of reliability in the first and only unit you build is a painful process that can drag on for years and consume the efforts of hundreds of people. When the launch itself is already that expensive, it is easy to justify all the extra process.

Starlink breaks that paradigm by building hundreds of satellites, fixing early design flaws quickly, and bringing in mass-production engineers to control costs. I can easily picture a Starlink production line where a technician integrates a design change and secures it with a zip tie (an aerospace-grade one, of course) within an hour or two, while maintaining the required replacement rate of 16 satellites per day. A Starlink satellite is made of plenty of intricate parts, but I see no reason the thousandth unit off the line cannot come down to something like $20,000. Indeed, in May Elon tweeted that the cost of manufacturing a satellite is already below the cost of launching it.

Let's take a middling case and work out the payback time with round numbers. A single Starlink satellite costs about $100,000 to assemble and launch and operates for 5 years. Will it pay for itself, and if so, how soon?

In 5 years, a Starlink satellite will circle the Earth roughly 30,000 times. On each of these ninety-minute orbits it will spend most of its time over the ocean and only a brief window over a densely populated city, and in that short window it transmits data and hurries to earn its keep. Assuming the antenna supports 100 beams, each carrying 100 Mbps with a modern coding scheme like 4096QAM, the satellite brings in about $1,000 per orbit at a price of $1 per GB. That is enough to pay off a roughly $100,000 deployment cost within a week, which greatly simplifies the capital structure. The remaining tens of thousands of orbits are profit, minus fixed costs.
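Here is the same payback estimate written out, so the assumptions are visible: $1/GB, 100 beams of 100 Mb/s, a roughly $100k deployed cost, and about 16 orbits per day. The "busy time" line just shows how much fully loaded transmission per orbit the $1,000 figure implies.

```python
# Payback sketch for one satellite, using the article's round numbers.
beams, mbps_per_beam = 100, 100
price_per_gb   = 1.0          # $ per GB delivered
deploy_cost    = 100_000      # build + launch, $
orbits_per_day = 24 / 1.5     # ~90-minute orbits

gb_per_second    = beams * mbps_per_beam / 8 / 1000          # 1.25 GB/s at full tilt
busy_s_per_orbit = 1000 / (gb_per_second * price_per_gb)     # time needed to earn $1,000
payback_orbits   = deploy_cost / 1000

print(f"busy time implied per orbit: {busy_s_per_orbit/60:.0f} minutes")
print(f"payback: {payback_orbits:.0f} orbits = about {payback_orbits/orbits_per_day:.0f} days")
```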

These estimates could easily be off, in either direction. But in any case, if you can put a capable satellite into low orbit for $100,000, or even for $1 million, apiece, that is a serious proposition. Even with ridiculously conservative utilization, a Starlink satellite can deliver tens of petabytes of data over its lifetime, at an amortized cost of a fraction of a cent per GB. And marginal cost barely grows when the data is carried over longer distances.

To understand the significance of this model, let's briefly compare it with two other models for delivering data to consumers: the traditional fiber optic cable, and the satellite constellation offered by a company that does not specialize in satellite launches.

SEA-ME-WE 4, the large submarine Internet cable connecting France and Singapore, entered service in 2005 with a design capacity of 1.28 Tb/s and a deployment cost of about $500 million. If it runs at full capacity for 10 years, and overheads come to 100% of the capital cost, data crosses it for about $0.02 per GB. Transatlantic cables are shorter and somewhat cheaper, but a submarine cable is just one entity in a long chain of parties that want to be paid for moving data. Our middle-of-the-road estimate for Starlink is roughly 8 times cheaper, and that price is all-inclusive.
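For transparency, the cable arithmetic looks like this; the 10-year life, full utilization, and 100% overhead are the stated assumptions above, not audited figures.

```python
# Amortized transit cost for a SEA-ME-WE-4-class cable under the assumptions above.
capex        = 500e6    # $
overhead     = 1.0      # lifetime overhead as a fraction of capex
capacity_bps = 1.28e12  # design capacity, bits per second
years        = 10
utilization  = 1.0      # fraction of capacity actually sold

seconds      = years * 365.25 * 24 * 3600
gb_delivered = capacity_bps * utilization * seconds / 8 / 1e9
print(f"{gb_delivered/1e9:.0f} billion GB delivered, ${capex*(1+overhead)/gb_delivered:.3f}/GB")
```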

How is that possible? A Starlink satellite contains all the complex electronic switching gear needed to tie fiber-optic networks together, except that it transmits through vacuum instead of through expensive, fragile glass. Transmission through space cuts out a series of cozy, obsolescent monopolies and lets users communicate through even less hardware.

Compare this with rival constellation developer OneWeb. OneWeb plans a constellation of about 600 satellites, launched on commercial vehicles at a price of roughly $20,000 per kg. One satellite weighs 150 kg, so in the best case launching a single unit costs about $3 million. The satellite hardware is estimated at about $1 million apiece, so by 2027 the full constellation will have cost around $2.6 billion. OneWeb's tests showed a peak throughput of 50 Mb/s, under ideal conditions, on each of 16 beams. Following the same scheme we used for Starlink, each OneWeb satellite generates about $80 per orbit and only around $2.4 million over 5 years, barely covering its launch cost, and that is counting data relayed to remote regions. All told, that works out to roughly $1.70 per GB.
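Applying the same per-gigabyte yardstick to both constellations makes the gap explicit. The deployed-cost inputs (roughly $200,000 per Starlink satellite, $4 million per OneWeb satellite) are my reading of the estimates above, so treat the outputs as order-of-magnitude only.

```python
# Lifetime cost per GB, assuming revenue at $1/GB is a proxy for data actually sold.
def cost_per_gb(deployed_cost, revenue_per_orbit, orbits=30_000, price_per_gb=1.0):
    gb_sold = revenue_per_orbit / price_per_gb * orbits
    return deployed_cost / gb_sold

starlink = cost_per_gb(deployed_cost=200_000,   revenue_per_orbit=1000)
oneweb   = cost_per_gb(deployed_cost=4_000_000, revenue_per_orbit=80)
print(f"Starlink: ${starlink:.4f}/GB   OneWeb: ${oneweb:.2f}/GB")
```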

Gwynne Shotwell was recently quoted as saying that Starlink is something like 17 times cheaper and faster than OneWeb, which implies a competitive price of about $0.10 per GB. And that is with the original Starlink configuration: less optimized production, launch on Falcon, restrictions on how data is relayed, and coverage of the northern United States only. SpaceX, in other words, has an undeniable advantage: today it can launch a far more capable satellite at a per-unit price many times lower than its competitors'. Starship will only widen that lead, so it is not hard to imagine SpaceX launching 30,000 satellites by 2027 for a few billion dollars, most of it funded out of its own pocket.

I am sure there are more optimistic analyses of OneWeb and the other budding constellation builders out there, but I have yet to see how they hold up.

Morgan Stanley recently estimated that Starlink satellites would cost $1 million each to build and $830,000 to launch. Gwynne Shotwell replied, in effect, that those numbers were way off. Curiously, they are close to our estimates for OneWeb's spending, and roughly 10 times higher than our baseline Starlink estimate. With Starship and mature mass production, the cost of deploying a satellite could fall to around $35,000 a unit, an astonishingly low figure.

One last point: consider the revenue per watt of solar power generated by a Starlink satellite. Judging from the photos on their website, each satellite's solar array is roughly 60 square meters and generates on average about 3 kW, or 4.5 kWh per orbit. If each orbit brings in about $1,000, the satellite is earning roughly $220 per kWh. That is something like 10,000 times the wholesale price of solar energy, which once again confirms that harvesting solar power in space for use on Earth is a hopeless business; the enormous added value comes from modulating that microwave power with data.
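The energy arithmetic, spelled out; the wholesale solar price is my assumption (a few cents per kWh), everything else comes from the figures above.

```python
# Revenue per kWh of onboard solar energy vs wholesale solar on the ground.
revenue_per_orbit = 1000   # $
array_power_kw    = 3      # average output
orbit_hours       = 1.5
wholesale_per_kwh = 0.025  # assumed ground-based wholesale price, $

energy_kwh = array_power_kw * orbit_hours
value_kwh  = revenue_per_orbit / energy_kwh
print(f"{energy_kwh} kWh/orbit -> ${value_kwh:.0f}/kWh, ~{value_kwh/wholesale_per_kwh:,.0f}x wholesale")
```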

Architecture

In the previous section I skated over a non-trivial part of the Starlink architecture: how it copes with the planet's extremely uneven population density. A Starlink satellite emits focused beams that form spots on the surface, and all subscribers within a spot share its bandwidth. The spot size is set by fundamental physics: its width is roughly (satellite altitude x microwave wavelength / antenna diameter), which for a Starlink satellite is, at best, a few kilometers.
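A one-liner makes the spot-size scaling concrete; the 330 km altitude, roughly 1 cm Ka-band wavelength, and 1 m aperture are the representative values used later in the article.

```python
# Diffraction-limited spot width: altitude * wavelength / antenna diameter.
altitude_m, wavelength_m, antenna_m = 330e3, 0.01, 1.0
print(f"spot width = {altitude_m * wavelength_m / antenna_m / 1000:.1f} km")
```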

In most cities the population density is around 1,000 people per square kilometer, though in places it is much higher; in parts of Tokyo or Manhattan there can be more than 100,000 people per spot. Fortunately, any city that dense has a competitive domestic market for broadband, not to mention a highly developed mobile network. Still, if many satellites of the constellation are above a city at once, throughput can be increased by spatially separating the antennas as well as by dividing up frequencies. In other words, dozens of satellites can each focus a powerful beam on the same area, and users there will have ground terminals that spread their requests across those satellites.

If in the early stages the most natural market is remote, rural, and suburban areas, the funds for further launches will come from providing better service to densely populated cities. That is the exact opposite of the standard market-expansion pattern, in which competitive city-centered services inevitably see profits decline as they try to expand into poorer and less densely populated areas.

A few years ago when I did the math, this was the best population density map.

[Map: global population density]

I took the data from this image and compiled the 3 plots below. The first shows the distribution of land area by population density. The most striking thing is that most of the Earth's land is essentially uninhabited, while very little of it holds more than 100 people per square kilometer.

[Chart 1: land area by population density]

The second graph shows the distribution of people by population density. Although most of the planet is uninhabited, the bulk of humanity lives at densities of 100-1,000 people per square kilometer. The width of this peak, a full order of magnitude, reflects the bimodal nature of urbanization: 100 people per square kilometer is relatively sparse rural territory, while 1,000 per square kilometer is characteristic of suburbs. City centers easily reach 10,000 people per square kilometer, and Manhattan approaches 30,000.

[Chart 2: population by population density]

The third graph shows population density by latitude. Almost everyone is concentrated between roughly 20 and 40 degrees north, which is how geography and history worked out, since a huge share of the southern hemisphere is ocean. This density profile is a daunting challenge for the constellation's architects, because satellites spend equal time over both hemispheres. Moreover, a satellite orbiting at an inclination of, say, 50 degrees spends more of its time near the extremes of its latitude range. This is why Starlink needs only 6 orbital planes to serve the northern US, but 24 to cover down to the equator.

[Chart 3: population density by latitude]

Indeed, if we overlay the population density plot with the constellation density plot, the choice of orbits becomes obvious. Each bar chart corresponds to one of SpaceX's four FCC filings. To my eye each new filing reads as an addition to the previous one, and either way it is easy to see how the added satellites increase capacity over the corresponding regions of the northern hemisphere. By contrast, there is an impressive amount of unused capacity over the southern hemisphere - rejoice, dear Australia!

[Chart: constellation density by latitude for each FCC filing]

What happens to user data once it reaches a satellite? In the original design, a Starlink satellite immediately relays it back down to a dedicated ground station near the service area, a configuration called "direct relay". Later, Starlink satellites will be able to talk to each other by laser. Data demand peaks over densely populated cities, but traffic can then be spread across the laser mesh in two dimensions. In practice this means the satellite network contains a huge amount of latent backhaul capacity: user data can be dropped back to Earth wherever it is convenient. My guess is that SpaceX ground stations will be co-located with Internet exchange points outside the cities.

It turns out that satellite-to-satellite communication is not trivial when the satellites are not moving together. The most recent FCC filings describe 11 distinct orbital groups of satellites. Within a group, satellites share the same altitude, inclination, and eccentricity, so lasers can track nearby satellites relatively easily. Between groups, however, closing speeds are measured in kilometers per second, so inter-group links, if they exist at all, would have to be short-lived, rapidly re-pointed microwave connections.

The topology of an orbital group is a bit of a mathematical digression and not strictly necessary here, but I think it is delightful, so I have included it. If this section does not interest you, skip straight to "Limitations of fundamental physics".

A torus, or donut, is a mathematical object defined by two radii. It is easy to draw circles on the surface of a torus either parallel or perpendicular to its axis of symmetry. You may be interested to learn that there are two other families of circles that can be drawn on a torus, both of which pass through the hole in the middle and around the outside. These are the so-called Villarceau circles, and I used this construction when I designed the toroid for a Burning Man Tesla coil in 2015.

Although satellite orbits are, strictly speaking, ellipses rather than circles, the same construction applies to Starlink. A shell of 4,500 satellites in many orbital planes, all at the same inclination, forms a continuously moving layer above the Earth's surface. The northward-moving part of the layer, on reaching its maximum latitude, turns around and heads back south. To avoid collisions the orbits are made slightly eccentric, so that the northbound layer sits a few kilometers above (or below) the southbound one. Together the two layers form a gently inflated torus, shown below in a highly exaggerated diagram.

[Diagram: two orbital layers forming a torus, eccentricity exaggerated]

Remember that within this torus, communication happens between neighboring satellites. Broadly speaking, there are no direct, long-lived links between satellites in different layers, because the closing speeds are too high for laser pointing. Data moving between layers instead crosses above or below the torus.

In total, 30,000 satellites will occupy a whole set of nested tori, out beyond the orbit of the ISS! The diagram below shows how all these layers pack together, without the exaggerated eccentricity.

[Diagram: nested orbital tori packed together]

Finally, there is the question of the optimal altitude. The dilemma: a low altitude gives more throughput, with smaller beam spots, while a high altitude covers the whole planet with fewer satellites. Over time, SpaceX's FCC filings have specified ever lower altitudes, as Starship's progress makes it feasible to deploy larger constellations faster.

Low altitude has other benefits, too, including a lower risk from space debris and milder consequences when hardware fails. Thanks to increased atmospheric drag, the lowest Starlink satellites (330 km) would burn up within a few weeks of losing attitude control. Indeed, 300 km is an altitude at which satellites almost never fly; holding it requires an onboard krypton electric thruster and a streamlined design. In theory, a sufficiently pointy satellite with an electric thruster could hold a stable 160 km altitude, but SpaceX is unlikely to fly that low, because it still has a few other tricks in store for increasing throughput.

Limitations of fundamental physics

It seems unlikely that the deployed cost of a satellite will ever drop much below $35,000, even with advanced, fully automated manufacturing and fully reusable Starships, and it is not yet entirely clear what limits physics places on what a single satellite can do. The analysis above assumed a peak throughput of roughly 10 Gb/s (the 100 beams of 100 Mb/s each used earlier).

The capacity limit of a channel is set by the Shannon-Hartley theorem: capacity equals bandwidth times log2(1 + SNR). Bandwidth is usually limited by available spectrum, while SNR is set by available power on the satellite, background noise, and inter-channel interference from antenna imperfections. Another notable hurdle is processing speed. The latest Xilinx UltraScale+ FPGAs offer GTM serial transceivers at up to 58 Gb/s, which is respectable given current bandwidth constraints and avoids developing custom ASICs. But even 58 Gb/s demands an impressive spectrum allocation, most likely in the Ka or V band. The V band (40-75 GHz) has more spectrum available but suffers more atmospheric absorption, especially in humid regions.
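As a sanity check on how demanding 58 Gb/s is, here is the Shannon-Hartley formula evaluated for a few hypothetical spectrum allocations; the specific bandwidth values are illustrative, not actual Starlink licenses.

```python
# Shannon-Hartley: what SNR does 58 Gb/s require in a given bandwidth?
import math

target_bps = 58e9                      # one GTM serial lane
for bandwidth_hz in (1e9, 2e9, 4e9):   # hypothetical allocations
    bits_per_hz = target_bps / bandwidth_hz
    snr_db = 10 * math.log10(2 ** bits_per_hz - 1)
    print(f"B = {bandwidth_hz/1e9:.0f} GHz -> {bits_per_hz:.1f} b/s/Hz, SNR >= {snr_db:.0f} dB")
```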

Are 100 beams practical? The problem has two aspects: beam width and phased-array element count. Beam width is set by the wavelength divided by the antenna diameter. Digital phased arrays are still a specialized technology, and the maximum practical panel size is set by the width of a reflow oven (about 1 m), beyond which costs climb quickly. The wavelength in the Ka band is about 1 cm, giving a beam width of about 0.01 radians (full width at half maximum). Assuming an addressable field of view of about 1 steradian (similar to the coverage of a 50 mm camera lens), on the order of 2,500 individual beams would tile that area. Linear algebra implies that 2,500 independent beams require at least 2,500 antenna elements in the array, which is feasible in principle, though difficult. And it will all run very hot!
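The beam-count estimate in code; the two-beamwidth spacing is a packing assumption I have added to reproduce the ~2,500 figure, and a real array design would trade this off differently.

```python
# How many beams tile a 1-steradian field of view for a 1 m Ka-band array?
wavelength, aperture, fov_sr = 0.01, 1.0, 1.0   # meters, meters, steradians

beamwidth = wavelength / aperture               # ~0.01 rad (FWHM)
beam_solid_angle = (2 * beamwidth) ** 2         # assume ~2-beamwidth spacing
beams = fov_sr / beam_solid_angle
print(f"beamwidth = {beamwidth} rad, ~{beams:.0f} beams, >= {beams:.0f} array elements")
```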

A total of 2,500 channels, each carrying 58 Gb/s, is a huge amount of information: roughly 145 Tb/s. For comparison, all Internet traffic in 2020 was expected to average about 640 Tb/s. That is good news for anyone worried that satellite Internet is fundamentally bandwidth-starved. If a constellation of 30,000 satellites is operating by 2026, global traffic will have grown several-fold by then. If half of it is carried by the roughly 800 satellites over densely populated areas at any given moment, the peak demand per satellite is on the order of 500 Gb/s, far above our original baseline estimate, meaning the potential revenue per satellite grows many times over.

For a satellite at 330 km, a 0.01-radian beam covers an area of about 10 square kilometers. In an especially dense place like Manhattan, up to 300,000 people live in such an area. What if they all sit down to stream Netflix in HD (call it 7 Mbps) at the same time? The total demand would be about 2,000 Gb/s, roughly 35 times the hard limit imposed by the FPGA's serial output. There are two ways out of this, only one of which is physically possible.
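The Manhattan thought experiment in numbers; the 30,000 people per square kilometer density and the 7 Mb/s HD stream are the rough values used above.

```python
# Peak demand inside one 10 km^2 beam spot vs one satellite's processing limit.
spot_area_km2, people_per_km2 = 10, 30_000   # Manhattan-ish density (assumption)
hd_stream_mbps, fpga_limit_gbps = 7, 58

demand_gbps = spot_area_km2 * people_per_km2 * hd_stream_mbps / 1000
print(f"demand = {demand_gbps:,.0f} Gb/s, about {demand_gbps/fpga_limit_gbps:.0f}x "
      f"the {fpga_limit_gbps} Gb/s limit")
```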

The first is to put more satellites into orbit, so that at any given moment more than 35 of them hang over the areas of highest demand. If we again take 1 steradian as a reasonable addressable patch of sky and an average altitude of 400 km, we need a constellation density of about 0.0002 satellites per square kilometer, or roughly 100,000 satellites in total if they were spread evenly over the globe. Recall, though, that SpaceX's chosen orbits dramatically concentrate coverage over the densely populated band between 20 and 40 degrees north, and suddenly a figure of 30,000 satellites no longer seems so magical.
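The uniform-shell estimate, written out; it treats 1 steradian of sky seen from the ground at 400 km altitude as covering a ground patch of area (400 km)^2, the same simplification used in the text.

```python
# Satellites needed if 35 must be usable at once and the shell were spread uniformly.
earth_area_km2, altitude_km, fov_sr = 510e6, 400, 1.0
sats_in_view = 35

patch_km2 = fov_sr * altitude_km ** 2        # ground area under 1 sr of sky
density   = sats_in_view / patch_km2
print(f"density = {density:.4f}/km^2, uniform total = {density*earth_area_km2:,.0f} satellites")
```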

The second idea is much cooler but, sadly, unworkable. Recall that beam width is set by the width of the phased array. What if arrays on several satellites combined forces to synthesize a narrower beam, the way radio telescopes like the VLA (Very Large Array) do? The complication is that the baselines between satellites would have to be known with sub-millimeter accuracy to keep the beam phase-stable. And even if that were possible, the resulting beam would have dreadful sidelobes, because the satellites sample the sky so sparsely. On the ground the central beam would narrow to a few millimeters, enough to track an individual phone antenna, but there would be millions of spurious lobes due to poor sidelobe nulling. Blame the thinned-array curse.

It turns out that separating channels by angle, because the satellites are spread across the sky, provides adequate throughput gains without violating the laws of physics.

Application

What does Starlink's customer base look like? By default it is hundreds of millions of users with pizza-box-sized antennas on their roofs, but there are other lucrative segments.

In remote and rural areas, ground stations do not need large phased arrays to keep beams narrow, so much smaller user equipment becomes possible, from IoT asset trackers to pocket satellite phones, emergency beacons, and scientific animal-tracking tags.

In dense urban areas, Starlink can provide primary and backup backhaul for cellular networks. Each cell tower could carry a high-performance ground station on its roof while using grid power for amplification and last-mile transmission.

Finally, even in crowded areas during the initial rollout, there is a market for the exceptionally low latency of low-orbit satellites. Financial firms will pay handsomely to receive market-moving data from across the world just a little sooner. And although data routed through Starlink travels a longer path, up through space and back, light moves about 50% faster in vacuum than in glass fiber, which more than makes up the difference on long routes.
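A rough latency comparison for a long route; the 10,000 km distance, the 10% path inflation for hops between satellites, and the 550 km altitude are all assumptions, and switching and queuing delays are ignored on both paths.

```python
# One-way latency: long-haul fiber vs a Starlink-style space route (very rough).
C_KM_S       = 299_792   # light in vacuum, km/s
FIBER_FACTOR = 0.67      # light in glass travels at ~2/3 c

route_km = 10_000                       # hypothetical intercontinental route
space_km = 1.10 * route_km + 2 * 550    # zig-zag between satellites + up/down legs

t_fiber = route_km / (C_KM_S * FIBER_FACTOR) * 1000
t_space = space_km / C_KM_S * 1000
print(f"fiber = {t_fiber:.0f} ms   via satellites = {t_space:.0f} ms (one-way)")
```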

Negative consequences

This last section is about the downsides. The purpose of this article is to clear up misconceptions about the project, and the potential negative consequences generate the most argument. I will lay out what I know and refrain from over-interpreting: I am not clairvoyant, and I have no inside sources at SpaceX.

The most serious consequence, in my view, is expanded access to the Internet itself. Even in my hometown of Pasadena, a bustling, tech-rich city that hosts several observatories, a world-class university, and one of NASA's largest centers, choice in Internet service is limited. Across the US and much of the rest of the world, the Internet has become a rent-seeking utility, with ISPs squeezing out their $50 a month in a cozy, non-competitive environment. Arguably any service piped into homes and apartment buildings is a utility, but the quality of Internet service is far more uneven than that of water, electricity, or gas.

The problem with the status quo is that, unlike water, electricity, or gas, the Internet is still young and evolving fast. We keep finding new uses for it; the most revolutionary ones have yet to be discovered, yet data caps and bundled plans stifle competition and innovation. Billions of people are locked out of the digital revolution by the circumstances of their birth, or because their country lies too far from a major submarine cable. Across large regions of the planet, the Internet still arrives via geostationary satellite at extortionate prices.

Starlink, raining Internet down from the sky, breaks this model. I know of no better way to connect billions of people to the Internet. SpaceX is well on its way to becoming an ISP, and potentially an Internet company to rival Google and Facebook. I bet you did not see that coming.

That satellite Internet is the best option is not obvious. SpaceX, and only SpaceX, is in a position to build a vast constellation quickly, and it alone spent a decade breaking the government-military monopoly on space launch. Even if Iridium had sold ten times as many phones, it still would not have achieved widespread adoption using traditional launch providers. Without SpaceX and its unusual business model, the chances are good that global satellite Internet would simply never happen.

The second major concern is astronomy. After the launch of the first 60 Starlink satellites there was a wave of criticism from the international astronomical community, arguing that a many-fold increase in the number of satellites would spoil their access to the night sky. There is a saying that among astronomers, whoever has the bigger telescope wins. Without exaggeration, doing astronomy in the modern era is extraordinarily difficult, a continuous struggle to improve the quality of measurement against a background of growing light pollution and other sources of noise.

The last thing an astronomer needs is thousands of bright satellites flashing through the telescope's field of view. Indeed, the original Iridium constellation was infamous for its "flares", caused by large flat panels reflecting sunlight onto small patches of the Earth. They could reach the brightness of the quarter Moon and occasionally even damaged sensitive astronomical sensors by accident. The fear that Starlink will encroach on the bands used by radio astronomy is not unfounded either.

If you download a satellite-tracking app, you can spot dozens of satellites crossing the sky on a clear evening. Satellites are visible after sunset and before dawn, but only while they are lit by the sun; later in the night they disappear into the Earth's shadow. Tiny and very distant, they move fast. There is a chance one will occult a distant star for less than a millisecond, but I suspect even detecting that would be a headache of its own.

The intense concern about sky brightness arose because the first batch of satellites was strung out close to the Earth's terminator, so night after night Europe, in summer no less, watched the epic spectacle of a train of satellites crossing the evening twilight. Furthermore, simulations based on the FCC filings showed that satellites in 1,150 km orbits would remain visible even after astronomical twilight ends. Twilight comes in three stages, civil, nautical, and astronomical, corresponding to the sun sitting 6, 12, and 18 degrees below the horizon. At the end of astronomical twilight, sunlight passes no closer than about 650 km above the surface at the zenith, well outside the atmosphere and above much of low Earth orbit. Based on the Starlink website, I believe all the satellites will end up below 600 km. In that case they will be visible at dusk but not in the dead of night, which greatly reduces the potential impact on astronomy.

The third issue is orbital debris. In a previous post I pointed out that satellites and debris below 600 km deorbit within a few years due to atmospheric drag, which greatly reduces the risk of Kessler syndrome. SpaceX takes plenty of flak, as if it did not care about space junk at all, yet looking at the details of Starlink's implementation, it is hard for me to imagine a better way to minimize debris in orbit.

The satellites are released at an altitude of about 350 km and then climb to their operational orbits on their own thrusters. Any satellite that dies at deployment will deorbit within weeks instead of tumbling around up there for thousands of years; the strategy effectively builds free disposal into on-orbit checkout. Furthermore, Starlink satellites are flat, so if one loses attitude control, the added drag quickly pulls it down into the denser layers of the atmosphere.

Few people realize that SpaceX pioneered the use of alternative release mechanisms in place of pyrotechnic squibs. Virtually every other launch provider uses squibs to separate stages, satellites, fairings, and so on, adding to the debris problem. SpaceX also deliberately deorbits its upper stages rather than leaving them to linger in space, where they would slowly degrade and break apart in the harsh environment.

Finally, the last issue I would like to mention is the chance that SpaceX will replace the existing Internet monopoly by creating its own. In its niche, SpaceX has already monopolized launches. Only the desire of rival governments to gain guaranteed access to space prevents expensive and obsolete rockets, which are often assembled by large monopolistic defense contractors, from being scrapped.

It is not hard to imagine SpaceX launching 6,000 of its own satellites a year by 2030, plus a few spy satellites for good measure. Cheap, reliable SpaceX satellites could sell "rack space" for third-party hardware: any university that builds a space-capable camera could put it into orbit without paying for an entire spacecraft bus. With access to space that cheap and abundant, "Starlink" becomes synonymous with "satellite", and the historical manufacturers fade into the past.

There are examples in history of far-sighted companies that have occupied such a huge niche in the market that their names have become household names: Hoover, Westinghouse, Kleenex, Google, Frisbee, Xerox, Kodak, Motorola, IBM.

The trouble starts when a pioneering company resorts to anti-competitive practices to preserve its market share, something that has too often been tolerated since the Reagan era. SpaceX could protect Starlink's position by leaving rival constellation builders to launch on vintage Soviet rockets. Similar behavior by the United Aircraft and Transport Corporation, coupled with price fixing on air mail, led to its breakup in 1934. Fortunately, SpaceX is unlikely to hold an absolute monopoly on reusable rockets forever.

More worrying is that SpaceX's deployment of tens of thousands of low-orbit satellites can be read as an enclosure of the commons: a private company, pursuing private gain, taking permanent possession of orbital slots that were once public and unoccupied. And while SpaceX's innovations are what made it possible to actually earn money in vacuum, much of its intellectual capital was built on billions of dollars of publicly funded research.

On the one hand, we need laws that protect private investment in research and development; without that protection, innovators cannot finance ambitious projects, or they move their companies to wherever such protection exists, and either way the public loses because the value is never created. On the other hand, we need laws that protect the public, the nominal owners of the commons, including the sky, from rent-seeking private actors who annex public goods. Neither extreme is right, or even workable, on its own. SpaceX's work gives us a chance to find the happy medium in this new market, and we will know we have found it when the pace of innovation and the creation of public value are maximized.

Final Thoughts

I wrote this article right after finishing the previous one about Starship. It has been a busy week. Both Starship and Starlink are revolutionary technologies taking shape before our eyes, within our lifetimes. If I live to see my grandchildren grow up, they will be more surprised that I am older than Starlink than that my childhood had neither cell phones (museum pieces!) nor a public Internet at all.

The rich and the military have had satellite Internet for a long time, but a ubiquitous, universal, cheap Starlink is simply not possible without Starship.

Cheap launch has been talked about for a long time, but a Starship cheap enough to be an interesting platform is not possible without Starlink.

Human spaceflight has been talked about for a long time, and if you happen to be a jet fighter pilot who is also a neurosurgeon, then you have the green light. With Starship and Starlink, human space exploration becomes an achievable near future, a stone's throw from an orbital outpost to industrial cities in deep space.

Source: habr.com
