Nick Bostrom: Are We Living in a Computer Simulation (2001)

I collect the most important texts of all times and peoples that shape our worldview and our picture of the world ("Ontol"). After much thought, I put forward a daring hypothesis: this text is more revolutionary and more important for our understanding of the structure of the world than the Copernican revolution or the works of Kant. On the Russian-language internet the full version of this text was in a terrible state, so I tidied it up a little and, with the translator's permission, am publishing it for discussion.


"Are you living in a computer simulation?"

by Nick Bostrom [Published in Philosophical Quarterly (2003) Vol. 53, no. 211, pp. 243-255. (First version: 2001)]

This article claims that at least one of the following three propositions is true:

  • (1) it is very likely that humanity will die out before reaching the "post-human" phase;
  • (2) it is extremely unlikely that any posthuman civilization will run a significant number of simulations of its evolutionary history (or variations thereof); and
  • (3) we are almost certainly living in a computer simulation.

It follows that the probability that we will one day become a post-human civilization that runs simulations of its predecessors is essentially zero, unless we accept that we are already living in a simulation. Other consequences of this result are also discussed.

1. Introduction

Many works of science fiction, as well as the predictions of serious futurists and technology researchers, predict that enormous amounts of computing power will be available in the future. Let us assume that these predictions are correct. In that case, future generations with their super-powerful computers will be able to run detailed simulations of their predecessors or of people like their predecessors. Because their computers will be so powerful, they will be able to run a great many such simulations. Let us assume that these simulated people are conscious (and they will be if the simulations are highly accurate and if a certain widely accepted conception of consciousness in philosophy is correct). It follows that the vast majority of minds like ours would belong not to the original race but to people simulated by the advanced descendants of the original race. On this basis it can be argued that it is reasonable to expect that we are among the simulated minds rather than among the original, natural biological ones. Thus, unless we think that we are now living in a computer simulation, we should not assume that our descendants will run many simulations of their ancestors. This is the main idea. In the rest of the paper we will consider it in more detail.

Apart from the interest this thesis may hold for those involved in futurological discussions, it is also of purely theoretical interest. The proof is a stimulus for formulating some methodological and metaphysical problems, and it also offers some natural analogies to traditional religious concepts, analogies that may seem surprising or suggestive.

The structure of this article is as follows: we will first formulate an assumption that we need to import from the philosophy of mind in order for this proof to work. We will then consider some empirical reasons for believing that running a vast number of simulations of human minds will be within the power of a future civilization that has developed many of those technologies which can already be shown not to contradict known physical laws and engineering constraints.

This part is not necessary from a philosophical point of view, but it nevertheless encourages attention to the main idea of the article. This is followed by a summary of the proof, using some simple applications of probability theory, and a section justifying the soft principle of equivalence that the proof uses. Finally, we will discuss some interpretations of the alternatives mentioned at the beginning, and this will conclude the argument about the simulation problem.

2. The carrier-independence assumption

A common assumption in the philosophy of mind is that of carrier independence. The idea is that mental states can arise in a wide class of physical carriers. Provided that the system embodies the right set of computational structures and processes, conscious experiences can arise in it. It is not essential that the intracranial processes be embodied in carbon-based biological neural networks: silicon-based processors inside computers could do exactly the same trick. Arguments in favor of this thesis have been put forward in the existing literature, and although it is not entirely uncontroversial, we will take it as a given here.

The proof we offer here, however, does not depend on any very strong version of functionalism or computationalism. For example, we need not accept that the carrier-independence thesis is necessarily true (whether analytically or metaphysically) – only that, in fact, a computer running an appropriate program could be conscious. Moreover, we need not assume that in order to create consciousness in a computer we would have to program it so that it behaves like a person in all cases, passes the Turing test, and so on. We need only the weaker assumption that, for subjective experiences to arise, it is sufficient that the computational processes of a human brain be structurally replicated in suitably fine-grained detail, for example at the level of individual synapses. This refined version of carrier independence is quite widely accepted.

Neurotransmitters, nerve growth factors, and other chemicals that are smaller than synapses clearly play a role in human cognition and learning. The carrier-independence thesis is not that the effects of these chemicals are small or negligible, but that they affect subjective experience only through direct or indirect effects on computational activity. For example, if there can be no difference in subjective experience without there also being a difference in synaptic discharges, then the required simulation detail is at the synaptic level (or higher).

3. Technological limits of computing

At the current level of technological development, we do not have enough powerful hardware or software to create conscious minds on a computer. However, strong arguments have been made that if technological progress continues unabated, then these limitations will eventually be overcome. Some authors argue that this phase will come in just a few decades. However, for the purposes of our discussion, no assumptions about the time scale are required. The simulation proof works just as well for those who believe that it will take hundreds of thousands of years to reach the "posthuman" phase of development, when humanity will acquire most of the technological abilities that can now be shown to be consistent with physical laws and with material and energy restrictions.

This mature phase of technological development will make it possible to turn planets and other celestial resources into computers of colossal power. At the moment it is difficult to be sure of any limits on the computing power that will be available to posthuman civilizations. Since we still do not have a "theory of everything", we cannot rule out the possibility that new physical phenomena, not allowed for by current physical theories, may be used to overcome the limitations that, according to our present understanding, impose theoretical bounds on the processing of information inside a given piece of matter. With much greater reliability we can establish lower bounds for posthuman computing, assuming the implementation of only those mechanisms that are already understood. For example, Eric Drexler sketched a sugar-cube-sized system (excluding cooling and power supply) that could perform 10^21 operations per second. Another author gave a rough estimate of 10^42 operations per second for a planet-sized computer. (If we learn how to build quantum computers, or learn how to build computers from nuclear matter or plasma, we could get even closer to the theoretical limits. Seth Lloyd calculated an upper limit for a 1 kg computer of 5×10^50 logical operations per second performed on 10^31 bits. However, for our purposes it is sufficient to use the more conservative estimates, which assume only currently known principles of operation.)

The amount of computing power needed to emulate a human brain can be roughly estimated in the same way. One estimate, based on how computationally expensive it is to replicate the functioning of a piece of neural tissue that we already understand and whose functionality has already been replicated in silicon (namely, the contrast-enhancement system in the retina), gives a figure of approximately 10^14 operations per second. An alternative estimate, based on the number of synapses in the brain and the frequency of their firing, gives a value of 10^16–10^17 operations per second. Correspondingly more computing power would be required if we wanted to simulate in detail the inner workings of synapses and dendritic branches. However, it is highly likely that the human central nervous system has a certain amount of redundancy at the micro level to compensate for the unreliability and noise of its neural components. One would therefore expect a significant gain in efficiency when using more reliable and flexible non-biological processors.

Memory is no more of a limitation than processor power. Moreover, since the maximum flow of human sensory data is on the order of 10^8 bits per second, simulating all sensory events would require a negligible cost compared to simulating cortical activity. Thus, we can use the processing power required to simulate the central nervous system as an estimate of the total computational cost of simulating the human mind.

If the environment is included in the simulation, this will require additional computing power, the amount of which depends on the size and detail of the simulation. Simulating the entire universe down to the quantum level is obviously impossible, unless some new physics is discovered. But far less is needed to get a realistic simulation of human experience: just enough to make sure that simulated people, interacting in a normal human way with their simulated environment, do not notice any difference. The microscopic structure of the Earth's interior can safely be omitted. Distant astronomical objects can be subjected to a very high level of compression: an exact resemblance is needed only in the narrow range of properties that we can observe from our planet or from spacecraft inside the solar system. On the surface of the Earth, macroscopic objects in inhabited places must be continuously simulated, but microscopic phenomena can likely be filled in ad hoc, i.e. as needed. What you see through an electron microscope should look unsuspicious, but you usually have no way of checking its consistency with unobservable parts of the microcosm. Exceptions occur when we deliberately design systems to harness unobservable microscopic phenomena that operate according to known principles to produce results that we can independently verify. The classic example of this is the computer. The simulation, therefore, must include continuous simulation of computers down to the level of individual logic elements. This is not a problem, since our current computing power is negligible by posthuman standards.

Moreover, a posthuman simulation creator would have enough processing power to track in detail the states of thought in all human brains at all times. Thus, when it finds that some person is about to make some observation of the microcosm, it can fill in the simulation with whatever level of detail is required. Should any error occur, the simulation director can easily edit the states of any brains that have become aware of the anomaly before it spoils the simulation. Or the director can rewind the simulation a few seconds and restart it in a way that avoids the problem.

It follows that the most costly part of creating a simulation that is indistinguishable from physical reality for the human minds residing in it will be the simulation of organic brains down to the neuronal or sub-neuronal level. While it is not possible to give a very accurate estimate of the cost of a realistic simulation of human history, we can use a figure of 10^33–10^36 operations as a rough estimate.

As we gain more experience in creating virtual reality, we will gain a better understanding of the computational requirements needed to make such worlds look realistic to their visitors. But even if our estimate is off by several orders of magnitude, it does not really matter for our proof. We noted that a rough estimate of the computing power of a planet-sized computer is 10^42 operations per second, and that is only taking into account already known nanotechnological designs, which are probably far from optimal. A single such computer could simulate the entire mental history of mankind (let us call this an ancestor simulation) by using less than one millionth of its processing power for one second. A posthuman civilization may eventually build an astronomical number of such computers. We can conclude that a posthuman civilization could run an enormous number of ancestor simulations even if it devoted only a small fraction of its resources to it. We can reach this conclusion even if there is a significant error in all our estimates; a numerical sketch follows the summary point below.

  • Posthuman civilizations will have enough computing resources to run a huge number of ancestor simulations, even using a very small fraction of their resources for this purpose.
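
To make the arithmetic behind this conclusion concrete, here is a minimal sketch in Python that simply plugs in the figures quoted in this section (the planet-sized computer's 10^42 operations per second and the 10^33–10^36 operations estimated for an ancestor simulation; the one-millionth-for-one-second budget is the one used in the text):

```python
# Order-of-magnitude check using the estimates quoted in this section.
PLANET_COMPUTER_OPS_PER_SEC = 1e42          # rough estimate for a planet-sized computer
ANCESTOR_SIM_COST_LOW, ANCESTOR_SIM_COST_HIGH = 1e33, 1e36   # cost of one ancestor simulation

# One millionth of the machine's processing power, used for one second:
budget_ops = PLANET_COMPUTER_OPS_PER_SEC * 1e-6 * 1.0

print(f"budget: {budget_ops:.0e} operations")
print(f"required: {ANCESTOR_SIM_COST_LOW:.0e} to {ANCESTOR_SIM_COST_HIGH:.0e} operations")
print("covers even the high estimate:", budget_ops >= ANCESTOR_SIM_COST_HIGH)   # True
```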

4. The core of the simulation proof

The main idea of this article can be expressed as follows: if there is a significant chance that our civilization will someday reach the post-human stage and run many ancestral simulations, then how can we prove that we are not living in one such simulation?

We will develop this idea in the form of a rigorous proof. Let's introduce the following notation:

f_P is the proportion of all human-level civilizations that survive to the post-human stage;
N is the average number of ancestral simulations that a posthuman civilization runs;
H is the average number of people who lived in a civilization before it reached the posthuman stage.

Then the real proportion of all observers with human experience who live in the simulation is:

f_sim = (f_P · N · H) / (f_P · N · H + H)

Denoting by f_I the proportion of posthuman civilizations that are interested in running ancestor simulations (or that contain at least some individuals interested in doing so and possessing sufficient resources to run a significant number of them), and by N_I the average number of ancestor simulations run by such interested civilizations, we get:

N = f_I · N_I

And hence:

f_sim = (f_P · f_I · N_I) / (f_P · f_I · N_I + 1)    (*)

Due to the colossal computing power of posthuman civilizations, N_I is an extremely large value, as we saw in the previous section. Looking at formula (*), we can see that at least one of the following three propositions must be true:

(1) f_P ≈ 0;    (2) f_I ≈ 0;    (3) f_sim ≈ 1.
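
A minimal numerical sketch of formula (*) in Python (the sample values below are purely illustrative assumptions, not estimates from the article) shows why an astronomically large N_I forces this trichotomy: unless f_P or f_I is essentially zero, f_sim is driven toward one.

```python
def f_sim(f_p, f_i, n_i):
    """Fraction of human-type observers living in simulations, formula (*)."""
    return (f_p * f_i * n_i) / (f_p * f_i * n_i + 1)

# Illustrative values only; N_I is assumed to be huge, as argued in section 3.
N_I = 1e12
for f_p, f_i in [(0.0, 0.5), (0.5, 0.0), (1e-15, 0.5), (0.01, 0.01)]:
    print(f"f_P={f_p:g}, f_I={f_i:g}  ->  f_sim={f_sim(f_p, f_i, N_I):.6f}")
# Only when f_P * f_I is vanishingly small does f_sim stay away from 1.
```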

5. Soft principle of equivalence

We can take this a step further and conclude that if (3) is true, then you can be almost certain that you are in a simulation. Generally speaking, if we know that a proportion x of all observers with human-type experience live in a simulation, and we have no additional information indicating that our own particular experience is any more or less likely than other kinds of human experience to be implemented in a machine rather than in vivo, then our confidence that we are in a simulation should equal x:

Cr(SIM | f_sim = x) = x

This step is justified by a very weak principle of equivalence. Let us distinguish two cases. In the first, simpler case, all the minds under consideration are like yours in the sense that they match your mind exactly qualitatively: they have the same information and the same experiences that you have. In the second case, the minds are only similar to each other in a broad sense, being the kind of minds typical of human beings, but qualitatively different from one another, each with its own set of experiences. I argue that even when the minds are qualitatively different, the simulation proof still works, provided you have no information bearing on the question of which of the various minds are simulated and which are biologically realized.

A detailed justification of a stronger principle, which includes both of our particular examples as trivial special cases, has been given in the literature. Lack of space makes it impossible to reproduce the whole justification here, but we can give one of the intuitive arguments. Let us imagine that x% of the population has a certain genetic sequence S within the part of their DNA usually called "junk DNA". Assume further that S has no manifestations (except those that may show up in genetic testing) and that there are no correlations between possessing S and any external features. It is then quite obvious that, before your DNA is sequenced, it is rational to assign a credence of x% to the hypothesis that you have fragment S. And this is quite independent of the fact that people who have S have minds and experiences that are qualitatively different from those of people who do not have S. (They are different simply because all people have different experiences, not because there is any direct connection between S and the kind of experience a person has.)

The same reasoning applies if S is not the property of having a particular genetic sequence but instead the fact of being in a simulation, assuming that we have no information that allows us to predict any differences between the experiences of simulated minds and the experiences of the original biological minds.

It should be emphasized that the soft principle of equivalence prescribes equivalence only between hypotheses about which observer you are, when you have no information about which of these observers you are. It does not in general prescribe equivalence between hypotheses when you lack specific information about which of the hypotheses is true. Unlike Laplacean and other stronger principles of equivalence, it is therefore not subject to Bertrand's paradox and other similar predicaments that make it difficult to apply principles of equivalence without restriction.

Readers familiar with the Doomsday argument (DA) (J. Leslie, "Is the End of the World Nigh?", Philosophical Quarterly 40, 158: 65-72 (1990)) may worry that the principle of equivalence applied here rests on the same assumptions that get the DA off the ground, and that the counterintuitiveness of some of the latter's conclusions casts a shadow on the validity of the simulation reasoning. This is not so. The DA relies on a much stronger and more controversial premise: that a person must reason as if he were a random sample from the entire set of people who have ever lived and will ever live (past, present, and future), despite the fact that we know that we live at the beginning of the 21st century and not at some point in the distant future. The soft principle of equivalence, by contrast, applies only to cases where we have no additional information about which group of people we belong to.

If betting odds provide any guidance to rational belief, then, should everyone bet on whether they are in a simulation or not, those who use the soft principle of equivalence and bet that they are in a simulation, based on the knowledge that most people are, will almost all win their bets. Those who bet that they are not in a simulation will almost all lose. It therefore seems better to follow the soft principle of equivalence. Further, one can imagine a sequence of possible situations in which an increasing proportion of people live in simulations: 98%, 99%, 99.9%, 99.9999%, and so on. As we approach the limiting case in which everyone lives in a simulation (from which one can deductively infer that one is in a simulation oneself), it seems reasonable to require that the confidence one assigns to being in a simulation smoothly and continuously approach the limiting case of complete certainty.
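
The betting argument can be illustrated with a tiny Monte Carlo sketch in Python (the fractions used are the illustrative figures from the paragraph above, not claims about the actual world): observers who follow the soft principle of equivalence and bet that they are simulated win almost all of their bets.

```python
import random

def winning_share(frac_simulated, n_observers=100_000, seed=0):
    """Share of observers who win if every one of them bets 'I am in a simulation'."""
    rng = random.Random(seed)
    wins = sum(rng.random() < frac_simulated for _ in range(n_observers))
    return wins / n_observers

for frac in (0.98, 0.99, 0.999, 0.999999):
    print(f"fraction simulated = {frac}: winning bets ~ {winning_share(frac):.6f}")
```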

6. Interpretation

The possibility indicated in point (1) is fairly straightforward. If (1) is true, then humanity will almost certainly fail to reach the posthuman level; virtually no species at our level of development becomes posthuman, and it is hard to find any justification for thinking that our own species has any advantage or special protection against future catastrophes. Conditional on (1), therefore, we must assign a high credence to the Doom hypothesis (DOOM), i.e., the hypothesis that humanity will disappear before reaching the post-human level:

Cr(DOOM | f_P ≈ 0) ≈ 1

One can imagine hypothetical situations in which we have evidence that overrides our knowledge of f_P. For example, if we discovered that a giant asteroid was about to hit us, we could conclude that we had been exceptionally unlucky. In that case we might assign the DOOM hypothesis a higher credence than our expectation of the proportion of human-level civilizations that fail to achieve post-humanity. In our case, however, we do not seem to have any reason to think that we are special in this respect, for better or worse.

Assumption (1) does not by itself mean that we are likely to become extinct. It suggests that we are unlikely to reach the posthuman phase. This possibility could mean, for example, that we will remain at or slightly above our current level for a long time before we die out. Another possible reason for (1) to be true is that technological civilization is likely to collapse. At the same time, primitive human societies will remain on Earth.

There are many ways in which humanity can become extinct before it reaches the posthuman phase of development. The most natural explanation for (1) is that we will die out as a result of the development of some powerful but dangerous technology. One candidate is molecular nanotechnology, the mature stage of which will allow the creation of self-replicating nanorobots that can feed on dirt and organic matter - something like a mechanical bacterium. Such nanorobots, if designed with malicious intent, could lead to the death of all life on the planet.

The second alternative in the conclusion of the simulation argument is that the proportion of posthuman civilizations interested in running ancestral simulations is negligible. For (2) to be true, there must be a strong convergence among the development paths of advanced civilizations. If the number of ancestor simulations produced by interested civilizations is exceptionally large, then the rarity of such civilizations must be correspondingly extreme. Virtually no posthuman civilization decides to use its resources to create a large number of ancestral simulations. Moreover, virtually all posthuman civilizations must lack individuals who have the appropriate resources and interest to run ancestral simulations, or must have enforced laws that prevent such individuals from acting on their desires.

What force could lead to such convergence? One might argue that advanced civilizations all converge on a trajectory that leads to the recognition of an ethical prohibition against running ancestral simulations, because of the suffering experienced by the simulation's inhabitants. However, from our present point of view, it does not seem obvious that the creation of the human race is immoral. On the contrary, we tend to regard the existence of our race as having great ethical value. Moreover, convergence of ethical views on the immorality of running ancestral simulations is not enough by itself: it must be combined with a convergence of civilizational social structure such that activities considered immoral are effectively prohibited.

Another possible point of convergence is that almost all individual posthumans in almost all posthuman civilizations develop in a direction in which they lose the desire to run ancestral simulations. This would require significant changes in the motivations that drove their human predecessors, since there are certainly many people who would like to run ancestor simulations if they had the opportunity. But perhaps many of our human desires will seem foolish to anyone who becomes posthuman. It may be that the scientific value of ancestral simulations to a posthuman civilization is negligible (which does not sound too implausible given their incredible intellectual superiority), and it may be that posthumans regard recreational activity as a very inefficient way of obtaining pleasure - one that can be had much more cheaply through direct stimulation of the brain's pleasure centers. One conclusion that follows from (2) is that posthuman societies will be very different from human societies: they will not have relatively wealthy independent agents who possess the full range of human-like desires and are free to act on them.

The possibility described by inference (3) is the most intriguing from a conceptual point of view. If we live in a simulation, then the cosmos we observe is only a small piece of the totality of physical existence. The physics of the universe where the computer is located may or may not resemble the physics of the world we observe. While the world we observe is "real" to some degree, it is not located at the fundamental level of reality. It may be possible for simulated civilizations to become posthuman. They could, in turn, run their own ancestor simulations on the powerful computers they build in their simulated universe. Such computers would be "virtual machines," a very common concept in computer science. (JavaScript web applets, for example, run in a virtual machine - a simulated computer - on your computer.)

Virtual machines can be nested inside one another: it is possible to simulate a virtual machine simulating another machine, and so on, with an arbitrarily large number of steps. If we can create our own ancestral simulations, that would be strong evidence against points (1) and (2), and we would therefore have to conclude that we are living in a simulation. Moreover, we would have to suspect that the posthumans who ran our simulation are also simulated beings themselves, and their creators, in turn, may also be simulated beings.

Reality, therefore, may contain several levels. Even if the hierarchy must end at some level - the metaphysical status of this claim is quite obscure - there may be enough room for a large number of levels of reality, and the number may increase over time. (One consideration that speaks against such a multi-level hypothesis is that the computational cost for the base-level simulators would be very large. Simulating even a single posthuman civilization could be prohibitively expensive. If so, we should expect our simulation to be turned off when we approach the posthuman level.)

Although all the elements of this system are naturalistic, even physical, it is possible to draw some loose analogies with religious conceptions of the world. In a sense, the posthumans who run the simulation are like gods to the humans inside it: the posthumans create the world we see; they have intellects superior to ours; they are omnipotent in the sense that they can interfere in the workings of our world in ways that violate its physical laws, and they are omniscient in the sense that they can monitor everything that happens. However, all the demigods, except those who live at the fundamental level of reality, are themselves subject to the actions of the more powerful gods living at deeper levels of reality.

Further rumination on these themes could culminate in a naturalistic theogony that would study the structure of this hierarchy and the constraints imposed on its inhabitants by the possibility that their actions at their own level may affect how the inhabitants of deeper levels of reality treat them. For example, if no one can be sure that he is at the basic level, then everyone must consider the possibility that his actions will be rewarded or punished, perhaps on the basis of some moral criteria, by the owners of the simulation. Life after death would be a real possibility. Because of this fundamental uncertainty, even a civilization at the basic level would have a reason to behave ethically. The fact that it has a reason to behave morally would, of course, give someone else a good reason to behave morally, and so on, forming a virtuous circle. In this way one might obtain something like a universal ethical imperative, which it would be in everyone's self-interest to obey, and which comes from "nowhere".

In addition to ancestral simulations, more selective simulations can be imagined that include only a small group of people or one individual. The rest of the people will then be "zombies" or "shadow people" - people simulated only at a level sufficient that fully simulated people do not notice anything suspicious.

It is not clear how much cheaper it would be to simulate shadow people than real people. It is not even obvious that it is possible for an entity to behave indistinguishably from a real person and yet have no conscious experience. Even if such selective simulations exist, you should not believe that you are in one unless you are sure that such simulations are much more numerous than complete simulations. There would have to be about 100 billion times more self-simulations (simulations of the life of only one consciousness) than complete ancestor simulations for the majority of simulated people to be in self-simulations.
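
The count behind the last sentence can be made explicit in a couple of lines of Python (the ~10^11 minds per complete ancestor simulation is the assumed figure implicit in the text's "100 billion"):

```python
MINDS_PER_FULL_SIMULATION = 1e11   # ~100 billion people per complete ancestor simulation (assumed)
MINDS_PER_SELF_SIMULATION = 1      # a self-simulation contains a single fully simulated mind

# For most simulated minds to sit in self-simulations, self-simulations must
# outnumber complete ancestor simulations by roughly this factor:
required_ratio = MINDS_PER_FULL_SIMULATION / MINDS_PER_SELF_SIMULATION
print(f"self-simulations needed per complete simulation: ~{required_ratio:.0e}")   # ~1e+11
```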

There is also the possibility that the simulators skip over a certain part of the mental life of the simulated beings and give them false memories of the type of experience they might have had during the missed periods. If so, one can imagine the following (stretched) solution to the problem of evil: that in reality there is no suffering in the world and that all memories of suffering are an illusion. Of course, this hypothesis can be considered seriously only in those moments when you yourself do not suffer.

Assuming we live in a simulation, what are the implications for us humans? Notwithstanding the foregoing remarks, the consequences are not particularly drastic. Our best guide to how our posthuman creators have chosen to arrange our world is the standard empirical study of the universe as we see it. The changes to most of our belief system will be rather small and benign - proportionate to our lack of confidence in our ability to understand the posthuman way of thinking.

A correct understanding of the truth of thesis (3) should not make us "crazy" or make us quit our business and stop making plans and predictions for tomorrow. The main empirical importance of (3) at the present moment seems to lie in its role in the triple conclusion above.

We should hope that (3) is true, as this reduces the likelihood of (1), but if computational constraints make it likely that simulators will turn off the simulation before it reaches the posthuman level, then our best hope is that (2) is true.

If we learn more about posthuman motivation and resource constraints, perhaps as a result of our development towards posthumanity, then the hypothesis that we are simulated will have a much richer set of empirical applications.

7. Conclusion

A technologically mature post-human civilization would have enormous computing power. Based on this, the simulation argument shows that at least one of the following is true:

  • (1) The proportion of human-level civilizations that reach the post-human level is very close to zero.
  • (2) The proportion of posthuman civilizations that are interested in running ancestor simulations is very close to zero.
  • (3) The proportion of all people with our type of experience who live in a simulation is close to one.

If (1) is true, then we will almost certainly die before we reach the posthuman level.

If (2) is true, then there must be a strongly coordinated convergence of the paths of development of all advanced civilizations, so that none of them has relatively wealthy individuals who would like to run ancestral simulations and be free to do so.

If (3) is true, then we are almost certainly living in a simulation. The dark forest of our ignorance makes it reasonable to distribute our confidence almost evenly between points (1), (2) and (3).

Unless we're already living in a simulation, our descendants will almost certainly never run an ancestor simulation.

Acknowledgements

I thank many people for their comments, and especially Amara Angelica, Robert Bradbury, Milan Cirkovic, Robin Hanson, Hal Finney, Robert A. Freitas Jr., John Leslie, Mitch Porter, Keith DeRose, Mike Treder, Mark Walker, Eliezer Yudkowsky, and the anonymous referees.

Translation: Alexey Turchin

Translator's notes:
1) Conclusions (1) and (2) are non-local. They say that either all civilizations die out, or that none of them want to create simulations. This statement extends not only to the entire visible universe, and not only to the entire infinity of the universe beyond the horizon of visibility, but also to the entire set of 10^500 universes with different properties that are possible according to string theory. In contrast, the thesis that we live in a simulation is local. General statements are much less likely to be true than particular ones. (Compare: "All people are blond" and "Ivanov is blond", or "All planets have an atmosphere" and "Venus has an atmosphere".) One exception is enough to refute a general statement. Thus, the claim that we are living in a simulation is much more likely than the first two alternatives.

2) The development of computers is not strictly necessary - dreams, for example, would be enough, dreamed by genetically modified brains specially honed for this purpose.

3) Simulation reasoning works in real life. Most of the images that enter our brains are simulations - movies, TV, the Internet, photographs, advertisements - and last but not least - dreams.

4) The more unusual the object we see, the more likely it is in the simulation. For example, if I see a terrible accident, then most likely I see it in a dream, on TV or in a movie.

5) Simulations can be of two types: simulations of an entire civilization and simulations of a personal history or even a single episode from the life of one person.

6) It is important to distinguish simulation from imitation - it is possible to simulate a person or civilization that never existed in nature.

7) Supercivilizations should be interested in creating simulations in order to explore different versions of their past and thus different alternatives for their development. And also, for example, to study the average frequency of other super-civilizations in space and their expected properties.

8) The simulation problem intersects with the problem of the philosophical zombie (that is, of creatures devoid of qualia, like shadows on a TV screen). Simulated creatures do not have to be philosophical zombies. If most simulations are populated by philosophical zombies, then the reasoning does not work (since I am not a philosophical zombie).

9) If there are several levels of simulation, then the same level-2 simulation can be used in several different level-1 simulations by those living in the level-0 simulation, in order to save computing resources. It is like many different people watching the same movie. That is, suppose I created three simulations, and each of them created 1000 subsimulations. Then I would have to simulate 3003 simulations on my supercomputer. But if those simulations created essentially the same subsimulations, then it is enough for me to simulate only 1000 subsimulations, presenting the result of each of them three times. That is, I will run 1003 simulations in total. In other words, one simulation can have multiple hosts.
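
A tiny counting sketch of this note in Python (the numbers are those given in the note itself):

```python
top_level_sims = 3       # level-1 simulations I run
subs_per_sim = 1000      # level-2 simulations each of them runs

naive_total = top_level_sims + top_level_sims * subs_per_sim   # 3 + 3000 = 3003
# If all three level-1 simulations reuse the same 1000 subsimulations,
# each subsimulation is computed once and its result is shown three times:
shared_total = top_level_sims + subs_per_sim                   # 3 + 1000 = 1003

print(naive_total, shared_total)
```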

10) Whether you are living in a simulation or not can be judged by how much your life differs from the average in the direction of the unique, interesting, or important. The assumption here is that making simulations of interesting people living in interesting times of important change is more attractive to the authors of simulations, whether their goals are recreational or exploratory. 70% of the people who have ever lived on Earth were illiterate peasants. However, the effect of observational selection must be taken into account here: illiterate peasants could not ask themselves whether they were in a simulation or not, and therefore the fact that you are not an illiterate peasant does not prove that you are in a simulation. Probably the epoch around the Singularity will be of the greatest interest to the authors of simulations, since in its vicinity an irreversible bifurcation of the paths of civilizational development is possible, which can be influenced by small factors, including the characteristics of a single person. For example, I, Alexei Turchin, believe that my life is so interesting that it is more likely being simulated than real.

11) Being in a simulation increases our risks: a) the simulation can be turned off; b) the authors of the simulation can experiment on it, creating deliberately unlikely situations - an asteroid impact, etc.

12) It is important to note Bostrom's words that at least one of the three is true. That is, situations are possible when some of the points are true at the same time. For example, the fact that we will die does not exclude the fact that we are living in a simulation, and that most civilizations do not create simulations.

13) Simulated people and the world around them may not resemble any real people or any real world at all; what matters is that they think they are in a real world. They cannot notice the differences, because they have never seen any real world at all. Or their ability to spot differences is dulled, as happens in a dream.

14) It is tempting to find signs of simulation in our world, manifested as miracles. But miracles can happen without simulation.

15) There is a model of the world order that removes the proposed dilemma (though not without its own controversy). Namely, the Castanedan-Buddhist model, in which the observer generates the entire world.

16) The idea of simulation implies simplification. If the simulation is accurate down to the atom, it will be the same as reality. In this sense, one can imagine a situation in which a certain civilization has learned to create parallel worlds with given properties. In such worlds it can set up natural experiments, creating different civilizations. That is, this is something like the cosmic zoo hypothesis. These created worlds will not be simulations, since they will be quite real, but they will be under the control of those who created them, who can turn them on and off. And there will also be more of them than base worlds, so a statistical reasoning similar to the simulation argument applies here as well.
A chapter from the article "UFOs as a global risk factor":

UFOs are glitches in the Matrix

According to N. Bostrom (Nick Bostrom. Simulation Proof. www.proza.ru/2009/03/09/639), the probability that we live in a completely simulated world is quite high. That is, our world may be entirely modeled on a computer by some super-civilization. This allows the authors of the simulation to create any images within it, with goals incomprehensible to us. In addition, if the level of control over the simulation is low, then errors will accumulate in it, as in a computer, and failures and glitches will occur that can be noticed. The men in black then become like Agent Smiths, erasing the traces of glitches. Or some inhabitants of the simulation may gain access to undocumented capabilities. This explanation allows us to account for any possible set of miracles, but it does not explain anything specific: why we see precisely these manifestations, and not, say, pink elephants flying upside down. The main risk is that the simulation may be used to test the system under extreme, that is, catastrophic, conditions, and that the simulation will simply be turned off if it becomes too complex or completes its function.
The main question here is the degree of control over the Matrix. If we are talking about a Matrix under very tight control, then the probability of unplanned glitches in it is small. If the Matrix is simply launched and then left to fend for itself, then glitches will accumulate in it, just as glitches accumulate in an operating system as it runs and as new programs are added.

The first option is realized if the authors of the Matrix are interested in all the details of the events taking place in it. In this case, they will strictly track all glitches and carefully erase them. If they are interested only in the final result of the Matrix's operation, or in one of its aspects, then their control will be less strict. For example, when a person starts a chess program and leaves for the day, he is interested only in the program's result, not in the details. In the course of its operation the chess program may play through many virtual games - in other words, virtual worlds. In other words, the authors here are interested in the statistical result of running very many simulations, and they care about the details of any one simulation only insofar as glitches do not affect the final result. And in any complex information system a certain number of glitches accumulates, and as the complexity of the system grows, the difficulty of removing them grows exponentially. It is therefore easier to put up with the presence of some glitches than to eliminate them entirely.

Further, it is obvious that the set of weakly controlled systems is much larger than the set of tightly controlled ones, since weakly controlled systems are launched in large numbers when they can be produced VERY cheaply. For example, there are far more virtual chess games than real grandmasters, and far more home operating systems than government supercomputers.
Thus, glitches in the Matrix are acceptable as long as they do not affect the overall running of the system. In the same way, in real life, if the font in my browser started to be displayed in a different color, I would not restart the whole computer or reinstall the operating system. But we see the same thing in the study of UFOs and other anomalous phenomena! There is a certain threshold above which neither the phenomena themselves nor their public resonance can jump. As soon as certain phenomena begin to approach this threshold, they either disappear, or men in black appear, or it turns out to have been a hoax, or someone dies.

Note that there are two types of simulations: complete simulations of the whole world and I-simulations. In the latter, the life experience of only one person (or a small group of people) is simulated. In an I-simulation you are more likely to find yourself in an interesting role, whereas in a complete simulation 70 percent of the characters are peasants. For reasons of observational selection, I-simulations should be much more frequent, although this consideration needs further thought. But in an I-simulation the UFO theme must already be built in, like the entire prehistory of the world. And it may be planted deliberately - to explore how I will handle this topic.

Further, in any information system viruses sooner or later appear - that is, parasitic information units aimed at self-replication. Such units can also appear in the Matrix (and in the collective unconscious), and the built-in anti-virus program must work against them. However, we know from the experience of using computers and from the experience of biological systems that it is easier to put up with the presence of harmless viruses than to exterminate every last one of them. Moreover, the complete destruction of viruses often requires wiping the system.

Thus, it can be assumed that UFOs are viruses that use glitches in the Matrix. This explains the absurdity of their behavior, since their intelligence is limited, as well as their parasitism on people - since each person is allocated a certain amount of computing resources in the Matrix that can be used. It can be assumed that some people took advantage of glitches in the Matrix to achieve their goals, including immortality, but the same thing was done by beings from other computing environments, for example, simulations of fundamentally different worlds, which then penetrated into our world.
Another question is the level of depth of the simulation we are most likely to be in. It is possible to simulate the world down to the atom, but this would require enormous computational resources. The other extreme is a first-person shooter: in it, a three-dimensional image of the area is drawn as needed, when the main character approaches a new place, based on the general plan of the area and certain general principles. Or blanks are used for some places, while the exact rendering of others is ignored (as in the movie "13th Floor"). Obviously, the more accurate and detailed the simulation, the less often it will have glitches. On the other hand, simulations made "hastily" will contain many more glitches, but will consume immeasurably fewer computing resources. In other words, for the same expenditure one could make either one very accurate simulation or a million approximate ones.

Further, we assume that the same principle applies to simulations as to other things: namely, that the cheaper a thing is, the more common it is (there are more pieces of glass in the world than diamonds, more meteorites than asteroids, etc.). Thus, we are more likely to be inside a cheap, simplified simulation than inside a complex, high-precision one. One could object that in the future unlimited computing resources will be available, and therefore any actor will make sufficiently detailed simulations. However, this is where the nesting-doll effect comes into play. Namely, an advanced simulation can create its own simulations - call them second-level simulations. Say an advanced simulation of the world of the mid-21st century (created, say, in the real 23rd century) can create billions of simulations of the world of the early 21st century. In doing so it will use mid-21st-century computers, which will be more limited in computing resources than 23rd-century computers. (And the real 23rd century will also economize on the accuracy of its subsimulations, since they are not important to it.) Therefore, all the billion simulations of the early 21st century that it creates will be very economical in terms of computing resources.

Because of this, the number of primitive simulations, as well as of simulations that are earlier with respect to simulated time, will be a billion times greater than the number of more detailed and later simulations; hence an arbitrary observer has a billion times greater chance of finding himself in an earlier (at least until the advent of supercomputers capable of creating their own simulations), cheaper, and more glitch-prone simulation. And according to the self-sampling assumption, everyone should consider himself a random representative of the set of beings similar to himself, if he wants to obtain the most accurate probabilistic estimates.
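
A minimal sketch of the counting argument in this paragraph, in Python (the billion-to-one ratio is the illustrative figure used above, not a measured quantity):

```python
detailed_sims = 1
cheap_sims_per_detailed = 10**9   # "a billion times greater", as assumed above

total_sims = detailed_sims + detailed_sims * cheap_sims_per_detailed
p_cheap = (detailed_sims * cheap_sims_per_detailed) / total_sims

# By the self-sampling assumption, a random simulated observer should expect
# to be in a cheap, early, more glitch-prone simulation with this probability:
print(f"P(cheap simulation) = {p_cheap:.9f}")   # ~0.999999999
```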

Another possibility is that UFOs are deliberately introduced into the Matrix to fool the people living in it and to see how they react, because most simulations, I think, are designed to model the world under some special, extreme conditions.

Yet this hypothesis does not explain the entire set of specific manifestations of UFOs.
The risk here is that if our simulation gets overloaded with glitches, then the owners of the simulation might decide to reload it.

Finally, we can assume the "self-generation of the Matrix" - that is, that we live in a computing environment, but this environment spontaneously originated in some way at the origins of the existence of the universe without the mediation of any creator beings. In order to make this hypothesis more convincing, it should first be remembered that, according to one of the descriptions of physical reality, the elementary particles themselves are cellular automata - something like stable combinations in the game "Life". en.wikipedia.org/wiki/Life_(the game)


About Ontol

Ontol is a map that allows you to choose the most effective route for shaping your worldview.

Ontol is based on a superposition of subjective assessments and reflections on texts people have read (ideally, millions or billions of people). Each person participating in the project decides for himself which are the top 10/100 most important things he has read or watched on significant aspects of life (thinking, health, family, money, trust, etc.) over the past 10 years or over his whole life - restricted to what can be shared in 1 click (texts and videos, not books, conversations, or events).

Ontol's ideal end result is 10x-100x faster access (compared to existing analogues: Wikipedia, Quora, chats, channels, LiveJournal, search engines) to meaningful texts and videos that will affect the reader's life ("Oh, how I wish I had read this text earlier! Most likely, my life would have gone differently"). Free for all inhabitants of the planet and in 1 click.

Source: habr.com
