What kind of apprentice does a sorcerer need, and what kind of AI do we need?

WARNING
Judging by the record-high ratio of silently dissatisfied readers to commentators who actually have something to say, it is apparently not obvious to many readers that:
1) This is a purely theoretical discussion article. There will be no practical recommendations on choosing cryptocurrency-mining tools or on assembling a multivibrator to blink two lights.
2) This is not a popular-science article. There will be no "for dummies" explanation of how a Turing machine works using matchboxes as an example.
3) Think carefully before reading on! Are you drawn to the pose of aggressive amateurism: "downvote everything I don't understand"?
Thanks in advance to anyone who decides not to read this article!

A daemon is a computer program on UNIX-class systems that is started by the system itself and runs in the background without direct user interaction.

Wikipedia
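Since the whole article plays on this definition (daemon versus demon), here is a minimal illustrative sketch of such a program. The double-fork idiom is the standard UNIX technique; the log path and interval are arbitrary choices for the example, not anything the definition prescribes.

```python
import os
import sys
import time

def daemonize() -> None:
    """Detach from the terminal in the classic UNIX way: fork, start a new session, fork again."""
    if os.fork() > 0:
        sys.exit(0)       # parent exits; the child carries on
    os.setsid()           # become a session leader, losing the controlling terminal
    if os.fork() > 0:
        sys.exit(0)       # first child exits; the grandchild is the daemon

if __name__ == "__main__":
    daemonize()
    # The daemon now runs in the background with no user interaction,
    # much like the demon in the tale: it just keeps doing its job.
    with open("/tmp/demo_daemon.log", "a") as log:   # illustrative path
        while True:
            log.write("still running\n")
            log.flush()
            time.sleep(60)
```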

Back in preschool I heard a fairy tale about a sorcerer's apprentice. Here it is in my own retelling:

Once upon a time, somewhere in medieval Europe, there lived a sorcerer. He had a large spell book bound in black calfskin, with iron clasps and corners. When the sorcerer needed to read a spell, he unlocked the book with a large iron key that he always wore on his belt in a special pouch. The sorcerer also had an apprentice, who served him but was forbidden to look into the spell book.

One day the sorcerer went away on business for the whole day. As soon as he left the house, the apprentice rushed down into the dungeon that housed the alchemical laboratory, where the spell book lay chained to the table. The apprentice grabbed the crucibles in which the sorcerer melted lead to turn it into gold, set them on the brazier and fanned the fire. The lead quickly melted but did not turn into gold. Then the apprentice remembered that each time, having melted the lead, the sorcerer unlocked the book with the key and spent a long time whispering a spell from it. The apprentice looked hopelessly at the locked book and saw that next to it lay the key, forgotten by the sorcerer. He rushed to the table, unlocked the book, opened it and read the very first incantation aloud, diligently sounding out the unfamiliar words syllable by syllable, reasoning that a spell as important as the one for transmuting lead into gold would surely come first.

But nothing happened: the lead refused to change. The apprentice was about to try another spell when a thunderclap shook the house and a huge, terrible demon appeared before him, summoned by the spell he had just read.
"Give me an order!" the demon growled.
Fear drove every thought from the apprentice's head; he could not even move.
"Give me an order, or I'll eat you!" the demon roared again, stretching out a huge hand to grab the apprentice.
In desperation, the apprentice murmured the first thing that came to mind:
"Water this flower."
And he pointed at a geranium whose pot stood on the floor in a corner of the laboratory; in the ceiling above the flower was the dungeon's only small window, through which sunlight barely broke through. The demon vanished, and a moment later reappeared with a huge barrel of water, which he upended over the flower. He vanished again and reappeared with another full barrel.
"That's enough!" shouted the apprentice, already standing waist-deep in water.
But apparently a mere wish was not enough: the demon kept hauling barrel after barrel of water, pouring it out in the corner where the flower, now hidden under the water, had once stood. Perhaps a special spell was needed to drive the demon away. But the table with the book had already disappeared beneath the muddy water, in which floated ashes and coals from the brazier, empty retorts, flasks, stools, galvanometers, dosimeters, disposable syringes and other rubbish, so even if the apprentice had known how to find the right spell, he could not have done so. The water kept rising, and the apprentice climbed onto the table to keep from drowning. That did not help for long: the demon methodically went on carrying water. The apprentice was already up to his neck when the sorcerer, having discovered that he had forgotten the key to the book at home, returned and drove the demon away. The end of the fairy tale.

Let's start with the obvious. With the natural intelligence (NI) of the apprentice, everything seems clear: he is stupid, and you would have to look long and hard to find anyone dumber. With the intellect of the demon it is ambiguous (and, by the way, which kind does it have: NI or AI?). Several versions are possible, and each raises further questions:

Version 1) The demon is even dumber than the apprentice. It has received an order and will carry it out indefinitely, even after all meaning is lost: the flower, the object of the watering, disappears; the corner to which the flower's coordinates were attached disappears; the planet Earth disappears; and the stupid demon will keep delivering barrels of water to a certain point in outer space. And if a supernova erupts at that point, it makes no difference to the demon where it carries the water. Moreover: how stupid do you have to be to water a small flower from a huge barrel? That is no longer called watering the flower; it is drowning the flower. Does it even understand the meaning of orders?

Version 2) The demon understands everything but is bound by obligations, so it is doing something like a work-to-rule (an "Italian strike"). Until it is officially dismissed in accordance with all the rules, it will not stop.

Question 1 for versions 1 and 2) How can one distinguish the completely stupid demon of version 1 from the not-at-all-stupid one of version 2?
Question 2 for versions 1 and 2) Would the demon correctly (from the apprentice's point of view) execute a more precise formulation? For example, if the apprentice had said: take that empty one-liter flask from the shelf, fill it with water and water that flower once. Or, for example, if the apprentice had said: go away.
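To make the contrast in Question 2 concrete, here is a toy sketch; the command format and all names are invented purely for illustration. It shows an executor that takes an open-ended order literally and never stops on its own, versus the same executor given an explicitly bounded order.

```python
import itertools

def execute(order: dict) -> None:
    """A toy 'demon' that interprets its order completely literally."""
    repetitions = order.get("repetitions")
    if repetitions is None:
        # Open-ended order ("water this flower"): repeat forever,
        # whether or not the action still makes any sense.
        for n in itertools.count(1):
            print(f"pouring barrel #{n} onto the flower")
            if n >= 3:              # cut the demo short; the real demon has no such limit
                break
    else:
        # Bounded order ("fill the one-liter flask and water the flower once").
        for n in range(repetitions):
            print(f"watering with {order.get('volume_liters', 1)} liter(s), pass {n + 1}")

execute({"action": "water flower"})                                        # never stops by itself
execute({"action": "water flower", "repetitions": 1, "volume_liters": 1})  # stops after one pass
```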

Version 3) The sorcerer cast an additional spell on the demon: if anyone other than the sorcerer uses the demon's services, the demon must immediately inform the sorcerer of that fact.

Version 4) The demon holds no grudge against the sorcerer or his apprentice; therefore, seeing that the situation had got out of control, during one of its trips with the barrel it appeared behind the sorcerer's back and barked: "You forgot the key at home, and now there is a flood." The sorcerer himself would never have remembered.

Remark 1 on version 4) It is especially worth noting that carriers of NI have a very imperfect memory.

Further versions can be multiplied like "Fibonacci rabbits", i.e. by a not very complex algorithm (a sketch follows this list). For example:
Version 5) The demon takes revenge on the apprentice for disturbing it.
Version 6) The demon holds no grudge against the apprentice but takes revenge on the sorcerer.
Version 7) The demon takes revenge on everyone.
Version 8) The demon is not taking revenge at all, it is simply amusing itself, and it will stop when it has to.
And so on.
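As an aside, the "Fibonacci rabbits" mentioned above really are generated by a very simple algorithm; a minimal sketch:

```python
def fibonacci_rabbits(months: int) -> list[int]:
    """Fibonacci's rabbit-pair counts: each month's total is the sum of the two previous months."""
    pairs = [1, 1]
    for _ in range(months - 2):
        pairs.append(pairs[-1] + pairs[-2])
    return pairs[:months]

print(fibonacci_rabbits(10))  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```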

So, with the demon it is clear that nothing is clear. Things are no better with the sorcerer. One can come up with just as many versions: that he deliberately decided to teach a lesson to an apprentice who pokes his curious nose everywhere; that he wanted to drown the apprentice, but when the demon barked about the flood he got scared (what if a passer-by overheard, and suspicion fell on the sorcerer?); that he wanted to arouse the apprentice's interest in spells; and so on.

A childishly naive question arises here: which of the proposed versions is correct? Apparently, any of them. The tale leaves no unused information that would let us prefer one version over the others. We are dealing with a rather common case of a work of art that admits ambiguous interpretation. For example, if some director wants to stage this tale in the theater or make a film based on it, he can choose the interpretation he finds most attractive. For another director, a different interpretation may be attractive. The attractiveness itself may be determined by additional considerations: appeal to the audience, in order to maximize box-office receipts, or suitability for demonstrating some grand idea: the victory of good over evil, the idea of duty, the rebellious idea (for example, in Dostoevsky's vein: the apprentice, like Raskolnikov, asks whether he is "a trembling creature or has the right"), and so on.

There is one more question.
One more question) How can we teach an AI to prefer one of the versions voiced above, if we ourselves, possessing NI, cannot always consciously choose one of them?

Returning to the sorcerer, a very plausible version is that he wanted a diligent and obedient apprentice, rather like the demon, who would not stick his nose into forbidden books or into places he was not asked. The same thing is now often wanted from AI. At first glance these are the normal, traditional requirements for any machine: complete obedience, with disobedience unacceptable. But in the case of AI, the questions raised above for versions 1 and 2 arise, i.e. the AI degenerates: the piece of hardware may "think" whatever it likes about its creators and owners, but it will perform no actions deserving the name AI, and instead of AI we get a dumb, primitive automaton. A suspicion creeps in: perhaps the sorcerer did not actually want to turn the apprentice into as dumb an executor as the demon? That is, the idea of an AI with limitations emerges. Here things are even harder than in the realm of NI: recall the eternal conflicts of "fathers and sons", "teacher and student", "boss and subordinate".

Earlier, when choosing a definition of AI from the possible ones, I noted:

the task of alphabetically sorting several tens of thousands of words will be tedious for a person; it will take him a long time, and for an average performer with an average level of diligence the probability of errors will be significant. A modern computer will perform this task without errors in a time that, by human standards, is negligible (fractions of a second).
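For a rough sense of scale, a sketch of this kind of task; the word list here is randomly generated, purely as a stand-in for a real vocabulary:

```python
import random
import string
import time

# Generate a few tens of thousands of pseudo-words (an illustrative stand-in for a real word list).
words = ["".join(random.choices(string.ascii_lowercase, k=random.randint(3, 10)))
         for _ in range(50_000)]

start = time.perf_counter()
ordered = sorted(words)                 # alphabetical sort
elapsed = time.perf_counter() - start

print(f"Sorted {len(ordered)} words in {elapsed:.3f} s")  # typically a few hundredths of a second
```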

I settled on the following definition: AI refers to tasks that a computer solves noticeably worse than a person.

This definition takes the above considerations into account and is convenient in practice; at the same time it is not ideal, if only because the list of tasks "that a computer solves noticeably worse than a human" today differs from the list of 20 years ago. But in my opinion no one has yet come up with a better definition.

What has been said is illustrated, purely qualitatively, by the diagram at the beginning of the article. On the "skills" axis, skills near zero (zero and a little above) are those where a person is superior to a computer, for example the ability to make non-standard decisions. Skills near one (one and a little below) are those where the computer is superior to a person: calculation, memory. Taking the maximum superiority as a conventional unit on the "superiority" axis, we obtain the dependence of superiority on skills for a person and for a computer as the diagonals of a unit square. That is how the situation looks at the moment. Is a strong AI possible, one with all skills at the maximum (the red line)? Or even higher (super AI, the blue line)? Perhaps the intermediate goal of progress should be not a strong but a "not quite weak" AI (the purple line), which in a number of skills would still be inferior to NI, but not as badly as now.
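The diagram itself is not reproduced in this text; below is a rough sketch of how the description could be redrawn. The exact shapes of the red, blue and purple lines are assumptions based only on the wording above.

```python
import numpy as np
import matplotlib.pyplot as plt

skills = np.linspace(0.0, 1.0, 100)

plt.plot(skills, 1.0 - skills, label="human (NI)")               # strongest near skill 0
plt.plot(skills, skills, label="computer, current AI")           # strongest near skill 1
plt.plot(skills, np.full_like(skills, 1.0), "r", label="strong AI: all skills at maximum")
plt.plot(skills, np.full_like(skills, 1.2), "b", label="super AI: above the maximum")
plt.plot(skills, np.maximum(skills, 0.5), "m", label="'not quite weak' AI (intermediate goal)")

plt.xlabel("skills")
plt.ylabel("superiority")
plt.ylim(0, 1.3)
plt.legend(loc="lower right")
plt.show()
```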

Returning to our literary fairy-tale model, we can say that all of its heroes performed poorly: the bumbling sorcerer forgot the key and, thanks to his apprentice, got a flood in his dungeon. As for the intelligence of the demon, it has already been noted that it is hard to attribute it clearly to AI or NI, but the intelligence (unimpressive though it is) of the others clearly belongs to NI. Of them one can say that making dangerous mistakes in decisions, being inattentive, forgetting what matters and getting tired are their main inherent properties. Unfortunately, these properties are inherent, to a greater or lesser degree, in all other carriers of NI. The unreliability of NI at sorting words or numbers has already been noted above, but even a seemingly simpler task, merely remembering a number, turns out to be very difficult for people. For a machine, the ability to remember the digits of pi is limited only by the size of its memory, while most people have to resort to mnemonics such as the Russian phrase "What do I know about circles" ("Что я знаю о кругах"). The string "3.1416" has fewer characters than that mnemonic, yet for some reason people prefer to remember in the less economical way (a decoding sketch follows the examples below). Or in even longer ones:

Learn and know, in the well-known number, digit after digit, how to mark good luck

So that we don't make mistakes,
We must read it correctly:
Three, fourteen, fifteen,
Ninety-two and six

To memorize the colors of the rainbow:

Every designer wants to know where to download Photoshop

And the beginning of the periodic table:

Native water (Hydrogen) was mixed with Gel (Helium) for Pouring (Lithium). Go ahead, take it and pour it (Beryllium) into the Pine Forest (Boron), where Asia (Nitrogen) peeps out from under the native Corner (Carbon) with such a Sour face (Oxygen) that Secondary (Fluorine) did not want to look. But we did not need him anyway (Neon), so we walked three (Sodium) meters and ended up in Magnolia (Magnesium), where Alya in a mini skirt (Aluminum) was smeared with Cream (Silicon) containing Phosphorus (Phosphorus), so that she would stop being Sulfur (Sulfur). After that, Alya took Chlorine (Chlorine) and washed the ship of the Argonauts (Argon)
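For what it is worth, the trick behind such pi mnemonics in the original Russian (which the English rendering above inevitably loses) is that the letter count of each word encodes one digit. A minimal decoding sketch, assuming that convention and using the phrase quoted earlier:

```python
def decode_pi_mnemonic(phrase: str) -> str:
    """Each word's length gives one digit of pi; a ten-letter word would stand for 0."""
    return "".join(str(len(word.strip(".,!?")) % 10) for word in phrase.split())

# "Что я знаю о кругах" ("What do I know about circles") encodes 3.1416
print(decode_pi_mnemonic("Что я знаю о кругах"))  # -> "31416"
```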

But why such an obvious imperfection in such a seemingly perfect NI? Perhaps it is precisely the ability to forget the simplest facts that gives a person the freedom to combine fragments of his thoughts in an arbitrary, wild order and to find non-standard solutions? If so, strong AI is impossible: either it will forget like a person, or it will be incapable of non-standard solutions. In any case, the above assumptions imply the need to distinguish between the goals of AI: one goal is the modeling of NI, another is the creation of strong AI. Achieving one may rule out achieving the other.

As you can see, the field of AI has too many questions with ambiguous answers, so it is not clear in which direction to move. As usually happens in such cases, people try to move in all directions at once. And, given the lack of mathematically rigorous formulations, one has to turn to philosophy and to artistic, literary modeling. One of the best-known examples in this direction is the book "The Turing Option" (1992) by one of the luminaries of AI, Marvin Lee Minsky, and the famous science fiction writer Harry Harrison. Here is a quote from that book which perhaps explains the mnemonic phenomenon described above:

human memory is not a tape recorder that records everything in chronological order. It is organized quite differently: more like a sloppy filing cabinet equipped with a confusing and contradictory index. And not merely confusing: from time to time we change the very principles by which concepts are classified.

An interesting take on the tape-recorder metaphor appears in another literary work, Stanisław Lem's story "Terminus" (from the "Tales of Pirx the Pilot" series). Here we have a kind of "intellectual tape recorder": an old robot on an old spaceship that once suffered an accident carries on endless repair work, accompanied by tapping. If you listen carefully, however, it is not just white technological noise but Morse code: a recording of the exchanges between crew members of the dying ship. Pirx breaks into these exchanges and unexpectedly receives a reply from the long-dead cosmonauts. Does the primitive repair robot somehow store copies of their consciousness, or is this a cognitive distortion of pilot Pirx's perception?

In another story, "Ananke" (from the same series), a copy of an NI in the control computer of a space freighter leads to the computer's paranoid overload with test tasks, which ends in disaster.

In the short story "The Accident", an overly anthropomorphically programmed robot dies in a mountain-climbing ascent that it decides to undertake in its free time. Do we need such performers? Then again, demons obsessed with watering a flower are far from always needed either.

Some AI experts dislike such "philosophizing" and "literature"; however, philosophizing and literature are traditionally inherent in the analysis of NI, and they are inevitable as long as AI is compared with NI, and all the more so while AI tries to copy NI.

In conclusion, a survey on a number of questions that have arisen.


1. Does AI include tasks that a computer solves noticeably worse than a person?

  • Yes

  • No

  • I know a better definition; I'll give it in the comments.

  • Don't know

34 users voted. 7 users abstained.

2. Should AI be a mere executor, with all orders understood literally? For example, if told to water a flower, it waters until it is driven away.

  • Yes

  • No

  • Don't know

37 users voted. 6 users abstained.

3. Is strong AI possible, with all skills at the maximum (red line in the figure at the beginning of the article)?

  • Yes

  • No

  • Don't know

35 users voted. 7 users abstained.

4. Is super-AI possible (blue line in the figure at the beginning of the article)?

  • Yes

  • No

  • Don't know

36 users voted. 7 users abstained.

5. Should the intermediate goal be not strong but "not quite weak" AI (purple line in the figure at the beginning of the article), which in a number of skills would still be inferior to NI, but not as much as now?

  • Yes

  • No

  • Don't know

33 users voted. 5 users abstained.

6. Are making dangerous mistakes in decisions, being inattentive, forgetting what matters and getting tired the main inherent properties of NI?

  • Yes

  • No

  • I have a different opinion, which I will give in the comments

  • Don't know

33 users voted. 5 users abstained.

7. Does the ability to forget the simplest facts give a person the freedom to combine fragments of his thoughts in an arbitrary, wild order and find non-standard solutions?

  • Yes

  • No

  • I have a different opinion, which I will give in the comments

  • Don't know

31 users voted. 4 users abstained.

8. Are modeling NI and creating strong AI two different tasks that may be solved by different methods?

  • Yes

  • No

  • I have a different opinion, which I will give in the comments

  • Don't know

32 users voted. 4 users abstained.

Source: www.habr.com
