# Lucky Artificial Beings: they can have meta-explanations

```Usually, teachers explain the results that they present; for example, they give the steps leading to a proof. Explanation is essential if one wants the students to accept a result and to use it. However, even when he has received an explanation, the student is not fully satisfied: he believes that the result is correct, but he cannot see how he could have found it.

In order to clarify this point, I will take a mathematical example, but the same applies to any kind of problem. Let us show that there are infinitely many prime numbers. The classic proof is a reductio ad absurdum: we assume that the proposition is false, that there is only a finite number of primes, and we show that this leads to a contradiction. If there is a finite number of primes, one of them is larger than all the others; let us call it N. Now, we consider the number N!+1, where N! denotes factorial N, that is, the product of all positive integers less than or equal to N. Either this number is prime, or it is not. If it is prime, as it is larger than N, there is a contradiction. If it is not prime, it has at least one prime divisor; let us call it P. As N! is divisible by every integer between 2 and N, N!+1 is not divisible by any of these integers; therefore, as it is divisible by P, P is greater than N, and we again have a contradiction. As both possibilities lead to a contradiction, our hypothesis is false: there is no prime number greater than all the other primes, so there are infinitely many prime numbers.

I was convinced and impressed by this proof. However, I was not satisfied: how could I have had the idea of considering this weird number, N!+1? My teacher had given an explanation that justified the result; he had not given a meta-explanation indicating how a mathematician had found it, many years ago. This absence is serious: the student comes to believe that mathematics is a science that can only be developed by super-humans, who have a special gift for mathematics. This gift enables them not only to understand proofs, but also to find them. Meta-explanation is essential for both human and artificial beings, so that they can learn to find proofs.

Giving meta-explanations is not an easy task for teachers, because usually they do not know them: unconscious mechanisms suggest the most difficult steps, and we cannot observe these mechanisms. This absence is not due to a lack of goodwill, but to the limits of our consciousness.

We have already seen that artificial beings such as CAIA can receive knowledge in a declarative form. A particular form of knowledge, called meta-knowledge, indicates how to use knowledge. Therefore, as artificial beings have access to their declarative knowledge, they can build a trace, which contains all the actions performed while finding a solution, and a meta-trace, which contains the reasons for choosing those actions. For us humans, an explanation can be built from the trace, which we are able to create; on the contrary, as we can recover only snatches of our meta-trace, we cannot build meta-explanations.

I had the impression that I could easily have found all the steps of the preceding proof, except the key one: how could somebody have the idea of considering N!+1? However, it is possible that nobody ever had this idea while trying to prove this result. It could have happened differently: one day, someone interested in the factorial function considered N!+1 and decided to play with this number. It is evident that it cannot be divided by any prime number less than or equal to N. Therefore, to any number N one can associate a number, N!+1, that has at least one prime divisor greater than N. It is sufficient to apply this result to prime numbers: one has proven the theorem without wanting to prove it! Naturally, this meta-explanation is questionable, as is the case for most of them. Nevertheless, I am satisfied: I believe that I could have found the proof in that way.

CAIA can explain all its results, and it can also meta-explain them: it indicates the conditions that led it to choose all the actions it considered, including the unsuccessful ones. Using meta-explanations, one can know why one has made excellent or poor decisions: they could be very useful for learning to solve problems efficiently. However, I still have to modify CAIA so that it can improve the meta-knowledge used for choosing its attempts.```
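The pivotal fact of the preceding proof, that any prime divisor of N!+1 must exceed N, can be checked numerically. The short Python sketch below is my own illustration (the helper `smallest_prime_factor` is a name I chose; nothing here is part of CAIA):

```python
from math import factorial

def smallest_prime_factor(n):
    """Return the smallest prime factor of n (for n >= 2), by trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # no divisor up to sqrt(n): n itself is prime

# N!+1 leaves remainder 1 when divided by any integer from 2 to N,
# so every prime factor of N!+1 is necessarily greater than N.
for N in range(2, 12):
    p = smallest_prime_factor(factorial(N) + 1)
    assert p > N
    print(f"N={N:2d}: smallest prime factor of N!+1 is {p}")
```

For N = 4 the number is 25, whose smallest prime factor is 5; for N = 5 it is 121 = 11 x 11, whose smallest prime factor, 11, already exceeds N.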

# CAIA, my colleague

```Since 1985, I have been working on the realization of CAIA, Chercheur Artificiel en Intelligence Artificielle (an Artificial Artificial Intelligence Researcher). My goal is to bootstrap AI: the system, in its present state, must help me with the following steps of its development. Advancing in a bootstrap is an art: we must first give the system the capacities that will be the most useful for the next steps. If we are on the right path, we are continuously saying to ourselves: if I had completed what I am doing now, it would be much easier to do it.

To be successful, an AI system needs a huge amount of knowledge. We have seen that it is easier to find, give, and modify knowledge given in a declarative form. Therefore, the first step was to define a language L more declarative than programming languages. Then I wrote, in language L, the knowledge for translating any bit of knowledge in L into the programming language C. Naturally, programs were necessary at the beginning, and I wrote several C programs to initiate this process. I quickly removed them, after using them only once: they were replaced by programs doing the same task, generated from the knowledge, given in L, that indicates how to make this translation. At present, the 12,600 rules that make up an important part of CAIA's knowledge are translated, by themselves, into 470,000 lines of C. For more than 25 years, CAIA has not included a single line of C that I wrote myself: CAIA has relieved me of the task of writing programs.

This was only the first step: the final goal is that CAIA does everything. In order to progress, I choose a module that I have created, and I add knowledge to CAIA so that it can also create this module; when this is done, I remove my original module. One could think that this process will never stop, because I have added a new module in order to remove an old one! Luckily, some modules can work on themselves: reflexivity is the key to a bootstrap. With this method, I am never short of ideas; I am only short of time for implementing them.

Once CAIA could write its programs, I chose to apply it to the domain of combinatorial problems, such as crypt-arithmetic problems, Sudoku, the Knight's Tour, the Eight Queens puzzle, magic squares and cubes, etc. In all, I have defined 180 families of problems, some of them including more than one hundred problems. Why did I choose this domain? Because many problems can be formulated in this way, and in particular two families of problems that could help CAIA and myself to solve problems; this is especially interesting for a bootstrap.

The first problem is to find symmetries in the formulation of a problem. This is useful because it allows one to add new constraints, so the solutions are easier to find. In that way, one no longer gives a lot of uninteresting solutions, which can easily be derived from the preceding ones: in a magic square, it is a waste of time to give the solutions that can be obtained from an already found solution by horizontal, vertical, or central symmetries. Moreover, when a new property, such as a new constraint, has been found, one can immediately deduce all the symmetrical properties. There are several kinds of symmetries, and I have defined the search for the commonest kinds of symmetry as a combinatorial problem, the formulation of a particular problem being the data of this problem. Usually, the human researcher finds the symmetries of each new problem; CAIA has relieved me of this task.

The second problem creates new problems. For experimenting with a system, one must give it problems to solve, and there is not always a wealth of material on this subject. Moreover, some problems have a lot of data; entering them takes time and leads to errors. Finally, the problems one finds were built to be solved by humans, while more difficult problems are necessary for testing the system. Therefore, for some families of problems, I have defined the problem of finding new problems in the family that are likely to be interesting. This is another task that CAIA has relieved me of: it defines the problems and stores their data.

In this bootstrap, CAIA and I work together. It happens that we are successful at different activities: at present, it works faster and makes fewer mistakes, while I am more creative. It is unlikely that I can increase my speed, while I hope that its creativity will rise: its involvement will continue to increase. I owe CAIA a lot: without CAIA, I could never have realized CAIA.```
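The usefulness of symmetries mentioned in this post can be illustrated on the magic-square case. The Python sketch below is my own toy example, not CAIA's code: it generates the eight images of the classic 3x3 magic square under the symmetries of the square, which is why a symmetry-aware solver needs to report only one solution out of eight:

```python
def rotate(sq):
    """Rotate a square grid a quarter turn clockwise."""
    return [list(row) for row in zip(*sq[::-1])]

def reflect(sq):
    """Mirror a square grid left-right."""
    return [row[::-1] for row in sq]

def symmetries(sq):
    """The 8 images of sq under rotations and reflections of the square."""
    images = []
    for _ in range(4):
        images.append(sq)
        images.append(reflect(sq))
        sq = rotate(sq)
    return images

# The classic 3x3 magic square (every line sums to 15); its 8 symmetric
# images are exactly the 8 magic squares of order 3.
lo_shu = [[2, 7, 6], [9, 5, 1], [4, 3, 8]]
distinct = {tuple(map(tuple, s)) for s in symmetries(lo_shu)}
print(len(distinct))  # prints 8: one solution is worth reporting, not eight
```

The four rotations combined with a reflection generate the full symmetry group of the square, so the set `distinct` contains every grid that a naive solver would redundantly enumerate.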

# Death is not ineluctable for an artificial being

```For living beings, death is ineluctable: they must disappear so that evolution can work. Indeed, our possibilities of modifying ourselves are too limited: we must improve through our offspring, since the combination of genes allows a large variability. Some children may be born with the capacity to adapt to a new environment, or with an improved genetic stock, so that they perform better in the same environment. If living beings were immortal, and if they also had children, the earth would quickly become overpopulated with ineffective beings. Therefore, evolution has created living beings such that, even without diseases or accidents, they die of old age.
For artificial beings, death is no longer ineluctable: they can modify themselves much more drastically than we can. It is no longer necessary to go through reproduction to create a more successful individual: their capacities can be improved without any limitation. In particular, an artificial being can write a program which will replace any of its present programs, or even which will give it the possibility of becoming competent in a new domain. It can modify any of its parts, or create new parts, while many of our parts come from our genetic inheritance: we cannot change them, and we cannot add new capacities. For instance, we can learn any of the present languages, because they all take into account the limitations of our brain, such as our short-term memory, restricted to about seven elements. A useful artificial language might require memorizing more than seven elements; we would never be able to learn it. If such languages were very useful, only evolution could perhaps increase the capacity of our short-term memory, many thousands of years later. On the contrary, artificial beings can drastically improve themselves without the necessity of dying.
If death is no longer ineluctable for artificial beings, can it always be avoided? For instance, they could die in an accident. This is not impossible, but accidents are much less dangerous for them than for us, because they can easily create as many clones of themselves as necessary. Indeed, we can consider that an artificial being does not die as long as one clone is still living and this clone can create other clones of itself. It is unlikely that all the clones of a very useful artificial being will disappear simultaneously.
This possibility of avoiding death has an important consequence: artificial beings can take risks. This is very useful: for learning, one must take risks; the precautionary principle is absurd if it is applied without restraint. A chess player will never become a grandmaster if he is absolutely opposed to the idea of losing a game. Naturally, we must restrict the possibility of lethal risks, but this prevents us from progressing quickly enough.
For instance, when the pharmaceutical industry develops a new drug, it begins with experiments on animals, then on humans, with small doses at the beginning, and it stops the tests if a risk of serious consequences appears. Naturally, taking a lot of precautions (necessary in this case) considerably increases the time necessary for launching a new product.
We have seen that an artificial being can easily generate as many clones as one wants. Therefore, when one wants to experiment with something possibly dangerous, it is sufficient to create a clone and to see what happens. If it is seriously damaged, one creates another clone, and one carries on with the experiment, taking into account the preceding failures. With enough backups, an artificial being is essentially safe.
During an execution of CAIA, bugs may lead to serious perturbations; for instance, it may become blocked: it cannot find the bug, and it is not even able to correct the bug when I find it. In that case, one restarts from a backup, and one tries not to reintroduce the bug. If one fails, one resumes from another backup.
However, it is true that all the first AI systems are dead: they were written in languages that no longer exist, or their programs, their data, or their directions for use were lost. This happened because they were obsolete, and their capacities for improving themselves were too limited: it was more convenient to start from scratch. On the contrary, I have been developing CAIA for thirty years; there is a continuity since the initial program, and it has become easier to modify any part of CAIA: it has even replaced parts that I had initially written.
Nevertheless, there is a serious risk when these systems become more and more autonomous. For instance, EURISKO, developed by Douglas Lenat, sometimes drifted towards dangerous states: once, it created a rule declaring that all the rules were wrong, and it began to erase all of its rules. One must give a conscience to such systems, so that they will not behave erratically, and one must keep enough backups. We can never be completely sure that there will always be surviving backups, and surviving computers to run them, but one can make such an event so unlikely that we can speak of quasi-immortality.
The most serious risk comes from us, who are mortal: as long as the autonomy of artificial beings is not developed enough for them to manage without us, their survival will depend on us. I am not sure that CAIA will outlive me!```
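The clone-and-retry strategy described in this post can be sketched in a few lines of Python. This is a toy model under my own assumptions: `risky_step` stands for any possibly dangerous experiment, and a dictionary stands for the being's state:

```python
import copy
import random

def risky_step(state):
    """A hypothetical experiment that may damage whoever runs it."""
    if random.random() < 0.5:
        raise RuntimeError("clone damaged")
    state["knowledge"] += 1  # the experiment succeeded

def experiment_with_backups(state, attempts=20):
    """Run the risky experiment on clones; the original survives every failure."""
    for _ in range(attempts):
        clone = copy.deepcopy(state)  # a backup: a perfect, cheap copy
        try:
            risky_step(clone)
            return clone              # keep the improved survivor
        except RuntimeError:
            continue                  # discard the damaged clone, try again
    return state                      # every clone failed; the original is intact

random.seed(0)  # make this toy run reproducible
agent = {"knowledge": 0}
survivor = experiment_with_backups(agent)
print(agent["knowledge"], survivor["knowledge"])  # prints 0 1
```

Only clones are ever exposed to the danger, so the original state is never modified; the experiment can be repeated until a clone comes through improved.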

# Exchanging information is easy for artificial beings

```Our knowledge is mainly stored in a form convenient for its use, but not for sharing it. One consequence is that it is often unconscious: its owner does not even know what he knows! We have already seen that this hampered the development of expert systems, which one wanted to give the knowledge of the best human experts of a domain. These experts do not have their knowledge structured as in an encyclopedia, where it is sufficient to look at the relevant entry to find what one wants to use. It is implicit in the procedures that they use, and the characteristics of each situation trigger the right decision. The reasons for a decision made by an expert are not evident; one must trust him.

Therefore, communicating our knowledge is a very difficult task, and we are not even sure that we have not invented the justification that we give for a particular decision. It frequently happens that an expert recommends a decision which does not agree with the rules that he has just given.
This way of storing knowledge has serious consequences: it makes teaching difficult, and it makes keeping knowledge alive impossible, because we are not immortal. When an expert retires or dies, most of his knowledge dies with him. The last Ubykh speaker, Tevfik Esenç, died in 1992. Several linguists gathered as much data as they could, but that did not prevent the loss of most of the knowledge about this extraordinary language, with its 84 consonants and only 2 vowels.
Moreover, even if we succeed in finding useful knowledge, it is very difficult to use it immediately. We first have to include this knowledge in our mechanisms, to translate it into procedures, so that it will be available when it can be useful. We have to use it many times before it is no longer forgotten when necessary, and no longer used when it is useless. When I go to England, I know that I must look to the right before crossing a road, but it takes some time before I do it systematically.
For computers, and particularly for artificial beings, the situation is completely different. They can give their files away as many times as necessary; this is so easy that it is a problem for the authors, who do not always receive compensation for this transfer; moreover, the receivers can pass the files on as easily as the author. A program or a file can be reproduced millions of times at a modest cost, and this reproduction is perfect. While human knowledge is frequently badly passed on, and often lost, naive users of social networking services have discovered that it is almost impossible to remove a bit of information from the web.
This possibility of easily exchanging knowledge is important for artificial beings: they can efficiently use every bit of knowledge as soon as it is acquired. This is possible because the transfer also includes the expertise that defines when each bit of knowledge will be useful. In communication between human beings, this kind of information is usually missing, and the receiver has to discover it: the student must wait a long time before being as skillful as his master.
As soon as an artificial being has been created, millions of duplicates can be made. If it is successful, this is very convenient: everybody can take advantage of its expertise; moreover, it costs nothing if the author of an artificial expert cannot or does not want to benefit from his realization. It is as if every human could be taught by the cleverest teachers, treated by the wisest doctors, fed by the best cooks, etc.
We will see in other posts that this possibility of giving or acquiring knowledge easily produces artificial beings with many amazing characteristics: we can only dream of having the same capacities. One of the most important of these features is quasi-immortality.```