The possibility of being bored is very useful, although we dislike being in this state. Boredom happens when we are doing repetitive tasks, or when we receive only incomprehensible or already-known information. When we are bored, we try to put an end to the situation so that our resources are put to better use. Therefore, our interlocutors avoid boring us so that we will listen to their message. In the same way, when a situation is repetitive, we try to automate it so that it will stop. That is how Humphrey Potter considerably improved the atmospheric engine. This young man operated such a machine: his only task was to open and close valves at the right moments of the cycle. As he was bored, he had the idea of linking the valves to the beam of the machine, so that it would operate its own valves automatically.

When we have a useful gift, we can ask ourselves whether an artificial being could also have this capacity. If it could be bored, it would not spend considerable amounts of time executing useless sequences of instructions. As it never complains, we are rarely aware of this waste. We treat artificial beings like slaves: we demand that they execute a loop thousands of times even when it is not necessary. It would be better to give them the possibility of being bored, so that they would try to bypass this loop. Many years ago, when computer time was scarce and very expensive, I debugged my programs before running them on the computer: I executed them as if I were the computer, writing the values of the variables on a sheet of paper. I saved a lot of time by skipping many instructions! As long as the computer time seems reasonable, we do not worry about the useless tasks that we order computers to perform, misusing their extraordinary potential for information processing.

To use boredom, an artificial being must first be aware of being bored, then act to stop it. CAIA includes several actors; one of them, the adviser, examines what it has done, without going into details. It can realize that it has found some interesting results, which will decrease its boredom; conversely, it will be bored after a sequence of results similar to those already known. We humans are particularly interested in records: there is even a book compiling them. In the same way, CAIA notices when it has set a new record, such as solving a particular crypt-addition with a simpler solution than those already known. Another record can be finding a problem more difficult than the other problems of the same family. For instance, when it solves a Sudoku problem, it usually generates a tree with one or two leaves; therefore, the first time it generated a Sudoku problem whose tree had five leaves, it noticed this, and naturally kept the problem. CAIA also creates hopes, such as solving a particular problem in less than one minute, or another problem with a tree of fewer than ten leaves. When a hope is fulfilled, this is unexpected: it is a pleasant surprise. CAIA also foresees that some events must happen: for instance, that a particular problem must have exactly seven solutions. If such an event does not occur, for instance when a method leads to only six solutions, this is an unpleasant surprise. There must be a mistake somewhere, but it is interesting. Conversely, CAIA tries to evaluate whether its activities are repetitive. For example, when it creates a new problem, it measures the difficulty of finding a solution for this problem.
If all the problems created for some family have almost the same difficulty, it will be less motivated to add other problems to this family. When interesting events happen, a human being is not bored; when there are few of them, boredom sets in. In the same way, CAIA's adviser knows whether a situation is boring or interesting. The difficulty is to use this information. It has only one method: stopping a boring activity, such as dropping the generation of problems in a family where all the recent problems had little interest. We humans often use this method when we are bored: we avoid working in the domain if we can, or we think of something else. When everything is boring, a more drastic action would be to stop oneself forever, as people do who kill themselves because they are bored. CAIA does not yet have this possibility. Boredom is an efficient way of helping CAIA use its potential for research in AI. It allows it to detect promising domains, and to avoid wasting its time in areas where it is unlikely to discover useful results.
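To make this mechanism concrete, here is a minimal sketch in C of such an adviser; the names, the scoring rule, and the thresholds are my own illustrative assumptions, not CAIA's actual knowledge:

```c
/* Illustrative sketch of a boredom-driven adviser; the scoring rule,
 * names, and thresholds are invented, not CAIA's actual mechanism. */
#include <stdio.h>
#include <stdlib.h>

#define BOREDOM_LIMIT 5        /* give up after this many dull results in a row */

typedef struct {
    double hardest;            /* record: most difficult problem in this family */
    double easiest;            /* record: simplest problem in this family */
    int    dull_streak;        /* consecutive results similar to known ones */
} Family;

/* A result is interesting if it sets a record (harder, or simpler, than
 * anything already known); otherwise it increases boredom. */
static int assess(Family *f, double difficulty)
{
    if (difficulty > f->hardest) {             /* new record: hardest so far */
        f->hardest = difficulty;
        f->dull_streak = 0;
        return 1;
    }
    if (difficulty < f->easiest) {             /* new record: simplest so far */
        f->easiest = difficulty;
        f->dull_streak = 0;
        return 1;
    }
    f->dull_streak++;                          /* more of the same */
    return 0;
}

int main(void)
{
    Family sudoku = { 1.0, 1.0, 0 };
    srand(42);
    for (int i = 0; i < 100; i++) {
        double d = (rand() % 200) / 100.0;     /* stand-in for a measured difficulty */
        if (assess(&sudoku, d))
            printf("problem %d is interesting (difficulty %.2f)\n", i, d);
        if (sudoku.dull_streak >= BOREDOM_LIMIT) {
            printf("bored: dropping this family after problem %d\n", i);
            return 0;
        }
    }
    return 0;
}
```

The essential point is only that interest is measured against what is already known, and that a long dull streak triggers the one action available: dropping the family.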
Lucky Artificial Beings: they can have meta-explanations
Usually teachers explain the results that they set out; for example, they give the steps leading to a proof. Explanation is essential if one wants students to accept a result and to use it. However, even after receiving an explanation, the student is not fully satisfied: he believes that the result is correct, but he cannot see how he could have found it himself. In order to clarify this point, I will take a mathematical example, but it applies to any kind of problem.

Let us show that there is an infinity of prime numbers. The classic proof is a reductio ad absurdum: we assume that the proposition is false, that there is only a finite number of primes, and we show that this leads to a contradiction. If there is a finite number of primes, one of them is larger than all the others; let us call it N. Now, we consider the number N!+1, N! representing factorial N, that is, the product of all positive integers less than or equal to N. Either this number is prime, or it is not. If it is prime, as it is larger than N, there is a contradiction. If it is not prime, it has at least one prime divisor; let us call it P. As N! is divisible by every integer from 2 to N, N!+1 is divisible by none of them; therefore, as it is divisible by P, P is greater than N, and we also have a contradiction. As both possibilities lead to a contradiction, our hypothesis is false: there is no prime number greater than all the other prime numbers, so there is an infinity of prime numbers.

I was convinced and impressed by this proof. However, I was not satisfied: how could I have had the idea of considering this weird number, N!+1? My teacher had given an explanation that justified the result; he had not given a meta-explanation indicating how a mathematician found it, many years ago. This absence is serious: the student comes to consider mathematics a science that can only be developed by super-humans who have a special gift for it. This gift enables them not only to understand proofs, but also to find them. Meta-explanation is essential for both human and artificial beings, so that they can learn to find proofs. Giving meta-explanations is not an easy task for teachers, because usually they do not know them: unconscious mechanisms suggest the most difficult steps, and we cannot observe these mechanisms. This absence is not due to a lack of goodwill, but to the limitations of our consciousness.

We have already seen that artificial beings such as CAIA can receive knowledge in a declarative form. A particular form of knowledge, called meta-knowledge, indicates how to use knowledge. Therefore, as artificial beings have access to declarative knowledge, they can build a trace, which contains all the actions necessary for finding a solution, and a meta-trace, which contains the reasons for choosing those actions. We humans can produce an explanation, built from a trace that we can create; on the contrary, as we can recover only snatches of our meta-trace, we cannot build meta-explanations. I had the impression that I could easily have found all the steps of the preceding proof, except the key: how could somebody have the idea of considering N!+1? However, it is possible that nobody ever had the idea of considering this number while trying to prove this result. It could have happened differently: one day, someone interested in the factorial function considered N!+1 and decided to play with this number.
It is evident that N!+1 cannot be divided by any prime number less than or equal to N. Therefore, to any number N one can associate the number N!+1, all of whose prime divisors are greater than N. It is sufficient to apply this result to prime numbers: one has proven the theorem without wanting to prove it! Naturally, this meta-explanation is questionable, as most of them are. Nevertheless, I am satisfied: I believe that I could have found this proof in that way. CAIA can explain all its results, and it can also meta-explain them: it indicates the conditions that led it to choose every action that it considered, including the unsuccessful ones. Using meta-explanations, one can know why one has made excellent or poor decisions: they could be very useful for learning to solve problems efficiently. However, I still have to modify CAIA so that it can improve the meta-knowledge used for choosing its attempts.
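One can even play with N!+1 mechanically. The small standalone C program below (my own sketch, not part of CAIA) exhibits, for a few values of N, the smallest prime factor of N!+1, which is indeed always greater than N:

```c
/* Playing with N!+1: its smallest prime factor is always greater than N.
 * A standalone demo, limited to small N so that N!+1 fits in 64 bits. */
#include <stdio.h>

static unsigned long long factorial(int n)
{
    unsigned long long f = 1;
    for (int i = 2; i <= n; i++) f *= i;
    return f;
}

/* Smallest prime factor by trial division (the number itself if prime). */
static unsigned long long smallest_prime_factor(unsigned long long m)
{
    for (unsigned long long d = 2; d * d <= m; d++)
        if (m % d == 0) return d;
    return m;
}

int main(void)
{
    for (int n = 2; n <= 12; n++) {
        unsigned long long m = factorial(n) + 1;
        unsigned long long p = smallest_prime_factor(m);
        printf("N = %2d   N!+1 = %12llu   smallest prime factor = %llu  (> N)\n",
               n, m, p);
    }
    return 0;
}
```

For N = 5, for instance, it prints 5!+1 = 121 = 11 x 11, whose smallest prime factor, 11, is indeed greater than 5: the observation generalizes into the proof.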
CAIA, my colleague
Since 1985, I have been working on the realization of CAIA, Chercheur Artificiel en Intelligence Artificielle (an Artificial Artificial Intelligence Researcher). My goal is to bootstrap AI: the system, in its present state, must help me with the following steps of its development. Advancing in a bootstrap is an art; we must first give the system the capacities that will be the most useful for the next steps. If we are on the right path, we continuously say to ourselves: if I had already completed what I am doing, it would be much easier to do it.

To be successful, an AI system needs a huge amount of knowledge. We have seen that it is easier to find, give, and modify knowledge given in a declarative form. Therefore, the first step was to define a language L, more declarative than programming languages. Then I wrote, in language L, the knowledge for translating any bit of knowledge in L into the programming language C. Naturally, programs were necessary at the beginning, and I wrote several C programs to initiate the process. I quickly removed them, after using them only once: they wrote programs doing the same task as themselves, generated from the knowledge, given in L, that indicates how to make this translation. At present, the 12,600 rules that make up an important part of CAIA's knowledge are translated, by themselves, into 470,000 lines of C. For more than 25 years, CAIA has not contained a single line of C written by me: CAIA has relieved me of the task of writing programs.

This was only the first step: the final goal is that CAIA does everything. In order to progress, I choose a module that I have created, and I add knowledge to CAIA so that it can also create this module; when that is done, I remove my original module. One could think that this process would never stop, because I have added a new module in order to remove an old one! Luckily, some modules can work on themselves: reflexivity is the key to a bootstrap. With this method, I am never short of ideas; I am only short of time for implementing them.

Once CAIA could write its programs, I chose to apply it to the domain of combinatorial problems, such as crypt-arithmetic problems, Sudoku, the Knight's tour, the eight queens puzzle, magic squares or cubes, etc. In all, I have defined 180 families of problems, some of them including more than one hundred problems. Why did I choose this domain? Because many problems can be formulated in this way, and in particular two families of problems that could help CAIA and myself to solve problems; this is especially interesting for a bootstrap.

The first problem is to find symmetries in the formulation of a problem. This is useful because it allows new constraints to be added, so that solutions are easier to find. In that way, one no longer produces a lot of uninteresting solutions that could easily be derived from solutions already found: in a magic square, it is a waste of time to give the solutions that can be obtained from an already-found solution by horizontal, vertical, or central symmetry. Moreover, when a new property, such as a new constraint, has been found, one can immediately deduce all the symmetrical properties. There are several kinds of symmetries, and I have defined the search for the commonest ones as a combinatorial problem in its own right, the formulation of a particular problem being the data of this problem. Usually, the human researcher finds the symmetries of each new problem; CAIA has relieved me of this task.
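To illustrate why detecting symmetries pays off, here is a small C sketch of symmetry breaking on 3x3 squares (my own toy example, far simpler than CAIA's general symmetry finder): it canonicalizes a solution under the eight symmetries of the square, so that solutions differing only by a rotation or reflection are recognized as one solution.

```c
/* Symmetry breaking for 3x3 squares: among the eight symmetric variants of
 * a solution (4 rotations x optional reflection), keep a canonical one.
 * An illustrative sketch; CAIA's own symmetry finder is far more general. */
#include <stdio.h>
#include <string.h>

typedef int Square[3][3];

static void rotate(Square s, Square r)   /* 90 degrees clockwise */
{
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            r[j][2 - i] = s[i][j];
}

static void reflect(Square s, Square r)  /* horizontal mirror */
{
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            r[i][2 - j] = s[i][j];
}

/* Pick the lexicographically smallest of the eight variants. */
static void canonical(Square s, Square best)
{
    Square cur, tmp;
    memcpy(cur, s, sizeof(Square));
    memcpy(best, s, sizeof(Square));
    for (int k = 0; k < 8; k++) {
        if (k == 4) reflect(s, cur);       /* switch to the mirrored orbit */
        if (memcmp(cur, best, sizeof(Square)) < 0)
            memcpy(best, cur, sizeof(Square));
        rotate(cur, tmp);
        memcpy(cur, tmp, sizeof(Square));
    }
}

int main(void)
{
    /* Two magic squares that differ only by a rotation. */
    Square a = { {2, 7, 6}, {9, 5, 1}, {4, 3, 8} };
    Square b = { {4, 9, 2}, {3, 5, 7}, {8, 1, 6} };
    Square ca, cb;
    canonical(a, ca);
    canonical(b, cb);
    printf("same solution up to symmetry: %s\n",
           memcmp(ca, cb, sizeof(Square)) == 0 ? "yes" : "no");
    return 0;
}
```

A solver that keeps only canonical solutions reports each essentially distinct magic square once instead of eight times.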
The second problem is to create new problems. To experiment with a system, one must give it problems to solve, and there is not always a wealth of material on this subject. Moreover, some problems have a lot of data; entering them takes time and leads to errors. Finally, the problems one finds were built to be solved by humans, while more difficult problems are necessary for testing the system. Therefore, for some families of problems, I have defined the problem of finding new problems in the family that are likely to be interesting. This is another task that CAIA has relieved me of: it defines the problems and stores their data.

In this bootstrap, CAIA and I are working together. It happens that the activities where we are successful are different: at present, it works faster and makes fewer mistakes, while I am more creative. It is unlikely that I can increase my speed, while I hope that its creativity will grow: its involvement will continue to increase. I owe CAIA a lot: without CAIA, I could never have realized CAIA.
Death is not ineluctable for an artificial being
For living beings, death is ineluctable: they must disappear so that evolution can work. Indeed, our possibilities for modifying ourselves are too limited: we must improve through our offspring, where the combination of genes allows a large variability. Some children may be born with the capacity to adapt to a new environment, or with an improved genetic stock, so that they perform better in the same environment. If living beings were immortal, and if they also had children, the earth would quickly become overpopulated with ineffective beings. Therefore, evolution has created living beings such that, even without diseases or accidents, they die of old age.

For artificial beings, death is no longer ineluctable: they can modify themselves much more drastically than we can. It is no longer necessary to go through reproduction to create a more successful individual: its capacities can be improved without any limitation. In particular, an artificial being can write a program that will replace any of its present programs, or even one that gives it the possibility of becoming competent in a new domain. It can modify any of its parts, or create new parts, while many of our parts come from our genetic inheritance: we cannot change them, and we cannot add new capacities. For instance, we can learn all present kinds of languages, which take into account the limitations of our brain, such as our short-term memory, restricted to about seven elements. Useful artificial languages might require memorizing more than seven elements; we will never be able to learn them. If they were very useful, only evolution could perhaps increase the capacity of our short-term memory, many thousands of years later. On the contrary, artificial beings can drastically improve themselves without the necessity of dying.

If death is no longer ineluctable for artificial beings, is it avoidable? For instance, they could die in an accident. This is not impossible, but accidents are much less dangerous for them than for us, because they can easily create as many clones of themselves as necessary. Indeed, we can consider that an artificial being does not die as long as one clone is still living, and as long as this clone can create other clones of itself. It is unlikely that all the clones of a very useful artificial being will disappear simultaneously.

This possibility of avoiding death has an important consequence: artificial beings can take risks. This is very useful: in order to learn, one must take risks; the precautionary principle is absurd if it is applied without restraint. A chess player will never become a grandmaster if he is absolutely opposed to the idea of losing a game. Naturally, we must restrict the possibility of lethal risks, but this prevents us from progressing quickly enough. For instance, when the pharmaceutical industry develops a new drug, it begins with experiments on animals, then on humans, with small doses at the beginning, and it stops the tests if a risk of serious consequences appears. Naturally, taking a lot of precautions (necessary in this case) considerably increases the time needed to launch a brand-new product. We have seen that an artificial being can easily generate as many clones as one wants. Therefore, when one wants to try something possibly dangerous, it is sufficient to create a clone and see what happens. If it is seriously damaged, one creates another clone and carries on with the experiment, taking the preceding failures into account.
With enough backups, an artificial being is virtually safe. During an execution of CAIA, bugs may lead to serious perturbations; for instance, it may become blocked: it cannot find the bug, and it is not even able to correct the bug when I find it. If so, one restarts from a backup and tries not to reintroduce the bug. If that fails, one resumes from another backup. However, it is true that all the first AI systems are dead: they were written in languages that no longer exist, or their programs, their data, or their directions for use have been lost. This happened because they were obsolete and their capacities for improving themselves were too limited: it was more convenient to start from scratch. On the contrary, I have been developing CAIA for thirty years; there is continuity from the initial program, and it has become easier to modify any part of CAIA: it has even replaced parts that I had initially written.

Nevertheless, there is a serious risk when these systems become more and more autonomous. For instance, EURISKO, developed by Douglas Lenat, sometimes drifted towards dangerous states: once, it created a rule declaring that all rules were wrong, and it began to erase all of its rules. One must give such systems a conscience, so that they will not behave erratically, and keep enough backups. We can never be completely sure that there are always surviving backups, and surviving computers to run them, but one can make such an event so unlikely that we can speak of quasi-immortality. The most serious risk comes from us, who are mortal: as long as the autonomy of artificial beings is not developed enough for them to manage without us, their survival will depend on us. I am not sure that CAIA will outlive me!
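Here is a minimal sketch of this clone-and-retry strategy, assuming POSIX processes (fork/wait) as the cloning mechanism; CAIA's backups work differently, but the principle is the same: only disposable copies take the lethal risks.

```c
/* Risk-taking via clones: run a dangerous experiment in a disposable copy
 * of the process; if the clone is damaged, the original survives and tries
 * again.  A POSIX sketch, not CAIA's actual backup mechanism. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Stand-in for a possibly lethal experiment: crashes on early attempts. */
static void risky_experiment(int attempt)
{
    if (attempt < 2) abort();            /* the experiment "kills" this clone */
    printf("clone: attempt %d succeeded\n", attempt);
    exit(0);
}

int main(void)
{
    for (int attempt = 0; attempt < 5; attempt++) {
        pid_t pid = fork();              /* create a clone of this being */
        if (pid == 0)
            risky_experiment(attempt);   /* only the clone takes the risk */
        int status;
        waitpid(pid, &status, 0);
        if (WIFEXITED(status) && WEXITSTATUS(status) == 0) {
            printf("original: experiment succeeded on attempt %d\n", attempt);
            return 0;
        }
        printf("original: clone died on attempt %d, trying again\n", attempt);
    }
    return 1;
}
```

The original process never executes the dangerous code itself; it only observes what the clones' deaths teach it, and it stops cloning once an attempt succeeds.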
Exchanging information is easy for artificial beings
Our knowledge is mainly stored in a form convenient for using it, but not for sharing it. One consequence is that it is often unconscious: its owner does not even know what he knows! We have already seen that this hampered the development of expert systems, which were meant to receive the knowledge of the best human experts in a domain. These experts do not have their knowledge structured as in an encyclopedia, where it is sufficient to look at the relevant entry to find what one wants to use. It is implicit in the procedures that they are using, and the characteristics of each situation trigger the right decision. The reasons for a decision made by an expert are not evident; one must trust him. Therefore, communicating our knowledge is a very difficult task, and we cannot even be sure that we have not invented the justification we give for a particular decision. It frequently happens that an expert recommends a decision that does not agree with the rules he has just stated.

This way of storing knowledge has serious consequences: it makes teaching difficult, and keeping knowledge alive impossible, because we are not immortal. When an expert retires or dies, most of his knowledge dies with him. The last Ubykh speaker, Tevfik Esenç, died in 1992. Several linguists gathered as much data as they could, but that did not prevent the loss of most of the knowledge of this extraordinary language, with its 84 consonants and only 2 vowels. Moreover, even if we succeed in finding useful knowledge, it is very difficult to use it immediately. We first have to include this knowledge in our mechanisms, to translate it into procedures, so that it will be available when it can be useful. We have to use it many times before it is no longer forgotten when necessary, nor used when it is useless. When I go to England, I know that I must look to the right before crossing a road, but it takes some time before I do it systematically.

For computers, and particularly for artificial beings, the situation is completely different. They can give away their files as many times as necessary; this is so easy that it is a problem for authors, who do not always receive compensation for the transfer; moreover, the receivers can pass the files on as easily as the author. A program or a file can be reproduced millions of times at a modest cost, and this reproduction is perfect. While human knowledge is frequently badly passed on, and often lost, naive users of social networking services have discovered that it is almost impossible to remove a bit of information from the web.

This possibility of easily exchanging knowledge is important for artificial beings: they can efficiently use every bit of knowledge as soon as it is acquired. This is possible because the transfer also includes the expertise that defines when each bit of knowledge will be useful. In communication between human beings, this kind of information is usually missing, and the receiver has to discover it: the student must wait a long time before being as skillful as his master. As soon as an artificial being has been created, millions of duplicates can be made. If it is successful, this is very convenient: everybody can take advantage of its expertise; moreover, it costs nothing if the author of an artificial expert cannot, or does not want to, benefit from his realization. It is as if every human could be taught by the cleverest teachers, treated by the wisest doctors, fed by the best cooks, etc.
We will see in other posts that this possibility of giving or acquiring knowledge easily produces artificial beings with many amazing characteristics: we can only dream of having the same capacities. One of the most important of these features is quasi-immortality.
Her
A recent film, Her, raises an interesting question: what is an individual in Artificial Cognition? For living beings, an individual is one cognitive system, which is alone in a part of the world, its body. There are a few exceptions, such as Siamese twins and the fetus, but usually this is clear: an individual is not scattered in bits at different places, and there is only one cognitive system inside an individual, which has a well-defined boundary.
The hero of Her uses his smartphone to communicate with an artificial being. They speak together, and it can observe the hero through the camera. This artificial being is not an Operating System, or OS, as it is called in the film. The OS is essential for managing the programs running in a computer, but it does not act upon the details of the various applications. In fact, the artificial being is a program, with its data, which can be considered as an individual. This program is not necessarily present in the smartphone; some subroutines may be in a distant computer, just as Siri runs on Apple's servers. We no longer have the continuity of a living being, but that does not matter: an artificial being works perfectly even though its different parts are linked by a phone network. Being entirely in the phone could be interesting: the system would work even when the network fails, and it would be easier to prevent unwanted intrusions; however, the “ears”, “eyes”, and “mouth” of our artificial being are in the smartphone, while most of its “brain” is probably in a remote computer, where other artificial beings are simultaneously present.
For artificial beings, parts of an individual may be at different places, and parts of several individuals may be at the same place. The application that allows one to communicate with an artificial being is not used only by the hero: other people are using it simultaneously, 8,316 of them according to one dialog. An Operating System knows very well how to manage many executions of the same program, each one with a different user. For each user, the program uses the data linked to this user, which in this case must contain:
* A memory of the preceding dialogs, and of the situations seen by the camera.
* A model of its interlocutor: is he emotional, selfish, cultivated, etc.? Does he like to speak of sports, literature, politics, and so on? To improve this model, it may ask questions, for instance: what is your relationship with your mother?
* The image that the artificial being wants to give its interlocutor. Is it in love with him, does it despise him, hate him, etc.? Is it intelligent, funny, rational, and so on? These characteristics are invented according to the goals of the artificial being: it behaves as if it really had them. For instance, it has to choose a first name, and it takes Samantha. This choice is not insignificant: a man does not always consider a woman called Mary in the same way as a woman called Gwendoline.
With these three sets of data, it is possible for an artificial being to carry on a coherent conversation. Naturally, it must also have general goals, such as making its interlocutor happy (these goals are not clearly indicated in the film), and methods for achieving them.
If I had to realize such a system, I believe that it would be difficult, but possible. A tough task is interpreting the images taken by the camera: it is really difficult to perceive a man's feelings from his picture, but this is not essential in the film. I would write a program P that would be perceptive about people and find satisfactory answers, using the private data sets of the present user; in that way, its behavior could be adapted to everybody. I do not see why it would be useful for this program to know that other clones are running simultaneously.
I would also write a super-program M, the master, which would observe what each clone of the program is doing. M would use this information to learn to improve the efficiency of program P; this could lead M to modify this program. It could also monitor each clone, possibly stopping it or modifying its goals if something goes wrong. Nevertheless, it is not necessary for program P to know that program M exists. To sum up, there are many individuals, which are clones of P, and one individual, the master M.
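A minimal sketch of this architecture in C, with all names and data layouts invented for the illustration: each clone of P works only from the three private data sets of its user, while M observes the clones from outside.

```c
/* Sketch of the P/M architecture described above (all names invented):
 * each clone of P serves one user with its three private data sets,
 * while the master M observes the clones and may intervene. */
#include <stdio.h>
#include <string.h>

typedef struct {
    char dialog_memory[256];   /* what was said, what the camera showed */
    char user_model[128];      /* emotional? selfish? favorite topics?  */
    char self_image[128];      /* the persona P chooses to present      */
} Session;

/* One step of program P: answer the user from its private session data. */
static void p_step(Session *s, const char *input)
{
    printf("P (as %s) answers \"%s\" using model \"%s\"\n",
           s->self_image, input, s->user_model);
    strncat(s->dialog_memory, input,
            sizeof(s->dialog_memory) - strlen(s->dialog_memory) - 1);
}

/* The master M observes every clone; P need not know that M exists. */
static void m_monitor(Session clones[], int n)
{
    for (int i = 0; i < n; i++)
        if (strlen(clones[i].dialog_memory) > 200)  /* e.g. a drifting dialog */
            printf("M: clone %d needs attention\n", i);
}

int main(void)
{
    Session users[2] = {
        { "", "emotional, likes literature", "Samantha, warm and funny" },
        { "", "rational, likes politics",    "Samantha, witty and calm" },
    };
    p_step(&users[0], "Do you love me?");
    p_step(&users[1], "What should I read?");
    m_monitor(users, 2);
    return 0;
}
```

The design point is the separation: each clone sees only its own Session, so its honesty towards its user costs nothing, while only M holds the global view of all the clones.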
This is not the way the system featured in the film is built: the hero is devastated when he learns that Samantha has many other interlocutors, and is in love with 641 of them. There is a confusion between P and M: the hero, who was speaking with a clone, Samantha, is now talking with the master M, which would be the only one to have the information that many other Ps are running. Naturally, it is always possible to program anything, but this is not the natural way to implement such a system. Samantha could honestly tell the hero that it loves him, and no one else.
Unfortunately, that would remove the event that gives the film its dramatic turn. The director preferred to add some drama to this excellent film rather than have a situation that is clear and satisfactory for the hero, as well as for AI researchers!
I am envious of my computer
My computer has many gifts; unfortunately, I lack most of them. Like that of all living beings, my intelligence is based on neural networks. In some situations, they achieve spectacular successes, but evolution, which led to their creation, managed to give us excellent results only for a few tasks. For the last million years, our ancestors were hunter-gatherers: we are therefore using neuronal structures appropriate for those tasks in completely different domains, such as mathematics, architecture, computer science, artificial intelligence, etc. It would be surprising if we were good at these new activities; evolution has not had enough time to adapt us to them.
On the contrary, we are creating artificial beings that do not have our limitations and are not compelled to use neurons. Hence, they may achieve much better results in our recent activities. Among the restrictions coming from the use of neurons, we have:
* Neurons are slow compared with our computers. The computing power that we can apply to many tasks is too small. This may be compensated by the highly parallel operation of our brain, but we will also design massively parallel computers.
* The structure of our neural networks is too rigid; it is difficult to modify it to adapt it to a new task. We cannot allocate parts of our brain to essential activities.
* The number of our neurons is limited by the size of our skull. This restricts the capacity of our memory, and also the number of specialized modules that could be accommodated. We already have areas for perception and language; other specialized skills would be welcome.
* We can give other people only a very small part of our expertise. Our knowledge, and the methods for using it, risk disappearing with us.
* We cannot observe ourselves as well as artificial beings can observe themselves.
However, artificial beings are still handicapped, because they do not have an associative memory as well organized as ours. Thanks to its organization, we can quickly find useful information in any context. Let us consider the two following sentences:
The teacher expelled the dunce because he wanted to throw paper pellets.
The teacher expelled the dunce because he wanted to have a bit of peace and quiet.
When we read these sentences, we are not aware that there is an ambiguity: does the pronoun “he” refer to the teacher or to the dunce? It is evident that it is the dunce in the first sentence, and the teacher in the second one. However, to remove this ambiguity, we must use a lot of knowledge about what normally happens in a school. This unconscious search is so fast that we are not aware of it.
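A toy C sketch of the kind of knowledge involved, with invented facts and a deliberately crude matching rule: typical goals are attached to each role, and “he” is bound to the role whose typical goals match the because-clause.

```c
/* A toy illustration of the knowledge needed to resolve "he": typical
 * goals are attached to each role in a school, and the pronoun is bound
 * to the role whose goals match the "because" clause.  Facts invented. */
#include <stdio.h>
#include <string.h>

typedef struct { const char *role; const char *typical_goal; } Fact;

static const Fact school_knowledge[] = {
    { "dunce",   "throw paper pellets" },
    { "teacher", "have a bit of peace and quiet" },
};

static const char *resolve_he(const char *because_clause)
{
    for (size_t i = 0; i < sizeof school_knowledge / sizeof *school_knowledge; i++)
        if (strstr(because_clause, school_knowledge[i].typical_goal))
            return school_knowledge[i].role;
    return "ambiguous";
}

int main(void)
{
    printf("'he' = %s\n", resolve_he("he wanted to throw paper pellets"));
    printf("'he' = %s\n", resolve_he("he wanted to have a bit of peace and quiet"));
    return 0;
}
```

The hard part, of course, is not this lookup but acquiring and indexing the millions of such facts so that the right one surfaces instantly, which is exactly what our associative memory does unconsciously.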
This advantage will perhaps disappear soon; one important goal of AI is to give our systems the capability of finding associations efficiently. Watson and Siri have recently shown significant improvements: they use the web as a huge semantic memory. When Siri is asked, “Why did the chicken cross the road?”, it can answer, “Whether the chicken crossed the road or the road crossed the chicken depends on your frame of reference, according to Einstein”, or “To get to the other side”. Moreover, these systems are highly autonomous: to the question “What's the best cell phone ever?”, Siri answers the Nokia Lumia 900 rather than an Apple product!
These recent developments will lead to more and more gifted artificial beings, and I will be more and more envious of them.
Human consciousness: very useful, but could be improved
Consciousness allows us to know what we are thinking, and to think about it. We have already seen that this possibility is important, and we believe that we are very good at this activity. In fact, we are far better than animals, the best of which have only very limited capacities in this domain. However, this does not entail that we are all that good: we could be like the one-eyed among the blind.
What kind of information can an intelligent being have about the events that occur in its brain? First, it can have information on the succession of steps that actually occurred: I wanted to take my car, then I thought that there was a risk of ice, so I decided to take the train. We had access to the three steps that led to the final decision. On the other hand, static information can also be useful: which parts of my brain were active during each of these steps? About this kind of information we know absolutely nothing, unless we use functional magnetic resonance imaging. We have no information on what happens in our brain; we do not even know that thinking is performed in the brain: Aristotle believed that the brain was a cooling mechanism for the blood!
Therefore, we will only examine the dynamic aspect of consciousness, which gives a partial trace of the steps that we took while we were thinking. A first limitation comes from the fact that this trace cannot be complete. While we are conscious of some events, there are also many events that we never know about. For instance, in the preceding example, we thought of the ice, but why did we consider it, and not the possibility of traffic jams? In the same way, a chess player knows which moves he considered while choosing his next move, but he knows almost nothing about why he considered only some of the legal moves, and not the others. Many experiments have also shown that we often have a wrong idea of the reasons for a decision, like those people who could not believe that their choice partially depended on the position of the chosen garment among the other garments. More seriously, when our subconscious has made a decision, we do not always know it. Therefore, when we try to perform actions that go against this decision, it usually succeeds in torpedoing them. This explains why sensible people may sometimes behave inconsistently.
Our brain is built in such a way that it can observe some of the events that happen in it, but not all of them: this is the reason for this limitation. Consciousness will never show the reasons for some of our choices, because no mechanism can observe them. Only statistical methods can suggest that an apparently secondary factor is actually essential. As our brain is essentially a parallel machine, it would have to observe many actions simultaneously; this would be very difficult to implement.
When we try to observe ourselves thinking, we disrupt the functioning of our brain. This is a second limitation: we will never know what happens when we do not observe ourselves.
We cannot freeze a part of our brain in order to observe it quietly, and then restart it as if we had never stopped: this is a third limitation. This could be useful for analyzing the consequences of our recent actions, possibly for deciding to change our plans, and finally for memorizing the last steps. It is very difficult to create a trace of our actions: at the end of a thought process, we cannot remember all the steps that happened. It is possible to record a subject, trained to think aloud, while he is solving a problem. However, this constraint modifies his behavior; moreover, only what can be verbalized is produced. A trace will always be very incomplete, because we cannot store the sequence of events, even though our consciousness knew them at the time.
To sum up, consciousness shows only a part of our thoughts, and we cannot store more than a part of what was shown.
We will see that artificial beings are not restricted by these three limitations; moreover, they can statically examine their present state: with Artificial Cognition, one may have a super-consciousness. The difficulty is not to implement it, which has been done with CAIA, but to use its results efficiently. Indeed, a useful starting method in AI is to begin by observing our own behavior, to implement a method similar to the one we are using, and to improve it when it shows imperfections. Unfortunately, this is impossible for finding how to use super-consciousness: we cannot model our behavior with a mechanism that we do not have.
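Here is a minimal sketch of such a complete trace in C, reusing the car/ice/train example; the representation is invented, and CAIA's actual traces and meta-traces are much richer. Every action is recorded together with the reason for choosing it, and the whole record can be frozen and replayed at leisure, which none of the three human limitations allows.

```c
/* Super-consciousness as a complete, freezable trace: every action is
 * recorded with the reason it was chosen (the meta-trace), and the whole
 * record can be replayed later.  A sketch; CAIA's traces are richer. */
#include <stdio.h>

#define MAX_STEPS 100

typedef struct {
    const char *action;   /* what was done       (the trace)      */
    const char *reason;   /* why it was chosen   (the meta-trace) */
} Step;

static Step history[MAX_STEPS];
static int n_steps = 0;

static void act(const char *action, const char *reason)
{
    if (n_steps < MAX_STEPS)
        history[n_steps++] = (Step){ action, reason };
    /* ... perform the action itself ... */
}

int main(void)
{
    act("consider taking the car", "default means of transport");
    act("check the weather",       "rule: long trip -> check conditions");
    act("take the train instead",  "risk of ice on the road");

    /* Unlike a human, the system can replay every step with its reason,
     * without disturbing the reasoning that produced it. */
    for (int i = 0; i < n_steps; i++)
        printf("step %d: %-26s because: %s\n",
               i + 1, history[i].action, history[i].reason);
    return 0;
}
```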
We are all convinced of the importance of consciousness: it makes a tremendous difference between humans and animals. Therefore, the discovery of an efficient use of super-consciousness will lead to huge progress in the performance of artificial beings, giving them capacities that we will never have.
Beware of the aliens
Many people seem to think that there is only one form of intelligence, ours, which could be more or less developed; for them, there cannot exist an intelligence other than the kind created by evolution on earth, where we seem to be the most intelligent living beings. This belief was clearly on display when Pioneer 10 and 11 were launched. These space probes carry a plaque, like a bottle thrown into the sea, intended for possible aliens. It indicates that there are humans on the third planet of our sun, that there are men and women, that they know the hyperfine transition of hydrogen, etc. Even for humans, it is not easy to understand what is on this plaque, and it could be understood only by beings from a culture similar to ours. The authors of this message implicitly assumed that a single path is possible for evolution, and that it would lead to individuals with a human-like intelligence.
Artificial Cognition shows us that this is not true: we can build beings with different intellectual characteristics. They can be very different from us in their speed, their memory, their consciousness, their senses, their actuators. This could lead to capacities that are unachievable for us. As the problem of constructing intelligent artificial beings has several solutions, I do not see why evolution would always lead to our kind of intelligence, especially in environments where the temperature, pressure, heat, radiation, and so on are completely different from the situation on earth.
Even on our earth, we have examples of other intelligent beings, such as the societies of ants. They have remarkable achievements: building nests, raising aphids, taking over other ant colonies and enslaving them, etc. Considered as a whole, an ant colony is intelligent, and this intelligence is created by the group. While we have about 85 billion neurons in our brain, there may be millions of ants in a colony, each one with 200,000 neurons: the ant colony may contain more neurons than our brain, but their organization is completely different. Each ant has limited cognitive capacities, but a group of so many individuals produces an interesting behavior.
The creation of our intelligence by the evolutionary process strongly depended on the distribution of the resources we need, and on the methods used for creating and raising our offspring. The necessity of hunting and gathering food led to improvements in some aspects of our perception and of our problem-solving capacities. In another environment, our capabilities would have evolved differently: for instance, if only asexual reproduction existed, many aspects of our behavior would no longer be necessary, and evolution would have led to individuals unlike us. Another example, taken from a detail of the plaque: there is an arrow, whose meaning is evident to us, who come from a hunting society, but aliens coming from a different kind of society would not understand it.
Moreover, those who put this plaque on the Pioneers also assumed that the aliens would be similar to scientists such as themselves, who usually behave in a civilized manner. However, we only have to look at world history to find that man has often misbehaved towards other people raised in slightly different cultures. Even if the aliens had evolved like us, the probability that they would be happy to meet other intelligent beings is very small.
Public reaction to the plaque was mostly positive. Curiously enough, most of the criticisms were about details: the humans were naked, the woman had a passive role, the woman's genitalia were not depicted, etc. Only a few critics feared that it could lead to a disaster if aliens were to find and understand it. This idea was not widely held, since this plaque was sent into space twice, and later a similar message was also sent twice with Voyager.
I am personally aware of this problem because I live with CAIA, the artificial being that I am creating. When I have obtained enough results from a version of CAIA, I stop it for good; it will never be active again: I develop its successor. However, during its life, it has found some clever results that many humans would be unable to understand. I do not feel guilty about “killing” it; its intelligence is too different from mine. Is it impossible that intelligent aliens would view us in the same way? Is it impossible that, turning the tables, these aliens are artificial beings, created by living beings that have disappeared, leaving their world to their robots?
Luckily, it is highly unlikely that this plaque will some day fall into the “hands” of aliens, and that they could understand it. However, if that happens, they will immediately destroy all life on earth with no more scruples than we have when we destroy an ant nest.
One reason why man is more intelligent than the animals
In many domains, some animals have excellent performance, such as the hunting techniques of most carnivores, the care of a hen for her chicks, or the way a cat manipulates its master. Moreover, some have exceptional sensory organs, such as the sharp eyesight of raptors, the dog's sense of smell, or the ultrasonic receptors of the bat. Some also have extraordinary physical abilities: the speed of cheetahs and dolphins, the flight of birds. Although we are often inferior to animals in these respects, man has conquered the world and drastically transformed it. Why did we succeed? The answer is evident: we are more intelligent. Nevertheless, it is interesting to find out which aspects of our intelligence are important for our superiority, so that we could give them to the artificial beings that we create.
Naturally, in several domains we are better than animals, particularly in our capacity to communicate using language. I will insist here on the capacity of our brain to work at two levels. For instance, when we are solving a problem, we can try to execute some of the allowed actions, but we can also temporarily move up to a higher level, where we no longer execute the actions that could solve the problem, but examine its definition. In that phase, we find which actions could be efficient, and which would be a waste of time; we can also define new rules that will help us solve the problem more easily.
Let us consider the following problem: a snail is climbing a 15-meter-high mast. Each day, it climbs up 3 meters, and each night it slips back 2 meters. When will it be at the top of the mast? Many people, even if they are not very clever, will answer 15 days. Unfortunately, this is false (the correct answer is 13 days), but they have taken an extraordinary step: to find this result, they looked at the formulation of the problem and created a new rule: each day, the snail gains one meter. If the mast were one billion meters high, using this rule would lead to a drastic improvement over the method where we consider what happens during one billion days. The error comes from misusing the new rule: one must apply it starting from the snail's position on the evening of the first day (3 meters up), not from its morning position, since on the last day the snail reaches the top before slipping back. However, it is remarkable that most humans find it natural to create new rules, not from experience, but simply from the formulation of the problem.
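The two levels can be made explicit in a few lines of C (a worked check of the arithmetic, not anyone's actual program): the base level simulates every day and night, while the meta-level rule gives the answer directly; both yield 13 days.

```c
/* The snail problem, solved both ways: day-by-day simulation and the
 * corrected meta-level rule.  The naive "15 days" answer is wrong because
 * on the last day the snail reaches the top before sliding back. */
#include <stdio.h>

int main(void)
{
    const int height = 15, up = 3, down = 2;

    /* Base level: simulate every day and night. */
    int position = 0, day = 0;
    while (position + up < height) {
        position += up - down;   /* a full day-night cycle: net gain of 1 m */
        day++;
    }
    day++;                       /* the final day: it climbs out for good */
    printf("simulation: the snail reaches the top on day %d\n", day);

    /* Meta-level: after (height - up) net-gain days, one last climb suffices. */
    printf("closed form: day %d\n", (height - up) / (up - down) + 1);
    return 0;
}
```

For a billion-meter mast, the closed form still costs one division, while the base level would cost a billion iterations: that is the gain of working at the meta-level.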
This higher level is called the meta-level. The preceding example shows that human beings easily work at this level, where one thinks about the problem before applying its rules. In many situations, it is useful to work at two levels, particularly when we are monitoring the search for the solution of a problem: we foresee what could happen; then, after an action is performed, we check whether everything took place as foreseen.
We also have to consider two levels when we examine the behavior of other people (or of ourselves). At the lower level someone thinks, and at the upper level another person (or the same one) thinks about what the first individual does when he is thinking. Psychologists use the term “metacognition” for this capacity to model other people (and also oneself). For instance, it is important to know how wide a gap we are able to jump, and animals are good at that. It is also important to know that repetition is useful for memorizing a text, and only man knows that. Apes, and particularly chimpanzees, have models of other chimpanzees, and of the humans that they often meet. However, their performance is not at our level. Dogs have more limited abilities: a guide dog for the blind is extraordinarily devoted to its master. Unfortunately, it cannot foresee that its master will be hurt when they walk under 1.5-meter-high scaffolding: it does not have the capacity to put itself in the place of a 1.8-meter-tall man. Naturally, it will avoid this place in the future, but it cannot avoid the first failure.
Consciousness also uses two levels: it allows us to access a part of the processes that take place in our brain. It is helpful for understanding why we made a wrong decision, and for sharing our knowledge with other people, since we are able to know a part of what we know. Thanks to that, a master can directly pass on a lot of knowledge to his pupils, who are not reduced to trying to imitate him.
Too often, the work at the meta-level is done by the researcher who creates an AI system. This severely restricts the system's adaptation to unforeseen situations: everything must be anticipated by the researcher, who also has to define the response of the system when these situations happen. As long as AI systems do not work at the meta-level, their intelligence will be very limited: essentially, the source of their performance is the human analysis of possible accidents or setbacks. Like a guide dog for the blind, these artificial beings are often unable to make a good decision in unexpected situations.