
Hard tasks may be easier than easy tasks

What we do easily may be very difficult for an AI system; conversely, such a system can easily perform tasks that are very difficult for human beings. The reason is that, over millions of years, evolution made us expert at some activities, such as perception, but did not adapt us to activities that were useless for feeding or breeding. Locating a soccer ball is not obvious for an artificial player when the ball has not been specially colored so that it stands out clearly from the background. Our hunter-gatherer ancestors needed excellent perceptual abilities to survive; yet we are not even better than animals in that domain, because evolution considerably increased the perceptual potential of all living beings. On the contrary, living beings are not so good at reasoning: most animals cannot make even simple deductions. Among the best of them, apes are able to see that they must move a box under bananas fastened to the ceiling in order to reach them; this is one of the most difficult problems they can solve. Man is incomparably more successful than apes at reasoning; however, we still have severe limitations. Many people cannot easily decide whether there are more animals that are not ducks, or more animals that are not birds. We overestimate our performance in this domain because we can only compare ourselves with animals. Moreover, we avoid, as much as possible, problems that we find very difficult to solve.
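The ducks-and-birds question has a simple set-theoretic answer that a machine finds trivial: since every duck is a bird, the non-birds are a subset of the non-ducks, so there can never be more non-birds than non-ducks. A minimal sketch in Python, using a small hypothetical sample of animals:

```python
# Illustrative sketch (the animal sample is invented): every duck is a
# bird, so the non-birds form a subset of the non-ducks.
animals = {"duck", "sparrow", "eagle", "cat", "dog", "trout"}
birds = {"duck", "sparrow", "eagle"}
ducks = {"duck"}

non_ducks = animals - ducks
non_birds = animals - birds

assert non_birds <= non_ducks          # non-birds is a subset of non-ducks
print(len(non_ducks), len(non_birds))  # 5 3
```

The inclusion holds for any sample whatsoever, which is exactly the kind of reasoning step humans stumble over and a program performs without effort.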
The Spring 2012 issue of AI Magazine describes a competition for chess-playing robots. For several years now, some programs have been very strong, better than the best human players, who will only play against them under human-friendly restrictions: for instance, the program may use only part of the endgame databases, which now contain all the winning positions with up to seven pieces. In this competition, the difficulty was not to find a good move, but to perceive the board through a camera, and to play the move physically. The system does not receive the opponent's last move via a keyboard, and it does not display its own move on a monitor. It only sees a real chess position with ordinary chess pieces. Once it has found its move, it actually plays it: first removing a captured enemy piece if there is one, then taking its own piece and moving it to its new square.
In this experiment, the robots are not very skillful. To evaluate the competitors, points were awarded for making a move in less than 5 minutes, and points were subtracted for knocking over a piece or failing to place a piece fully within its destination square. To keep the games short, only 10 moves were played by each side. Moreover, some participants helped themselves by defining an environment that facilitated the perception of the scene: one robot (not the winner) played with blue pieces against yellow ones, on a white and green chessboard. Although the participants built exceptionally well-designed robots, the perceptual and motor limitations of these robots, compared with the remarkable quality of chess programs, clearly show how different the results are for tasks common to man and machine. For an artificial being, it is easier to find a winning move against the human world chess champion than to physically play this move on the chessboard!

Naturally, we must carry on building applications that use our perceptual and motor abilities: such robots can be very helpful, and in some cases they can completely replace us for dangerous or boring tasks. For instance, robots that can drive a car will be interesting once they become skillful enough. However, implementing an AI system that solves a problem which is extraordinarily difficult for us may be no harder than building a system that solves a problem every man, and in some cases even animals, can solve easily.
Although a mathematician recently proved that using mathematics could help in finding a bride, this is an exceptional situation: it is unlikely that outstanding mathematicians have better rates of survival and reproduction than other humans. Our capacities in this domain are only a by-product of capacities that were developed for other goals. The best mathematicians are very good on a relative scale, compared to other human beings, but they may be very low on an absolute scale, when one considers all the entities that could some day perform mathematical activities. Perhaps they are only the one-eyed among the blind, very far from what artificial beings could do. I remember how chess players laughed in the 1970s at the moves played by the best programs of that time. They have completely changed their opinion on this point: even grandmasters now use programs to prepare their matches.
The results obtained in AI show that AI research is extremely difficult for humans. For this reason, I am developing CAIA, an artificial Artificial Intelligence researcher. This is certainly a very difficult task, but evolution did not build us to be good AI researchers either; therefore, this problem is perhaps not as difficult as we might believe at first sight. Whenever I have managed to improve CAIA so that it takes over some particular task from me, it has always performed better than I did.

Adolphe Pégoud, a model for AI researchers

We have just celebrated the centenary of two achievements of the aviator Adolphe Pégoud: the first parachute jump from a plane, and the invention of aerobatics. Curiously enough, the two are strongly connected; it is interesting to understand how the first achievement led to the second.
Parachute jumps had already been made from balloons, but never from a plane. Pégoud thought that such a jump could be useful for escaping from a disabled plane. The other pilots thought that he was crazy to try so dangerous and pointless an experiment. Moreover, as most planes had room only for the pilot, the plane would be lost once the pilot left it. Nobody, including Pégoud, thought much about the future of the plane: everyone believed that it would crash immediately after the pilot jumped. Pégoud had therefore chosen an old plane that was expendable. While he was coming down under his parachute, Pégoud watched his plane, and he was very surprised by its behavior. Instead of crashing immediately, it made many curious maneuvers, for instance flying upside down, or looping the loop, and this did not lead to disaster: it simply carried on with other aerobatic figures. Pégoud immediately understood the significance of this experience: if the plane could perform these figures without a pilot, it could perform them with one. Therefore, after strapping himself firmly into his plane, he imitated his pilotless plane, and became the first human to fly upside down; a little later, he was looping the loop.

Pégoud realized that a plane could “discover” new maneuvers when it was left autonomous, and he was able to take advantage of this capacity. We, AI researchers, must imitate Pégoud, leaving our systems free to make choices, to fly around, in situations where we have not given them specific directives. Then we analyze what they have done, and we may find new ideas to include in our future systems.

Personally, I had such an experience, and it convinced me of the importance of a direction of research that I have been trying to develop ever since I started working in AI. In 1960, I began working on a thesis in AI. My goal was to realize a program with some of the capacities of a mathematician: it received the formulation of a theory (axioms and derivation rules), and it had to find and prove theorems in this theory. Although it could try to prove a conjecture that I gave it, it usually started its work without knowing any theorem of the theory. Like Pégoud’s plane, it had no master; it was free to choose which deductions it would try: I had no idea of the directions it would take.

One day, as I was analyzing the results found by the second version of this system, I was surprised to see that it had found a proof of a theorem different from the proof I knew, the one given in logic manuals. It happened that, to prove theorem TA, it did not use another theorem, TB, which was essential for the usual proof; the new proof was shorter and simpler. I tried to understand the reasons behind this success, and I discovered that the system, left to itself, had behaved as if it had proven and used a meta-theorem (a theorem about the theory) that allowed it to find the proof without using theorem TB: the system bypassed it. After this experiment, like Pégoud, I took over the controls, and I built a third version of my system, which systematically imitated what I had observed: the system was now able to prove meta-theorems in addition to theorems. It could study the formulation of the theory itself, and not only apply the derivation rules to the theorems already found. This new version had much better results: it proved more theorems, the proofs were more elegant, and they were found more quickly.
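A system of this kind can be sketched in miniature: a theory is given as a set of axioms plus derivation rules, and the program, with no goal imposed, simply derives whatever the rules allow. This sketch is my own illustration, not the actual 1960s system; it uses a single derivation rule, modus ponens, with implications encoded as tuples:

```python
# Minimal sketch (illustrative, not the original program): a "theory" is
# a set of axioms plus one derivation rule, here modus ponens on
# implications encoded as ("->", A, B).  Starting from the axioms alone,
# the prover freely derives every theorem the rule allows.
def forward_chain(axioms, steps=10):
    theorems = set(axioms)
    for _ in range(steps):
        new = set()
        for t in theorems:
            if isinstance(t, tuple) and t[0] == "->" and t[1] in theorems:
                new.add(t[2])      # from A and A -> B, derive B
        if new <= theorems:
            break                  # nothing new: a fixed point is reached
        theorems |= new
    return theorems

axioms = {"p", ("->", "p", "q"), ("->", "q", "r")}
print(forward_chain(axioms))       # the result contains "q" and "r"
```

With no conjecture given, the program explores the whole space reachable from the axioms; observing which paths such a free-running prover takes is exactly the "à la Pégoud" experiment described above.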

Since that experiment “à la Pégoud”, I have been convinced that an AI system must be able to work at the meta-level if we want it to be both effective and general. In doing so, it can fly over the problems and discover shortcuts or new methods: it is easier to find the exit of a labyrinth when one is above it.

Such discoveries are possible only when we leave our systems free to choose what they will do. If we want to bootstrap AI, we need to be helped by new ideas coming from observations of their behavior, made while we parachute down after leaving them alone.

A new science: Artificial Cognition

Many scientists in Cognitive Science study the cognition of living beings, human or animal. However, some capacities of artificial beings are completely different from those of living ones. Therefore, a new science, Artificial Cognition, will have the task of examining the behavior of artificial beings.

For Cognitive Science, living beings already exist; we want to understand the reasons behind their behavior, and their capacities in various situations. We want to know how they memorize, remember, solve problems, take decisions, and so on. We observe them, and we use medical imaging to detect which parts of the brain are active while the subject performs a task. We also devise ingenious experiments that show how the subject manages to solve a cleverly chosen problem. Naturally, we can only study behaviors that exist in at least one kind of living being.

The situation is different for Artificial Cognition: the goal is to build artificial beings rather than to observe them. Normally, we write computer programs, or we give knowledge to existing systems, and we try to obtain an interesting behavior. For this we usually use ordinary computers, but we can also build specialized machines, and this will probably become more frequent in the future. Living beings depend on mechanisms created by evolution, which relies mainly on one remarkable element, the neuron. They may have extraordinary capacities for adaptation: we can learn to build houses, to write books, to grow plants, etc. Unfortunately, we also have limitations: we cannot increase the size of our working memory beyond about 7 elements; we can only use auxiliary memories such as a sheet of paper, which are useful, but not as efficient as our internal memory. Nor can we increase the possibilities of our consciousness: many mechanisms of our brain will always remain hidden from us when we are thinking. This is a very serious restriction, since consciousness is essential for learning, for monitoring our actions, for finding the reasons for our mistakes, and so on.

On the contrary, in Artificial Cognition, we are not restricted to the neuron; we can build whatever mechanisms we have defined. This possibility does not exist in the usual Cognitive Science: nature has already built the beings that we want to study. In Artificial Cognition, we put ourselves in the place of evolution, which worked over billions of years on zillions of subjects. It succeeded in creating living beings that are often remarkably adapted to their environment. However, nobody is particularly well adapted to the artificial activities that man created, such as solving mathematical problems, playing chess, or managing a large company. As we invented many of these activities, we chose them so that we could perform reasonably well in them, using capacities that evolution gave us for different goals, such as hunting game for food. At the start, when one tries to build a new system, we are inspired by our own methods, as they have been discovered by Cognitive Science researchers. In doing so, we use only a small part of the possibilities of Artificial Cognition; we must also exploit all the possibilities of computers, even those that we ourselves cannot have. Artificial beings will perform much better than us when they use not only all of our methods, but also many other methods that we cannot use. We are unable to use many useful methods because we do not have enough neurons, or because they are not wired in the necessary way; it may also be simply because our neurons have intrinsic limitations: for instance, they do not allow us to load new knowledge into our brains easily. Perhaps there are capacities that evolution did not give us, either because they were not useful for our ancestors, or because there are jumps that evolution cannot make.

The methodologies and the potential of the usual Cognitive Science and of Artificial Cognition are very different. In Artificial Cognition, we are not limited to existing beings, but it is very difficult to build new ones. However, there is a strong tie between these two sciences: building an artificial being amounts to defining a model. If it behaves as living beings do, this model will give an excellent description of the mechanisms that Cognitive Science wants to find. On the other hand, when we want to build an artificial being, the first thing to do is always to start by implementing the methods used by living beings. Nevertheless, we have to progress from this starting point, and we may some day manage to build artificial beings able to achieve tasks extremely difficult for us. For instance, we may see artificial beings capable of building other artificial beings more effective than themselves.

AI researchers are too clever and work too much

Usually, being clever and working a lot are qualities, highly desirable in a researcher. This is true in most domains, but not in AI.

Indeed, an AI researcher naturally wants results that are as good as possible. Unfortunately, in the present state of AI, it is difficult to build systems that know how to modify the formulation of the problems they must solve, or to find new knowledge in order to solve them efficiently. It is easier for the AI researcher to do this work himself, so that his system gets better results by using the improvements he has made to the formulation of the problem, or the specific knowledge he has found with great effort. To evaluate the quality of an AI system, we must not consider only its performance, but mainly the share of artificial versus human intelligence behind these excellent results.

Let us consider chess programs. They have reached a remarkable level: 29 of them have an Elo rating higher than 2872, the Elo of the world champion, Magnus Carlsen. The authors of these programs have done extraordinary work, and I admire them very much. However, I admire their programs much less, because they are unable to play any game other than chess. Intelligence is not a Dirac delta function, extremely successful in a very small domain, and incapable of any other activity. These programs do not even know that they are playing chess: the rules of chess are hard-coded, and the system cannot examine them to understand why it has played a weak move. Clever humans wrote the combinatorial programs that generated all the winning positions with at most seven pieces on the board. Other clever humans wrote the evaluation functions that assess the value of a position. They also wrote the combinatorial program that develops a huge tree, used for choosing the move to play during the game. Good players, often grandmasters, worked out the databases containing the best opening sequences of moves.
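The tree search mentioned above is, at its core, the classic minimax scheme: develop the game tree to some depth, score the leaves with a hand-written evaluation function, and back the values up. Here is a game-agnostic sketch; the toy game and all parameter names are illustrative, not taken from any real chess engine:

```python
# Generic minimax sketch (illustrative, not a real chess engine): the
# tree is developed to a fixed depth, and an evaluation function written
# by humans scores the leaves.
def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    options = moves(state)
    if depth == 0 or not options:
        return evaluate(state)
    scores = (minimax(apply_move(state, m), depth - 1,
                      not maximizing, moves, apply_move, evaluate)
              for m in options)
    return max(scores) if maximizing else min(scores)

# Toy game (invented for this sketch): a state is an integer, a move
# adds 1 or 2, and the evaluation is simply the state itself.
best = minimax(0, 3, True,
               moves=lambda s: [1, 2],
               apply_move=lambda s, m: s + m,
               evaluate=lambda s: s)
print(best)  # 5: maximizer adds 2, minimizer 1, maximizer 2
```

All the intelligence lives in `evaluate` and in the opening and endgame databases; the program itself merely enumerates, which is exactly the point being made here.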

If the goal is only to have a high-performance program, they are right: I would try the same approach if that were my goal. However, if the goal is to create an intelligent system, the system must be very different: it has to think about the quality of its moves, using the rules of the game, to find methods for selecting the best moves, to write programs taking its analyses into account, and so on. Even if its performance were not as good as that of the present programs, such a system would be interesting because it would be more general and, for me, more intelligent.

Our systems have to become more autonomous, to find new methods rather than merely execute a method that we have discovered. If they do, they have the potential to improve themselves. For the future development of AI, it is better to devise systems which are perhaps not as good as the systems that we build entirely ourselves, but which are able to find by themselves some of the methods used in our best systems. In that way, systems more efficient than our present realizations may emerge over the years.

We, AI researchers, behave like parents who do their children’s homework. The children get excellent marks, but the essential goal is missed: they must become able to solve problems alone.

Realizing artificial systems more intelligent than ourselves is possible

In 2006, we commemorated the 50th anniversary of AI, but I was not very happy: was it really time to celebrate? It is not obvious, when we compare the quiet development of our domain with the prodigious changes made during the same period in many other areas, for instance the huge growth in the power and applications of computers. The applications and performances of AI have not improved at the same speed; most of its successes came from the increase in computer speed, which enabled us to use combinatorial methods successfully in domains where they would have been hopeless 50 years ago. Why has progress been so slow?
This does not mean that nothing has been done; on the contrary, many interesting results have been found. Moreover, some useful systems have been implemented, for instance several successful game-playing programs for Chess, Checkers, Backgammon, Scrabble, etc.; a recent spectacular success was obtained on Jeopardy! For many games, the best programs are at least as good as, and sometimes better than, the best human players. Many interesting theoretical results have also been proven, and useful methods have been discovered. However, AI has not yet changed the lives of human beings, although we are trying to create artificial beings with a superior intelligence, a quality essential in most of our activities.

One reason for the slowness of our advance is that AI is a tougher domain than was thought 57 years ago. Realizing systems at least as intelligent as human beings is one of the most difficult tasks ever undertaken by humanity. I am convinced that systems much more intelligent than ourselves are possible; however, I am not convinced that human beings are intelligent enough to realize such systems alone.

Another reason is that the usual way of doing research is wrong for AI: it does not favor the directions of research that must be developed if we want to realize really intelligent artificial systems. In particular, the importance given to the number of publications is excessive: it does not encourage researchers to realize the large systems that will have the knowledge enabling them to perform efficiently in many domains. It is difficult to publish much about such systems, because we spend a lot of time on practical problems, writing programs for our computers and debugging them.
It is easier to write many papers on theoretical topics; therefore, papers in AI are too often mathematical papers. For instance, in the 105 pages of the August 2013 issue of the journal Artificial Intelligence, 85 theorems, lemmas, corollaries, and propositions were proven, none of them by an AI system! These papers are sometimes useful, but we must also experiment with programs using a large amount of knowledge. While realizing and experimenting with such programs, one often finds new ideas: we are always overlooking important aspects, and the computer shows us our weaknesses mercilessly. We need its collaboration.

However, we must not be too pessimistic. Man needs help in developing AI research. Then who can help us? The only candidates are AI systems themselves. Therefore, it is essential to bootstrap AI: AI can help us to improve AI. Bootstrapping seems paradoxical: how can a system help to build itself? In reality, one version of a system helps to build a better version of itself. Our civilization is the product of a bootstrap: we could not make the present computers if we had no computers! We should focus our efforts on the realization of AI systems whose main goal is to help AI researchers devise more successful and more ambitious AI systems. This collaboration is fruitful because, at the present time, human beings and AI systems are not good at the same activities.

This is why I have been working since 1985 on a system called CAIA (in French: Chercheur Artificiel en Intelligence Artificielle), whose goal is to create an Artificial Artificial Intelligence Researcher. I will explain later why I have chosen this direction, how I am progressing in this bootstrap, and what difficulties I have met with this approach.

If we do not try to overcome these difficulties, we will still be in almost the same state at the centenary of AI.