
The future of AI is Good Old Fashioned Artificial Intelligence

AI researchers have various goals. Many are mainly interested in studying particular aspects of intelligence, and want to build rigorous systems, often based on a sophisticated mathematical analysis. One characteristic of this approach is to divide AI into many sub-domains such as belief revision, collective decision making, temporal reasoning, planning, and so on. Other researchers want to model human behavior, while I belong to a third category, which only wants to create systems that solve as many problems as possible in the most efficient way.

At the beginning of AI, the supporters of the last approach were in the majority but, with the passing years, they have become such a minority that many do not understand the interest of this approach, which they judge unrealistic, and even unscientific. Some present AI researchers look with condescension at those who are still working on these ideas, and speak of Good Old Fashioned Artificial Intelligence. Very funny: the acronym, GOFAI, is almost Goofy! However, one may only be arrogant when one has obtained excellent results, which is certainly not the case here: AI has not yet changed the life of human beings. It is better to think that there are several approaches, and that all of them must be developed as long as none of them has made a significant breakthrough.

In my approach, we experiment with very large systems, using a lot of knowledge. It is very difficult to foresee what results such a system will obtain: its behavior is unpredictable. Usually, we have unpleasant surprises: we do not at all get the excellent results that we expected. Therefore, we have to understand why it goes wrong, and to correct the initial knowledge. During this analysis, we may find new concepts that will enable us to improve our methods. Finally, almost nothing of the first system is still present after this succession of modifications. For this reason, a paper where a researcher presents what he wants to do at the start of his research is not very interesting: the final system will be too different from the initial one. The value of the first inspiration is only that it is necessary for starting this succession of improving systems. Only the last version is really useful. We have to start in a promising direction, and to describe only what we have built at the end.

This method has many drawbacks from the point of view of the scientifically correct approach. First, we cannot publish many papers: we must wait until we have a system with interesting results, and that may require several years. Moreover, it is almost impossible to describe a very large system precisely enough for another researcher to reproduce it. Naturally, it is always possible to take the program and check that it gets the same results but, in that case, one does not really understand how it works. To be convinced of the interest of a system, a scientist wants to build it again. Unfortunately, that requires a lot of time, since these systems use a lot of knowledge. Moreover, they are so complex that it is impossible to give a complete description: too many minor details are important for the success of the new system. For instance, CAIA includes more than 10,000 rules, and a twenty-page paper may be necessary to explain only fifty of them. I could remake Laurière’s ALICE only because I could question him about important choices which he had no room to include in the 200 pages of his thesis.

We can understand that many researchers are reluctant to follow an approach that lacks beautiful mathematical rigor. Unfortunately, it is not evident that mathematics is always appropriate for AI: rigor is often too costly in computer time. If a system using a perfectly rigorous method can solve a problem, that is the best solution; for instance, that is the case for Chess endings with at most seven pieces. However, this does not seem always possible: theoretical results prove that, for some problems, the computer time necessary for even the fastest solution increases dramatically with the size of the problem.
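
To give an idea of what "increases dramatically" means, here is a small illustration; the problem and the figures are only a toy, taken neither from CAIA nor from any real solver. A brute-force method that examines every assignment of n binary variables must consider 2^n candidates, so each additional variable doubles the work.

    # Toy illustration of combinatorial explosion: exhaustive, perfectly
    # rigorous enumeration of all assignments of n binary variables.
    from itertools import product

    def count_solutions(n):
        """Count, by exhaustive enumeration, the assignments of n binary
        variables whose sum is even (a deliberately trivial constraint):
        the loop visits all 2**n candidates."""
        return sum(1 for bits in product((0, 1), repeat=n) if sum(bits) % 2 == 0)

    for n in (4, 10, 20):
        print(n, 2 ** n, count_solutions(n))
    # 4  ->        16 candidates
    # 10 ->     1,024 candidates
    # 20 -> 1,048,576 candidates; at n = 40 the same loop would need about
    #       a trillion steps, which is why exhaustive rigor quickly becomes hopeless.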

For the most complex problems, we must trade perfection for speed, and build systems that solve many, but not all, problems in a reasonable time. It seems that such systems have to use a huge amount of knowledge. As they are very large, it is impossible to be sure that they never make mistakes. However, it is better to have a system that correctly solves many problems and makes a few mistakes than a system that fails to solve most problems because they would require too much time. After all, human beings often make mistakes; this does not prevent us from sometimes making good decisions.

AI researchers are too clever and work too much

Usually, being clever and working a lot are qualities that are highly desirable for a researcher. This is true in most domains, but not in AI.

Indeed, an AI researcher naturally wants results that are as good as possible. Unfortunately, in the present state of AI, it is difficult to build systems that know how to modify the formulation of the problems they must solve, or to find new knowledge in order to solve them efficiently. It is easier for the AI researcher to do this work himself, so that his system gets better results from the improvements he has made to the formulation of the problem, or from the specific knowledge that he has found with great effort. To evaluate the quality of an AI system, we must not only take its performance into account, but mainly the share of artificial versus human intelligence behind these excellent results.

Let us consider chess programs. They have reached a remarkable level: 29 of them have an Elo rating higher than 2872, the Elo of the world champion, Magnus Carlsen. The authors of these programs have done extraordinary work, and I admire them very much. However, I admire their programs much less: they are unable to play any game other than chess. Intelligence is not a Dirac delta function, extremely successful in a very small domain and incapable of any other activity. These programs do not even know that they are playing chess: the chess rules are programmed in, and the system cannot examine them to understand why it has played a weak move. Clever humans have written the combinatorial programs that generated all the winning positions with at most seven pieces on the board. Other clever humans have written the evaluation functions that assess the interest of a position. They have also written the combinatorial program that develops a huge tree, used to choose the move that will be played during the game. Good players, often grandmasters, have worked out the databases containing the best opening sequences of moves.
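
To make this division of labour concrete, here is a minimal sketch of that architecture on a deliberately tiny game; it is of course neither CAIA nor a real chess engine, and the game, the names and the search depth are only illustrative. The two ingredients are exactly those described above: a hand-written evaluation function embodying human knowledge, and a combinatorial minimax search that develops the tree and chooses a move.

    # Toy game: two players alternately take 1, 2 or 3 tokens from a pile;
    # whoever takes the last token wins. Player +1 maximizes, player -1 minimizes.

    def legal_moves(tokens):
        # The game rules, fixed in the program: remove 1, 2 or 3 tokens.
        return [m for m in (1, 2, 3) if m <= tokens]

    def evaluate(tokens, to_move):
        # Hand-written evaluation function, the analogue of a chess evaluation:
        # it encodes knowledge found by humans (leaving the opponent a multiple
        # of 4 tokens wins), not knowledge discovered by the program itself.
        # Scores are from the point of view of the maximizing player (+1).
        if tokens == 0:
            return 1.0 if to_move == -1 else -1.0   # the player who just moved took the last token
        return to_move * (1.0 if tokens % 4 != 0 else -1.0)

    def minimax(tokens, to_move, depth):
        # The combinatorial part: develop the game tree down to a fixed depth
        # and back up the evaluations.
        if tokens == 0 or depth == 0:
            return evaluate(tokens, to_move)
        values = [minimax(tokens - m, -to_move, depth - 1) for m in legal_moves(tokens)]
        return max(values) if to_move == 1 else min(values)

    def best_move(tokens, to_move, depth=6):
        # Choose the move whose backed-up value is best for the player to move.
        moves = legal_moves(tokens)
        value = lambda m: minimax(tokens - m, -to_move, depth - 1)
        return max(moves, key=value) if to_move == 1 else min(moves, key=value)

    print(best_move(10, 1))   # prints 2: taking 2 tokens leaves 8, a multiple of 4

Such a program can play this toy game well but, just like a chess engine, it does not know what game it is playing: the rules, the evaluation and the search were all supplied by a human.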

If the goal is only to have a high-performance program, they are right: I would try the same approach if that were my goal. However, if the goal is to create an intelligent system, the system must be very different: it has to think about the quality of its moves using the game rules, to find methods for selecting the best moves, to write programs that take its analyses into account, and so on. Even if its performance were not as good as that of the present programs, this system would be interesting because it would be more general and, for me, more intelligent.

Our systems have to become more autonomous, to find new methods rather than only execute methods that we have discovered. If so, they have the potential to improve themselves. For the future development of AI, it is better to devise systems which are perhaps not as good as the systems that we build entirely ourselves, but which are able to find by themselves some of the methods used in our best systems. In that way, systems more efficient than our present realizations will perhaps emerge with the passing years.

We AI researchers behave like parents who do their children’s homework. The children get excellent marks, but the essential goal is missed: they must become able to solve problems on their own.

Realizing artificial systems more intelligent than ourselves is possible

In 2006, we commemorated the 50th anniversary of AI, but I was not very happy: was it really the time to celebrate? This is not evident when we compare the quiet development of our domain with the prodigious changes made during the same period in many other areas, for instance the huge growth in the power and applications of computers. The applications and the performances of AI have not improved at the same speed; most of its successes came from the increase in computer speed, which enables us to use combinatorial methods successfully in domains where this would have been hopeless 50 years ago. Why has progress been so slow?

This does not mean that nothing has been achieved; on the contrary, many interesting results have been found. Moreover, some useful systems have been implemented, for instance several successful game-playing programs for Chess, Checkers, Backgammon, Scrabble, etc.; a recent spectacular success was obtained for Jeopardy! For many games, the best programs are at least as good as, and sometimes better than, the best human players. Many interesting theoretical results have also been proven, and useful methods have been discovered. However, AI has not yet changed the lives of human beings, although we are trying to create artificial beings with a superior intelligence, a quality essential in most of our activities.

One reason for the slowness of our advance is that AI is a tougher domain than was thought 57 years ago. Realizing systems at least as intelligent as human beings is one of the most difficult tasks ever undertaken by humanity. I am convinced that systems much more intelligent than ourselves are possible; however, I am not convinced that human beings are intelligent enough to realize such systems alone.

Another reason is that the usual way of doing research is wrong for AI: it does not favor the directions of research that must be developed if we want to realize really intelligent artificial systems. In particular, the importance given to the number of publications is excessive: it does not encourage researchers to realize the large systems that will have the knowledge enabling them to perform efficiently in many domains. It is difficult to produce many publications on such systems because we spend a lot of time on practical problems, writing programs for our computers and debugging them.

It is easier to write a lot of papers on theoretical subjects; therefore, papers in AI are too often mathematical papers. For instance, in the 105 pages of the August 2013 issue of the journal Artificial Intelligence, 85 theorems, lemmas, corollaries, and propositions were proven, none of them by an AI system! These papers are sometimes useful, but we must also do experiments with programs using a large amount of knowledge. While realizing and experimenting with such programs, one often finds new ideas: we are always overlooking important aspects, and the computer mercilessly exposes our weaknesses. We need its collaboration.

However, we must not be too pessimistic. Man needs help to develop AI research. Then, who can help us? The only candidates are AI systems themselves. Therefore, it is essential to bootstrap AI: AI can help us to improve AI. Bootstrapping seems paradoxical: how can a system help to build itself? In reality, one version of a system helps to build a better version of itself. Our civilization is the product of a bootstrap: we could not make the present computers if we had no computers! We should focus our efforts on the realization of AI systems whose main goal is to help AI researchers devise more successful and more ambitious AI systems. This collaboration is a fruitful one since, at the present time, human beings and AI systems are not good at the same activities.

Therefore, I have been working since 1985 on a system called CAIA (in French: Chercheur Artificiel en Intelligence Artificielle), whose goal is to create an Artificial Artificial Intelligence Researcher. I will explain later why I have chosen this direction, how I am progressing in this bootstrap, and what difficulties I have met with this approach.

If we do not try to overcome these difficulties, we will still be in almost the same state at the centenary of AI.

Who am I?

My name is Jacques Pitrat. In 1960, I began a thesis in Artificial Intelligence (AI) and, since that date, I have never stopped building AI systems. From 1967 to 2000, I was a researcher at the Centre National de la Recherche Scientifique (CNRS). After my retirement, I carried on devoting all of my time to my research.

I taught AI at the Université Pierre et Marie Curie (Paris 6) from 1967 to 1998. I was the advisor of 70 theses, all of them in the AI domain.

I have written six books about AI:
Un programme de démonstration de théorèmes. Monographies d’informatique de l’AFCET. Dunod. 1970.
Textes, ordinateurs et compréhension. Eyrolles. 1985. Translated into English as: An artificial approach to understanding natural language. North Oxford Academic (Great Britain) and GP Publishing (USA). 1988.
Métaconnaissance, Futur de l’Intelligence Artificielle. Hermès. 1990.
Penser autrement l’informatique. Hermès. 1993.
De la machine à l’intelligence. Hermès. 1995.
Artificial Beings. The conscience of a conscious machine. ISTE and Wiley. 2009.

I am a fellow of the Association for the Advancement of Artificial Intelligence (AAAI) and of the European Coordinating Committee for Artificial Intelligence (ECCAI). I received the IPMU special award “Fifty years of Artificial Intelligence”.