The future of AI is the Good Old Fashioned Artificial Intelligence

AI researchers have various goals: many are mainly interested in studying particular aspects of intelligence, and want to build rigorous systems, often based on sophisticated mathematical analysis. One characteristic of this approach is to divide AI into many sub-domains such as belief revision, collective decision making, temporal reasoning, planning, and so on. Other researchers want to model human behavior, while I belong to a third category, which only wants to create systems that solve as many problems as possible in the most efficient way.

At the beginning of AI, the supporters of the last approach were in the majority but, over the years, they have become such a minority that many no longer see the interest of this approach, which they judge unrealistic, or even unscientific. Some present-day AI researchers look with condescension at those who are still working on these ideas, and speak of Good Old Fashioned Artificial Intelligence. Very funny: the acronym, GOFAI, is almost Goofy! However, one may only be arrogant after obtaining excellent results, which is certainly not the case here: AI has not yet changed the life of human beings. It is better to accept that there are several approaches, and that all of them must be developed as long as none has made a significant breakthrough.

In my approach, we experiment with very large systems that use a lot of knowledge. It is very difficult to foresee what results such a system will obtain: its behavior is unpredictable. Usually, one gets unpleasant surprises; we do not get the excellent results that we expected. Therefore, we have to understand why it goes wrong, and correct the initial knowledge. During this analysis, we may find new concepts that enable us to improve our methods. In the end, almost nothing of the first system survives this succession of modifications. For this reason, a paper where a researcher presents what he wants to do at the start of his research is not very interesting: the final system will be too different from the initial one. The value of the first inspiration is that it is necessary for starting this process of successive improvements; only the last version is really useful. We have to start in a promising direction, and describe only what we have built at the end.

This method has many drawbacks from the point of view of the scientifically correct approach. First, we cannot publish many papers: we must wait until we have a system with interesting results, which may take several years. Moreover, it is almost impossible to describe a very large system precisely enough for another researcher to reproduce it. Naturally, it is always possible to take the program and check that it gets the same results, but doing so does not really tell us how it works. To be convinced of the interest of a system, a scientist wants to build it again. Unfortunately, that requires a lot of time, since these systems use a lot of knowledge. Moreover, they are so complex that it is impossible to give a complete description: too many minor details matter for the success of the new system. For instance, CAIA includes more than 10,000 rules, and a twenty-page paper may be needed to explain only fifty of them. I could remake Laurière’s ALICE because I could question him about important choices that he did not have room to include in the 200 pages of his thesis.

We can understand that many researchers are reluctant to adopt an approach that lacks beautiful mathematical rigor. Unfortunately, it is not evident that mathematics is appropriate for AI: rigor is often too costly in computer time. If a system using a perfectly rigorous method can solve a problem, that is the best solution; for instance, that is the case for chess endings with at most seven pieces. However, this does not seem to be always possible: theoretical results prove that, for some problems, the computer time necessary for even the fastest solution increases dramatically with the size of the problem.
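To make this growth concrete, here is a minimal sketch in Python, unrelated to CAIA or to chess tablebases and chosen only as an illustration: an exhaustive solver for the subset-sum problem, where the number of candidate subsets, and hence the worst-case running time, doubles each time a single value is added to the input.

```python
from itertools import combinations

def subset_sum_exhaustive(values, target):
    """Perfectly rigorous but slow: examine every subset of `values`.

    With n values there are 2**n subsets, so the worst-case work doubles
    whenever one value is added (n = 20 -> ~10**6 subsets, n = 40 -> ~10**12).
    """
    for size in range(len(values) + 1):
        for subset in combinations(values, size):
            if sum(subset) == target:
                return subset      # guaranteed to be found if one exists
    return None                    # guaranteed: no subset sums to target

print(subset_sum_exhaustive([3, 9, 8, 4, 5, 7], 15))   # (8, 7)
```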

For the most complex problems, we must trade perfection against speed, and build systems that solve many, but not all, problems in a reasonable time. It seems that such systems have to use a huge amount of knowledge. As they are very large, it is impossible to be sure that they never make mistakes. However, it is better to have a system that correctly solves many problems and makes a few mistakes than a system that fails on most problems because they would require too much time. After all, human beings often make mistakes; this does not prevent us from sometimes making good decisions.
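As a toy illustration of this trade-off (again a generic sketch, not the method used in CAIA): a greedy heuristic for the same subset-sum problem answers almost instantly and succeeds on many instances, but it sometimes reports failure where the exhaustive search above would eventually find a solution.

```python
def subset_sum_greedy(values, target):
    """Fast heuristic: repeatedly take the largest value that still fits.

    It sorts once instead of enumerating 2**n subsets, so it solves many
    instances quickly, but it offers no guarantee: a returned None may be wrong.
    """
    chosen, remaining = [], target
    for v in sorted(values, reverse=True):
        if v <= remaining:
            chosen.append(v)
            remaining -= v
            if remaining == 0:
                return chosen
    return None   # "no solution found" -- possibly a mistake

print(subset_sum_greedy([3, 9, 8, 4, 5, 7], 17))   # [9, 8]: found instantly
print(subset_sum_greedy([3, 9, 8, 4, 5, 7], 15))   # None, although (8, 7) exists
```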

4 thoughts on “The future of AI is the Good Old Fashioned Artificial Intelligence”

  1. Your approach is very interesting, and publishing your CAIA system (as free software) would be quite logical. As you said, from a scientific point of view, understanding the day-to-day evolution of CAIA is relevant. You kindly gave me a few snapshots of CAIA privately, but that is not enough to understand its evolution in detail.
