Monthly Archives: October 2015

Poor Artificial Intelligence: little known and little liked

I have the sad honor of working in AI, a domain that is frowned upon by many remarkable people. We are dealing with two main categories of opponents. I have already spoken of the first one, which includes Stephen Hawking: its members think that AI is potentially very dangerous, because the emergence of super-intelligent artificial beings would lead to the extinction of the human species. For the others, on the contrary, it is impossible to create artificial beings more intelligent than ourselves. A recent interview shows that Gérard Berry belongs to this second category.

Gérard Berry is a computer scientist who has done outstanding work, deservedly recognized by the scientific community: he is a member of the Académie des Sciences, a professor at the Collège de France, and he has received the CNRS gold medal, France’s most prestigious scientific distinction; he is only the second computer scientist it has honored. For me, AI is not a part of computer science, but the two are very close: the computer is an essential tool for implementing our ideas. Therefore, when a top-level computer scientist criticizes AI, this must be taken very seriously.

AI occupies only a small part of this interview, but he does not mince his words in those few sentences. Here are two excerpts:

“I was never disappointed by Artificial Intelligence because I did not even believe for a second in Artificial Intelligence. Never.”

“Fundamentally, man and the computer are the most complete opposites that exist. Man is slow, not particularly rigorous, and highly intuitive. The computer is super-fast, very rigorous, and an absolute ass-hole.”

First, I will consider three points mentioned in this interview on which I agree with him:

Intuition is certainly an essential characteristic of human intelligence. In a recent post, I looked at how AI systems could also have this capacity.

Chess programs are a splendid achievement, but their intelligence is mainly the intelligence of their developers. They exploit the combinatorial aspects of the game, overwhelming for human beings but manageable by fast computers. It would have been much more interesting to speak of another outstanding IBM achievement: Watson, an intuitive system, won against the best human players at the game Jeopardy!

For the time being, AI byproducts are the most useful results of our discipline: they have led to the discovery of important concepts in computer science.

Let us talk about our disagreements. Up to now, nobody has proven that man has some intellectual ability that no artificial being could ever have. As long as we are in this situation, we must assume that all our activities can be mechanized. We are then in a win-win situation: if it turns out that we are wrong, proving it would itself be significant progress, a kind of reductio ad absurdum; if, on the contrary, we are right, we will create very helpful artificial beings that are not restrained by our intellectual limits.

The main argument in this interview is that computers are absolutely stupid. This takes me back to 1960! At that time, many people already wanted to show the impossibility of intelligent artificial beings. Their argument was: computers only do what they are told to do. Naturally, this is true, but it proves nothing: the problem is to write programs, and to gather data, so that the computer behaves intelligently. One can develop programs that do more than run as fast as possible on their data. A program can analyze the description of a problem, then write and execute an efficient program well adapted to that particular problem and its data. Moreover, in a bootstrap, the existing system works with its author on improving the system itself. Computer designers know the value of bootstrapping: without computers, it would have been impossible to design the current computers! Hawking’s extraordinary intuition saw the power of bootstrapping AI; this is why he was afraid of its future.
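To make this more concrete, here is a minimal sketch, unrelated to any system mentioned above, of a program that writes and then executes another program specialized for one particular piece of data. The scenario and names are purely illustrative:

```python
# Toy sketch: a program that writes and executes another program
# specialized for one problem instance (here, evaluating a fixed polynomial).

def specialize_polynomial(coeffs):
    """Generate and load a function evaluating the polynomial with these coefficients."""
    # Build the Horner form a0 + x*(a1 + x*(a2 + ...)) as source text.
    expr = str(coeffs[-1])
    for c in reversed(coeffs[:-1]):
        expr = f"({c} + x * {expr})"
    source = f"def poly(x):\n    return {expr}\n"
    namespace = {}
    exec(source, namespace)        # the generated program is compiled and loaded
    return namespace["poly"], source

poly, source = specialize_polynomial([1, 0, 3])   # represents 1 + 3*x**2
print(source)                                     # the program that was written
print(poly(2))                                    # 13, computed by the generated program
```

The example is trivial, but it shows the principle: the program that finally runs is not a program that anyone wrote by hand.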

While nobody has ever proven that man can do something no computer could ever do, computers can already do many things that no human being could ever do. For instance, our reflexive consciousness is very limited: most of the processes in our brain are unconscious. On the contrary, it is possible to build a computer system that can observe any of its processes whenever it wants to; moreover, it can have access to all of its data. As consciousness is an essential part of intelligence, this will have tremendous consequences. Unfortunately, we are not yet able to make full use of this capacity, because man’s consciousness is too limited to be a useful model.
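As a purely illustrative sketch of such self-observation (again, not a description of any particular system), a program can install a trace hook and record every step of its own execution, together with the data it is handling at that moment:

```python
import sys

# Toy sketch: a program observing its own execution. A trace hook records
# every call and every line executed in new frames, with the local data.

journal = []

def observer(frame, event, arg):
    if event in ("call", "line"):
        journal.append((event, frame.f_code.co_name,
                        frame.f_lineno, dict(frame.f_locals)))
    return observer               # keep tracing inside this frame

def fibonacci(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

sys.settrace(observer)            # the program starts watching itself
fibonacci(5)
sys.settrace(None)                # and stops

for event, name, lineno, local_vars in journal[:5]:
    print(event, name, lineno, local_vars)
```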

AI is probably the only scientific domain with so many staunch opponents, even though most of them know little about it. This is not surprising: man has always rejected what would remove him from the center of the world. We all know the difficulties encountered by those who said that the Earth revolved around the Sun, or that apes were our cousins. Naturally, the idea that artificial beings could be more intelligent than ourselves is particularly unpleasant for the most intelligent among us. Less intelligent people are used to living with beings more intelligent than themselves, but geniuses are not. Of course, some of them are strongly against AI.

I believe in the possible existence of artificial beings much more intelligent than ourselves. On the other hand, I am not sure that we are intelligent enough to achieve this goal. We need help, and for this reason I have been trying for many years to bootstrap AI: as of now, my colleague CAIA, an Artificial Artificial Intelligence Researcher, gives me substantial assistance with its own realization. AI is probably the most difficult task ever undertaken by mankind: it is no wonder that progress is very slow.

55 years of Artificial Intelligence

It was in October 1960 that I started to work on my thesis. One year earlier, I had been appointed deputy head of a computer department that had existed for almost ten years in a military laboratory. It had impressive documentation on computers, and it was a very supportive environment: that same year, the head of the department started working on automatic translation from Russian into French. I was thrilled by the first AI achievements, such as Newell and Simon’s Logic Theorist, Samuel’s machine learning on the game of checkers, Gelernter’s geometry-theorem proving machine, and so on.

For my thesis, I wanted to implement a general theorem prover that received as data any of the many axiomatizations of propositional logic. It had to discover interesting theorems in each theory, without any information about the theorems already known in that theory.
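The following sketch only illustrates, in modern Python, the kind of search such a prover performs; it is not the 1960 program. It applies condensed detachment (a substitution followed by modus ponens) to the three axioms of Łukasiewicz’s implication-negation axiomatization, and keeps every new formula it derives as a theorem:

```python
# Toy theorem discoverer for propositional logic. Formulas are nested tuples:
# a variable is a string, ('->', a, b) an implication, ('not', a) a negation.

def rename(f, suffix):
    """Rename variables so that two theorems never share variables."""
    if isinstance(f, str):
        return f + suffix
    return (f[0],) + tuple(rename(x, suffix) for x in f[1:])

def walk(f, subst):
    while isinstance(f, str) and f in subst:
        f = subst[f]
    return f

def occurs(v, f, subst):
    f = walk(f, subst)
    if isinstance(f, str):
        return v == f
    return any(occurs(v, x, subst) for x in f[1:])

def unify(a, b, subst):
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if isinstance(a, str):
        return None if occurs(a, b, subst) else {**subst, a: b}
    if isinstance(b, str):
        return None if occurs(b, a, subst) else {**subst, b: a}
    if a[0] != b[0] or len(a) != len(b):
        return None
    for x, y in zip(a[1:], b[1:]):
        subst = unify(x, y, subst)
        if subst is None:
            return None
    return subst

def substitute(f, subst):
    f = walk(f, subst)
    if isinstance(f, str):
        return f
    return (f[0],) + tuple(substitute(x, subst) for x in f[1:])

def canonical(f, mapping=None):
    """Rename variables to v0, v1, ... so that variants of a theorem coincide."""
    if mapping is None:
        mapping = {}
    if isinstance(f, str):
        return mapping.setdefault(f, "v%d" % len(mapping))
    return (f[0],) + tuple(canonical(x, mapping) for x in f[1:])

def detach(major, minor):
    """Condensed detachment: from A -> B and C, with A unifiable with C, derive B."""
    major, minor = rename(major, "1"), rename(minor, "2")
    if not (isinstance(major, tuple) and major[0] == "->"):
        return None
    subst = unify(major[1], minor, {})
    return None if subst is None else canonical(substitute(major[2], subst))

# Łukasiewicz's three axioms for the implication-negation propositional calculus.
AXIOMS = [
    ("->", ("->", "p", "q"), ("->", ("->", "q", "r"), ("->", "p", "r"))),
    ("->", ("->", ("not", "p"), "p"), "p"),
    ("->", "p", ("->", ("not", "p"), "q")),
]

def explore(axioms, limit=20):
    """Derive theorems by repeatedly detaching pairs of known theorems."""
    theorems = [canonical(a) for a in axioms]
    seen = set(theorems)
    progress = True
    while progress and len(theorems) < limit:
        progress = False
        for major in list(theorems):
            for minor in list(theorems):
                new = detach(major, minor)
                if new is not None and new not in seen and len(theorems) < limit:
                    seen.add(new)
                    theorems.append(new)
                    progress = True
    return theorems

for t in explore(AXIOMS):
    print(t)
```

A real prover must of course do much more than blind enumeration, in particular select which of the derived formulas are interesting.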

The first difficulty to overcome was finding a thesis advisor: at that time, Artificial Intelligence, and even computer science, was not taught at the University of Paris. Luckily, Professor Jean Ville was interested in computers, although he worked mainly on probability, statistics, and economics. He was kind enough to let me register at the University for this thesis.

Looking at the results of the initial version of my program, I was surprised to see that it had discovered proofs different from those given in logic books. These original proofs showed me that it could be interesting to use meta-theorems, that is, new ways of proving theorems. Therefore, I gave my program the ability to prove meta-theorems, and the modified program found more results, as well as proofs that were easier to understand. The results of this program were not bad; for a particular axiomatization, it proved results about which Łukasiewicz said: “One must be very expert in performing such proofs.” Results found for one of these axiomatizations can be found on page 134 of Laurière’s book (page 125 in the French version).

I was greatly impressed by these results: since then, I have always tried to build systems that have the ability to work at the meta-level. This is a challenging task, since their results are compared with those of systems where the meta-level work has been done by the author, and not by the system itself. For the time being, the performance of a system working at the meta-level is not better than that of other programs, but human intelligence plays a smaller part: such systems have a greater degree of autonomy. The primacy of man over animals comes mainly from our capacity to work at the meta-level; consciousness is one example of this superiority. I cannot believe that it is possible to create a superintelligence without this ability.

Moreover, this ability allows us to bootstrap AI: existing AI systems will help us to implement more powerful ones. I believe that AI is the most difficult science, perhaps far too complex for human intelligence. The ideal would be to have an Artificial Artificial Intelligence Researcher; this is CAIA’s long-term goal.

I have been developing CAIA for 30 years. At the moment, it has 13,000 rules that transform themselves into 500,000 lines of C; I have not written a single line of the present programs, and many rules have also been written by CAIA. I keep replacing the expertises I created myself with meta-expertises that generate those expertises. The bootstrap will be completed when CAIA includes a set of meta-expertises that can generate all of its expertises and meta-expertises, including themselves.
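As a toy illustration of the general idea of declarative rules being compiled into C, here is an invented rule format and a tiny translator; CAIA’s actual formalism is, of course, very different and much richer:

```python
# Toy sketch: compiling declarative rules into C source.
# The rule format and translator are invented for this example only.

RULES = [
    ("temperature > 100", "alarm = 1"),
    ("pressure > 5 && alarm == 1", "valve_open = 1"),
]

def rules_to_c(rules):
    lines = ["void apply_rules(int temperature, int pressure,",
             "                 int *alarm, int *valve_open) {"]
    for condition, action in rules:
        # Output variables are passed by pointer, so rewrite their accesses.
        for out in ("alarm", "valve_open"):
            condition = condition.replace(out, "*" + out)
            action = action.replace(out, "*" + out)
        lines.append(f"    if ({condition}) {{ {action}; }}")
    lines.append("}")
    return "\n".join(lines)

print(rules_to_c(RULES))
```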

I am trying to create more and more declarative, and more and more general, knowledge: I prefer to give CAIA the meta-knowledge for creating an expertise E rather than writing expertise E myself. This is difficult but, when I succeed, CAIA’s version is usually better than my initial one.

There is still a very long way to go before this bootstrap is completed. I do not have 55 more years to complete it, but I hope that other researchers will pursue this tremendously difficult task.