Monthly Archives: November 2016

Everything but the essential

Artificial Intelligence and Life in 2030 is the title of a very interesting report that has just been published by a group of prominent AI researchers. One of their stated goals is that «the report can help AI researchers, as well as their institutions and funders, to set priorities».

The report considers a large variety of domains; it shows that AI could be very helpful in situations where it is difficult to find adequate staff. This is especially the case for health care: automated assistance for clinicians, image interpretation, robotics, elder care, etc. are very promising directions of research in this domain. Self-driving cars and home robots will change our day-to-day life. With intelligent tutoring systems, students will get help adapted to their needs. The report provides an overview of many activities where AI systems will be able to help us over the next fifteen years.

It seems that the authors began by exploring what future needs will be; then, for each one, they carefully examined which AI techniques could be useful. They did a wonderful job: if we make the required effort, many of these applications could be in widespread use throughout the world by 2030.

Curiously enough, the authors have completely forgotten a domain with really high needs, one they should be familiar with: helping the development of AI research itself!

It is very difficult to conduct AI research, especially if we do not merely imitate human behavior, improving on our own results thanks to powerful computers. That vision of intelligence is anthropomorphic: other forms of intelligence exist. Unfortunately, building a super-intelligent system is perhaps too complex for human intelligence with all its limits: evolution shaped us only for solving the problems of our hunter-gatherer ancestors.

Serendipity allows us to produce acceptable results in new domains, such as computer science, mathematics, management, etc. However, those results are necessarily limited: evolution lacked the time to adapt us to the specific requirements of these domains. Our geniuses are probably not as good as they think; in the kingdom of the blind, the one-eyed man is king. We are awfully handicapped by our working memory, which holds only about seven elements, and by our reflective consciousness, which ignores almost all of the events that occur in our brain. Therefore, we will not get far without AI help. If we want AI performance to keep increasing, we must call upon AI assistance.


This has been completely ignored by the report, where the word “bootstrap” does not even occur. Yet bootstrapping is a technique well suited to the resolution of very difficult problems; moreover, developing AI is an intelligent activity, and therefore it can itself benefit from AI. The authors only say: «No machines with self-sustaining long-term goals and intent have been developed, nor are they likely to be developed in the near future.» This is almost true of the current state of AI, but it is imprudent to predict what will happen over the next 15 years: progress is very slow at the beginning of a bootstrap; then, suddenly, things move quickly. Naturally, if this kind of research is given no priority, the prediction becomes a self-fulfilling prophecy: we will be in the same situation in 15 years. Nevertheless, if this kind of research receives even a small part of the funds rightly allocated to the applications described in the report, the results will likely surprise us.
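The bootstrap dynamic mentioned above, very slow at first and then suddenly fast, can be pictured with a toy model. This sketch is my own illustration, not anything from the report: capability grows in proportion to itself, as when each generation of tools helps build the next.

```python
# Toy bootstrap model (illustrative only, not from the report):
# each step's improvement is proportional to current capability,
# so early gains are tiny and late gains are large.
def bootstrap_progress(initial=0.01, rate=0.5, steps=20):
    """Return the capability level after each self-improvement step."""
    capability = initial
    history = [capability]
    for _ in range(steps):
        capability += rate * capability  # the system helps improve itself
        history.append(capability)
    return history

levels = bootstrap_progress()
# The first steps barely move; the last steps jump by orders of magnitude.
```

Under these (arbitrary) parameters, the gain in the final step is thousands of times larger than the gain in the first one, which is why judging a bootstrap by its early progress is misleading.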


The importance of bootstrapping AI has been recognized since the beginning of the field: in 1959, Allen Newell and Herbert Simon considered the possibility of having their General Problem Solver find the differences needed by GPS itself. The bootstrap has two major drawbacks: it is slow, and it is hard to achieve. However, many achievements of our civilization come from a bootstrap: we could not design our present computers if computers did not already exist.


Few AI researchers are presently interested in this approach. I suspect that many of them even think it is impossible, while most of the others believe that, given the current state of our science, it is much too difficult to embark on this path. Stephen Hawking is really a genius: although he is not an AI researcher, he saw the strength of such an approach. It is unfortunate that he drew the wrong conclusion from it, warning about the dangers of AI.

What is in this report is excellent; I am only critical of what is missing, which is essential for the future of AI. In the end, the report lacks coherence: the authors rightly say that AI will significantly help solve many important problems in many domains. However, they do not consider using AI for the most important task of all: advancing AI itself, which is central to the success of every other task considered in this report!