
Not developing an advanced artificial intelligence could spell the end of the human race

We AI researchers have often met with strong opposition from scientists working in other fields. Some believe that our task is impossible, while others think that reaching our goals will have nefarious consequences. Stephen Hawking is clearly in the second class of opponents: he warned that developing an advanced artificial intelligence “could spell the end of the human race.”

I entirely agree with him on two points. First, human intelligence is rather low when measured on an absolute scale. As he said: “Humans are limited by slow biological evolution.” This low level of human intelligence is essential for understanding the development of AI, which is at once the easiest and the most difficult science:

The easiest, because building artificial systems with an intelligence as limited as ours is not very difficult.

The most difficult, because it is with this same limited intelligence that we must accomplish the task.

For this reason, I have been working for thirty years on bootstrapping AI: what has already been built helps us to improve the present system. This is the second point on which I entirely agree with Hawking: such a system “would take off on its own, and re-design itself at an ever increasing rate.” However, danger is not inevitable.
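To make the bootstrapping idea concrete, here is a minimal sketch of such a self-improvement loop. Everything in it is hypothetical: the ToySystem class, its skill score, and the propose_improvement step are illustrative inventions, not a description of any actual system. It only shows the loop structure in which the current version of a system produces, and vets, its successor.

```python
# A minimal, hypothetical sketch of a bootstrapping loop.
# A "system" is reduced to a single skill score; "proposing an
# improvement" is a toy step. The point is only the loop structure:
# the current version helps build and verify the next one.

import random

class ToySystem:
    def __init__(self, skill: float):
        self.skill = skill

    def propose_improvement(self) -> "ToySystem":
        # The quality of the proposal depends on the current skill:
        # a better system makes better proposals, hence the take-off.
        return ToySystem(self.skill + random.uniform(-0.1, 0.2) * self.skill)

def bootstrap(system: ToySystem, generations: int) -> ToySystem:
    for _ in range(generations):
        candidate = system.propose_improvement()
        if candidate.skill > system.skill:  # keep only verified improvements
            system = candidate
    return system

print(bootstrap(ToySystem(1.0), 50).skill)
```

The verification step is where explanations matter: a new version is adopted only if the current one can check that it really is an improvement.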

In the past, several first-class scientists were afraid of the possible consequences of new technologies. For instance:

Dionysius Lardner, Professor of Astronomy at University College, London, wrote in 1830: “Rail travel at high speeds is not possible because passengers, unable to breathe, would die of asphyxia.”

In 1836, the astronomer François Arago described the dangers of railway tunnels, worrying about the effects of sudden temperature changes on passengers and about the possibility of a boiler explosion.

The astronomer Simon Newcomb did not believe in flying machines. In 1901, he explained that, if a man could ever fly, he could not stop: “Once he slackens his speed, down he begins to fall. Once he stops, he falls as a dead mass.”

Astronomers seem to have a gift for finding dangers everywhere; a pity they do not look in their own backyard. In a preceding post, Beware of the aliens, I explained that sending a message with Pioneer 10 had been very dangerous: either it was useless, or it could spell the end of the human race. AI, on the other hand, may be dangerous, but it is certainly not useless.

Hawking is right when he insists on the possibility of danger: we must be careful. He is also right when he draws our attention to the difficulty of supervising systems more intelligent than ourselves. However, we can manage to use such a system: a well-conceived AI system explains the reasons for its findings. Therefore, we can understand and evaluate results that we would have been unable to discover ourselves.

It is precisely because our intelligence is limited that we need AI systems more intelligent than ourselves. The world grows ever more complex, and even the most intelligent humans are overwhelmed by tasks such as leading a nation or conducting research in AI. Once more, I completely agree with Hawking when he wrote: “In a world that is in chaos politically, socially and environmentally, how can the human race sustain another 100 years?” As many tasks exceed the capacities of the cleverest humans, the answer may be: with the help of advanced AI.

It is surprising to see how little, and how poorly, we react to problems essential for the future of humanity, such as global warming and galloping population growth. If we cannot overcome these problems, the human race will disappear; unfortunately, we do not seem to take the necessary drastic decisions at the right time. A very clever AI system would be welcome.

In Artificial Beings, I insist on the importance of giving AI systems a conscience. It is impossible to foresee everything, but one can limit the risks. Zero risk does not exist; still, accepting a very low risk is worthwhile if it helps us avoid a likely and serious one.

It is unfortunate that papers raising the specter of the end of the world may hinder the very research needed to avoid it.