Superintelligence

Superintelligence is the title of a book written by Nick Bostrom. This book is enjoyable to read, and the author is well aware of what has been done in AI. For instance, he considers bootstrapping as one of the possible ways of creating a superintelligence, and he mentions the crossover point where the intelligence of the improving system becomes greater than human intelligence. Moreover, quoting Yudkowsky, he notes that our intelligence occupies a very narrow range, in spite of “the human tendency to think of ‘village idiot’ and ‘Einstein’ as the extreme ends of the intelligence scale, instead of nearly indistinguishable points on the scale of minds in general.”

I completely agree with the author on the possibility of a superintelligence. However, I do not see the future as he does; in fact, I do not see the future at all. While reading this book, I had the feeling that the author had never built an AI system; from his biography, it seems that this is true. He presents the future of AI in a very loose and undefined way: he gives no specific information about how to achieve it. I am well placed to know that it is not enough to say that one must bootstrap AI; one must also indicate how to do it effectively, and that is not easy. I have plenty of ideas for advancing CAIA, but I have no clue about their final consequences.

One cannot fully appreciate what a particular AI system will become, especially if one has never built such a system. In its final stage, it bears little resemblance to its initial version. We must be humble about predicting the future consequences of a good idea.

To better understand the difficulty of predicting the future of a field of research, just look at computer science. I began working in this domain in 1958; at that time, people thought that computers were excellent for computing trajectories or processing payrolls (incidentally, my first program solved an integral equation). In those years, I never read a paper foreseeing the omnipresence of computers in today's world, where everybody uses several of them, even in phones, cars, and washing machines, and where something like the web could exist. In the same way, we have no idea of AI's future applications: we are not smart enough, and AI is still in a rudimentary state. This does not allow us to form a reasonable idea of superintelligence.

Many are concerned that machines will seize power over man, because we assume that they will behave as we do. However, our intelligence was built for our hunter-gatherer ancestors by evolution, which promotes procreation and the search for food. Seizing power is an important step toward these goals, so it is no wonder that it is widely displayed among living beings, including aliens, if they exist, since they too would be shaped by evolution. The goals of super-intelligent beings are not necessarily the same as ours; therefore, the search for power is not always essential to their activity. Perhaps there will be artificial super-mathematicians, quite happy to develop complicated theories; to do that, there is no need to rule the world or to enslave us.

In his book, Bostrom considers various methods for avoiding domination by artificial beings, such as boxing methods that confine the system, or tripwires that detect signs of dangerous activity. However, if we confine the system, it will never become super-intelligent; and if it is super-intelligent, it will detect and bypass our tripwires. In the March 2015 issue of the AI Journal, Ernest Davis analyses this book and suggests that the issue will be settled by “an off switch that it cannot block.” This is a variant of the idea of unplugging the machine from the mains; science fiction has long shown that this does not work. Even for AI specialists, it is difficult to imagine the immense superiority of a superintelligence!

Let’s go back to the general scale of minds, where animals are well below us, even rather intelligent ones such as dogs. If we are able to build a super-intelligent system, the difference between ourselves and this system could be as large as the difference between a dog and ourselves. To understand our position vis-à-vis super-intelligent systems, we can consider the position of a dog vis-à-vis us. Returning to the effectiveness of tripwires: could a dog invent an off switch that we could not bypass? How could a dog imagine many of our activities, such as writing, mathematics, astronomy, or AI? No doubt we too have no idea of many essential activities, just as the ancient Greeks had no idea of computer science. Long into the future, super-intelligent systems will possibly discover new applications; we will be able to use them, because it is much easier to use a discovery than to make one.

Who knows, perhaps someday our descendants will live harmoniously with super-intelligent systems, and they will be very satisfied with this. Seeing how the world is ruled today, we can hope for something better for future generations. This may happen if super-intelligent beings do not want to control the world, but to help us. As we have said before, this is possible if superintelligence is not created by a kind of evolutionary process based on the pursuit of power and control.

Anyway, we are very far away from this distant future. I am not sure that our intelligence is sufficient to allow us to create a superintelligence; however, if we succeed, the result will certainly be very different from all our predictions. Only when we have made further progress will we be able to predict the consequences of super-intelligent beings accurately. In the meantime, we are playing at frightening ourselves.