We have just celebrated the centenary of two achievements of the aviator Adolphe Pégoud: the first parachute jump from a plane, and the invention of aerobatics. Curiously enough, the two are strongly connected, and it is interesting to see how the first achievement led to the second.
Parachute jumps had already been made from balloons, but never from a plane. Pégoud thought a parachute could be useful for escaping a disabled plane. The other pilots thought he was crazy to attempt such a dangerous and pointless experiment. Moreover, as most planes had room only for the pilot, the aircraft would be lost once he jumped. Nobody, Pégoud included, gave much thought to the fate of the plane: everybody assumed it would crash as soon as the pilot had jumped. Pégoud had therefore chosen an old, expendable plane. As he came down under his parachute, Pégoud watched his plane, and he was very surprised by its behavior. Instead of crashing at once, it performed many curious maneuvers, for instance flying upside down or looping the loop, and none of this ended in disaster: it simply carried on with another figure. Pégoud immediately understood the significance of this experiment: if the plane could perform these figures without a pilot, it could perform them with one. So, after strapping himself firmly into his plane, he imitated it: he was the first human to fly upside down, and a little later he was looping the loop.
Pégoud realized that a plane could “discover” new maneuvers when it was left to itself, and he was able to take advantage of this capacity. We AI researchers must imitate Pégoud as well, leaving our systems free to make choices, to fly around, in situations where we have given them no specific directives. Then we analyze what they have done, and we may find new ideas to include in our future systems.
I had such an experience myself, and it gave me the idea of the importance of a research direction that I have been trying to develop ever since I started working in AI. In 1960, I began working on a thesis in AI. My goal was to build a program with some of the abilities of a mathematician: it received the formulation of a theory (axioms and derivation rules), and it had to find and prove theorems in this theory. Although it could try to prove a conjecture that I gave it, it usually started its work without knowing any theorem of the theory. Like Pégoud’s plane, it had no master; it was free to choose which deductions it would try, and I had no idea which directions it would take.
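To make this concrete, here is a minimal sketch, in Python, of such a goal-free theorem finder. It is emphatically not my 1960 program: the tuple encoding of formulas, the modus_ponens rule, and the explore function are all illustrative assumptions. The point is only that the search has no imposed goal; the system applies its derivation rules to whatever it already knows and keeps whatever comes out.

```python
from itertools import product

# Formulas are nested tuples: ('->', 'p', 'q') encodes p -> q.
def Imp(a, b):
    return ('->', a, b)

def modus_ponens(x, y):
    """Derivation rule: from x and x -> z, conclude z."""
    if isinstance(y, tuple) and y[0] == '->' and y[1] == x:
        return y[2]
    return None

def explore(axioms, rules, max_rounds=100):
    """Goal-free exploration: apply every rule to every ordered pair of
    known theorems, round after round, keeping any new conclusion."""
    known, seen = list(axioms), set(axioms)
    for _ in range(max_rounds):
        new = []
        for rule in rules:
            for a, b in product(known, repeat=2):
                t = rule(a, b)
                if t is not None and t not in seen:
                    seen.add(t)
                    new.append(t)
        if not new:          # closure reached: nothing left to discover
            break
        known.extend(new)
    return known

# No conjecture is given in advance; the system picks its own deductions.
print(explore([Imp('p', 'q'), Imp('q', 'r'), 'p'], [modus_ponens]))
# ... derives 'q', then 'r'
```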
One day, as I was analyzing the results of the second version of this system, I was surprised to see that it had found a proof of a theorem different from the one I knew, which was given in the logic manuals. It turned out that, to prove theorem TA, it did not use another theorem TB, although TB was essential to the usual proof; the new proof was shorter and simpler. I tried to understand the reasons behind this success, and I discovered that the system, left to itself, had behaved as if it had proven and used a meta-theorem (a theorem about the theory) that made it possible to find the proof without using theorem TB: the system bypassed it. After this experiment, like Pégoud, I took over the controls and built a third version of my system, which systematically imitated what I had observed: the system was now able to prove meta-theorems in addition to theorems. It could study the formulation of the theory itself, instead of only applying the derivation rules to the theorems already found. This new version performed much better: it proved more theorems, its proofs were more elegant, and it found them more quickly.
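Continuing the sketch above gives the flavor of this change; the hypothetical-syllogism rule below is my illustration, not the actual meta-theorem the system found. A meta-level rule inspects the form of two implications and jumps straight to the conclusion, never proving the intermediate formula, just as my system bypassed TB.

```python
def hypothetical_syllogism(x, y):
    """A meta-theorem used as a derivation rule: from a -> b and b -> c,
    conclude a -> c directly, without ever proving b itself."""
    if (isinstance(x, tuple) and x[0] == '->' and
            isinstance(y, tuple) and y[0] == '->' and x[2] == y[1]):
        return Imp(x[1], y[2])
    return None

# Without 'p' among the axioms, modus ponens alone derives nothing here,
# but the meta-level rule still reaches p -> r, skipping the middle step:
theorems = explore([Imp('p', 'q'), Imp('q', 'r')],
                   [modus_ponens, hypothetical_syllogism])
assert Imp('p', 'r') in theorems
```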
Since that experiment “à la Pégoud”, I have been convinced that an AI system must be able to work at the meta-level if we want it to be both effective and general. In doing so, it can fly over its problems and discover shortcuts or new methods: it is easier to find the exit of a labyrinth when one is above it.
Such discoveries are possible only when we leave our systems free to choose what they will do. If we want to bootstrap AI, we need the help of new ideas drawn from observing their behavior while we parachute down, having left them on their own.