
Should we disclose everything? Part I

When a human or an artificial being has solved a problem, must it disclose everything it has done? Obviously not: describing everything would be worthless, since too much information drowns the reader. However, is it sufficient to indicate only what is needed to check that a solution is correct? For my part, I do not think so; we will examine several aspects of this problem in the following posts. Here, we will only show that it is possible and useful to explain why the steps taken to solve a problem were chosen, even those steps that are not necessary to justify the solution.

Usually, people say nothing about this kind of knowledge, often because the solver himself does not know it: these mechanisms are largely unconscious. A chess player knows which moves he has considered, but he rarely knows why he considered them. A pity, because de Groot's work has shown that the strength of the strongest chess players comes from their ability to consider the best moves. In one experiment, world champion Alekhine had to think aloud while finding the best move in a complex position; he considered only four moves, among them the two best ones. Unfortunately, he did not say why he had thought of considering them, probably because he did not know: looking at the board automatically gave him the moves to consider. By contrast, a good player considered nine moves for the same position, and the best one was not among them.

Likewise, mathematics teachers rarely indicate why the steps of a proof were chosen, or why approaches that seemed promising do not lead to the solution. This is unfortunate, because it creates misconceptions about the way mathematicians actually work: it is normal to wander, and the best mathematicians did not find the results that made them famous on the first try. Some students do not even attempt a problem because they believe they are too incompetent: indeed, nobody can take only the right steps, in the way a teacher presents the proof of a theorem. What matters is to try, to fail, and to try again.

I was most impressed by the description, given by Martin Gardner, of the stages that led four maths students to the resolution of the squaring the square problem. The goal was to tile a square with smaller squares whose sizes are all different. Once such a square is found, it is easy to check that the solution is correct; but the check says nothing about how it was found. In fact, the students began by finding a rectangle tiled by squares of all-different sizes. They enjoyed it a lot, and they built a rectangular box containing these wooden squares. One of them, who lived with his parents, left it on his desk, and his mother was not allowed to clean that desk. Naturally, she could not resist cleaning it and, in doing so, the puzzle fell to the ground. Her disobedience being obvious, she tried to put the squares back into the box, and she succeeded, hoping to escape her son's reproaches. However, her son discovered that his mother had found a solution different from the preceding one. Immediately, he called his friends, and they tried to understand why this puzzle had several solutions. That gave them the idea of an analogy with Kirchhoff's laws: each square behaves like a wire of unit resistance in an electrical circuit, the side of the square corresponding to the current flowing through it. They realized that such a circuit must satisfy certain properties, and that led them to the discovery of the first squared square.
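
The Kirchhoff analogy can be made concrete. In the classical correspondence found by these students (Brooks, Smith, Stone, and Tutte), each maximal horizontal segment of the dissection is an electrical node, and each square is a wire of unit resistance joining the nodes of its top and bottom edges; the current through the wire equals the side of the square, so Kirchhoff's current law must hold at every internal node. Below is a minimal Python sketch of that check, written for this post: the coordinates are those of Moroń's well-known 33×32 rectangle of nine unequal squares, chosen only as a convenient example, not necessarily the rectangle that was in the students' box.

```python
# A minimal sketch (mine, not the students' method): verify a squared
# rectangle and the Kirchhoff analogy described above. Each square is a
# unit resistor carrying a current equal to its side, flowing from its
# top edge down to its bottom edge.

from collections import defaultdict

WIDTH, HEIGHT = 33, 32

# (x, y, side), with y growing downward: Moroń's 33x32 dissection
# into nine squares of all-different sizes.
SQUARES = [
    (0, 0, 18), (18, 0, 15), (0, 18, 14), (18, 15, 7), (25, 15, 8),
    (14, 18, 4), (14, 22, 10), (24, 22, 1), (24, 23, 9),
]

def is_exact_tiling(squares, width, height):
    """True if the squares cover the rectangle exactly once."""
    covered = set()
    for x, y, s in squares:
        if x < 0 or y < 0 or x + s > width or y + s > height:
            return False                   # sticks out of the rectangle
        for i in range(x, x + s):
            for j in range(y, y + s):
                if (i, j) in covered:
                    return False           # two squares overlap
                covered.add((i, j))
    return len(covered) == width * height  # no gaps remain

def satisfies_current_law(squares, height):
    """Check Kirchhoff's current law at every internal horizontal level.

    In this dissection each internal level is a single maximal segment,
    so levels and electrical nodes coincide."""
    inflow, outflow = defaultdict(int), defaultdict(int)
    for _, y, s in squares:
        outflow[y] += s      # current s leaves the node at the top edge
        inflow[y + s] += s   # and arrives at the node at the bottom edge
    return all(inflow[y] == outflow[y]
               for y in set(inflow) | set(outflow) if 0 < y < height)

assert is_exact_tiling(SQUARES, WIDTH, HEIGHT)
assert satisfies_current_law(SQUARES, HEIGHT)
print("Exact tiling; Kirchhoff's current law holds at every internal node.")
```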

Knowing why some rules have or have not been executed would also be useful for CAIA: this knowledge could improve the meta-knowledge it uses for selecting, prohibiting, or ordering possible actions. Since CAIA can completely observe any of its actions, it knows, when explaining a solution, the reasons for eliminating a result, for choosing it, or for the priority allocated to each action. Unfortunately, even if it can know how its meta-knowledge was used, improving that meta-knowledge is not so easy: one must define meta-meta-knowledge, which is always very difficult to create and to use.

When an AI system uses an algorithm that tries everything possible, there is no need to explain the choice of its steps. However, CAIA dynamically determines whether it will keep or eliminate a result, and whether it will use it immediately or save it for difficult times. It could produce interesting explanations of these decisions, such as: it did not keep this new constraint because it contained too many unknowns, or because it was an inequality and it already had too many of them; or it delayed the application of this rule because its execution is time-consuming; and so on.
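
To make this concrete, here is a hypothetical sketch of such self-explaining bookkeeping. CAIA's actual meta-knowledge is not published in this form, so the class names, thresholds, and messages below are invented for illustration; only the kinds of reasons recorded (too many unknowns, enough inequalities already kept) come from the examples above.

```python
# A hypothetical sketch, not CAIA's code: names and thresholds are
# invented; only the kinds of recorded reasons come from the post.

from dataclasses import dataclass, field

@dataclass
class Constraint:
    expr: str            # textual form of the derived constraint
    unknowns: int        # number of unknowns it contains
    inequality: bool     # inequalities are kept only in limited numbers

@dataclass
class ResultFilter:
    max_unknowns: int = 4          # illustrative threshold
    max_inequalities: int = 10     # illustrative threshold
    kept_inequalities: int = 0
    trace: list = field(default_factory=list)  # reasons, for explanations

    def consider(self, c: Constraint) -> bool:
        """Decide to keep or drop a new result, recording why."""
        if c.unknowns > self.max_unknowns:
            self.trace.append(f"dropped {c.expr}: {c.unknowns} unknowns")
            return False
        if c.inequality and self.kept_inequalities >= self.max_inequalities:
            self.trace.append(f"dropped {c.expr}: enough inequalities already")
            return False
        if c.inequality:
            self.kept_inequalities += 1
        self.trace.append(f"kept {c.expr}")
        return True

f = ResultFilter()
f.consider(Constraint("a+b+c+d+e < 12", unknowns=5, inequality=True))
f.consider(Constraint("x = 2*y", unknowns=2, inequality=False))
print("\n".join(f.trace))   # the trace doubles as an explanation
```

The point of the sketch is the trace: the same records that justify each decision to a human reader are also the raw material a system would need in order to revise its own thresholds, and that revision is exactly where the meta-meta-level difficulty of the previous paragraph appears.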

For an explanation addressed to a human being, it is possible to include some indications of the reasons why CAIA chose to try an action; the difficulty is to model that human so as to give him only what he needs to know. On the other hand, such explanations, given by CAIA to itself, would allow it to improve its own meta-knowledge. Both changes could be made; unfortunately, as they are difficult to implement, so far they are only on my to-do list.