Monthly Archives: April 2015

Cows are thirsty

Several years ago, I was in the subway with one of my children. To keep him entertained, I asked him a question: “What do cows drink?” Knowing me, he did not answer immediately: he sniffed out a trap, the question was too easy! A passenger, whom I did not know, was surprised by this total ignorance and could not refrain from saying: “They drink milk, my little child!” This mistake is natural: the question is a classic trick. It exploits a very useful mechanism, one that often enables us to give a correct answer quickly. The question was about a beverage and about cows; as cows are associated with milk, and milk is a beverage, the answer comes immediately.

It is better to avoid systems that give incorrect results. However, if the price is giving no answer at all, that is not satisfactory either. Many AI researchers have an excellent mathematical training, so they overwhelmingly reject any possibility of error: if one accepts a contradiction, one can prove anything. Luckily, human beings survive their contradictions.

Methods that can lead to erroneous results may be useful when we do not know a perfect method, or when the perfect method is too costly in time. It may be better to have a result quickly than to wait centuries for a perfect one. For instance, current chess programs play very well, better than the world champion, but they certainly do not always play the best move. This was an issue for Edgar Poe when he asserted that Maelzel’s Chess Player was not an automaton. He was right: a human chess player was hidden inside the machine. Nevertheless, one of his arguments was incorrect: he believed it was not a machine because it sometimes lost a game. Sorry, Edgar, a machine may be imperfect. We do, however, know a perfect method for playing the best chess move: fully develop the tree of all the possible moves for each player. As the depth of the tree is finite, this can be done in finite time, and the perfect move can be read off the tree. Unfortunately, the time necessary for generating this tree goes beyond anything we can imagine.
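To make this “perfect method” concrete, here is a minimal sketch in C. Chess itself is far out of reach, so I use the much smaller game of Nim as a stand-in (this choice of game is my illustration, not part of the original argument): the program fully develops the tree of all possible moves and therefore always knows the exact value of a position. The same exhaustive approach applied to chess would also be correct, only unimaginably slow.

#include <stdio.h>

#define HEAPS 3

/* Exhaustive game-tree search for Nim (normal play: the player who
   takes the last object wins).  A position is a win for the player to
   move if at least one move leads to a position that is a loss for the
   opponent.  The tree is finite, so the search always terminates with
   the game-theoretic value -- the "perfect method" of the text,
   affordable here only because Nim is tiny compared to chess. */
static int wins(const int heaps[HEAPS])
{
    int child[HEAPS];
    int empty = 1;
    for (int i = 0; i < HEAPS; i++)
        if (heaps[i] > 0) empty = 0;
    if (empty)              /* no move left: the previous player took the last object */
        return 0;
    for (int i = 0; i < HEAPS; i++) {
        for (int take = 1; take <= heaps[i]; take++) {
            for (int j = 0; j < HEAPS; j++) child[j] = heaps[j];
            child[i] -= take;
            if (!wins(child))   /* the opponent loses from the resulting position */
                return 1;
        }
    }
    return 0;
}

int main(void)
{
    int start[HEAPS] = {3, 4, 5};
    printf("Position 3-4-5 is a %s for the player to move.\n",
           wins(start) ? "win" : "loss");
    return 0;
}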

Fast mechanisms are used by our unconscious; we call them intuition. Unfortunately, they sometimes make mistakes. They are also useful for resolving ambiguities in natural language texts. It may be dangerous to rely on the results found by our intuition: when the consequences could be serious, we must carefully check that they are correct. To do that, our conscious mechanisms verify the results given by our intuition. However, calling them is not automatic: for each result, we must decide whether we will check it.

Intuition is an essential mechanism for human beings, and especially for animals: it is the only way for them to take decisions that are not genetically programmed. It always gives results quickly; moreover, we can use it in situations where we know no other method. Like living beings, artificial beings will have to use similar methods. Nothing prevents us from giving them such capabilities. Unfortunately, this is not so easy, because these mechanisms are unconscious and therefore unknown to us; we have to discover them and experiment with them in our systems. Another difficulty is that our love of correct methods leads us to reject the bundle of heuristics that makes up intuition.

Fortunately, we now have a clearer idea of the mechanisms called ‘intuition’. For Herbert Simon: “The situation has provided a cue; this cue has given the expert access to information stored in memory, and the information provides the answer. Intuition is nothing more and nothing less than recognition.” I cannot see why AI systems could not use this mechanism.

This horror of error is dangerous: some research directions are neglected because they can lead to dubious results. It favors rigorous mathematical methods in AI, even when their applicability is limited because they are too time-consuming.

If artificial beings never made mistakes, it would mean that they are too restricted. They must not be prevented from using methods that can be very helpful when carefully applied. However, we must also give them the possibility of finding out when they are wrong: we have to add modules that verify the results. For instance, CAIA checks that all the solutions it finds satisfy all the constraints. It may still miss solutions, but we have already seen that, sometimes, great mathematicians also miss some of the solutions.

We will have made progress when our systems can believe that cows drink milk, see that this is wrong, and then find the right answer, as my embarrassed fellow traveler finally did.

When a Belgian canon meets a French mathematician

Let us consider magic squares again: I want to show that it is possible to find solutions for very large squares, using methods completely different from the improved combinatorial method CAIA used in the preceding post.

In 1703, canon Poignard, from Brussels, found an algorithm that generates solutions for magic squares when N is odd. This method was improved by the mathematician Philippe de la Hire, who presented it in 1705 in a communication to the Académie Royale des Sciences. The method is well known; it was already described in Diderot’s Encyclopédie, in the entry ‘quarrés magiques’.

To describe this method, I will take N=5. Let us choose a permutation of the numbers from 1 to N, for instance 2, 3, 5, 1, 4, and write it on the first line of the square. For the second line, we write the same permutation shifted by P positions, where P is a number less than N such that both P and P-1 are relatively prime with N. This implies that the method cannot be used for even squares: as either P or P-1 is even, N cannot be even. If one chooses P=3, the second line is 5, 1, 4, 2, 3. The process continues line after line, and we get the first auxiliary square:

2 3 5 1 4
5 1 4 2 3
4 2 3 5 1
3 5 1 4 2
1 4 2 3 5

In the same way, we consider a permutation of the integers from 0 to N-1, multiplied by N, for instance 5, 20, 0, 15, 10, which is written on the first line of another square. This new square is completed like the preceding one, with a shift Q satisfying the same conditions, which must also be different from P. With Q=4, we generate the second auxiliary square:

5 20 0 15 10
20 0 15 10 5
0 15 10 5 20
15 10 5 20 0
10 5 20 0 15

Each of the two squares contains each of its N values exactly once in every row and every column, and its diagonals reach the required sum; moreover, all these sums are the same within each square, since they always involve the same N values. Naturally, neither square uses N² different values: there are only N different numbers for the N² little squares of each square.

We create the magic square by adding these two squares, cell by cell. The result keeps all the preceding properties, and each value between 1 and N² is present exactly once:

7 23 5 16 14
25 1 19 12 8
4 17 13 10 21
18 15 6 24 2
11 9 22 3 20
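For readers who prefer code to prose, here is a short C sketch of this construction (my own illustration, using the permutations and shifts of the example above): row i of an auxiliary square is the base permutation rotated i*P, respectively i*Q, positions to the right, and the magic square is the cell-by-cell sum. The program prints the square shown above.

#include <stdio.h>

#define N 5

/* The two base permutations of the example (values 1..N and
   multiples of N) and the two shifts P and Q described above. */
static const int permA[N] = {2, 3, 5, 1, 4};
static const int permB[N] = {5, 20, 0, 15, 10};
enum { P = 3, Q = 4 };

int main(void)
{
    int a[N][N], b[N][N], magic[N][N];

    /* Row i is the base permutation rotated i*P (resp. i*Q) places
       to the right: cell (i,j) = perm[(j - i*shift) mod N]. */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            a[i][j] = permA[((j - i * P) % N + N) % N];
            b[i][j] = permB[((j - i * Q) % N + N) % N];
            magic[i][j] = a[i][j] + b[i][j];   /* the magic square is their sum */
        }

    for (int i = 0; i < N; i++) {              /* print the resulting magic square */
        for (int j = 0; j < N; j++)
            printf("%3d", magic[i][j]);
        printf("\n");
    }
    return 0;
}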

An interesting remark: when the shifts P and Q are suitably chosen and opposite sides of the square are considered as adjacent, all the wrapped diagonals also share the common sum. We then generate not just magic squares but a kind of ‘hyper-magic’ square that satisfies 4N constraints: N rows, N columns, N diagonals to the right and N diagonals to the left. The new problem is much more constrained than the original one. This explains why there is no hyper-magic square for N=3: one cannot find a suitable pair P and Q.
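The 4N sums are easy to state in code. Below is a sketch of a checker for them, applied to a square built with the same two-permutation scheme; it assumes shifts for which P, Q, P-1, P+1, Q-1, Q+1 and P-Q are all relatively prime with N, which is why it uses P=3 and Q=2 rather than the Q=4 of the example above.

#include <stdio.h>

#define N 5

/* Checks the 4N sums of an N x N square: N rows, N columns,
   N wrapped diagonals going down-right and N going down-left.
   Returns 1 if they are all equal, 0 otherwise. */
static int hyper_magic(int sq[N][N])
{
    long target = 0, row, col, diag_r, diag_l;
    for (int j = 0; j < N; j++)
        target += sq[0][j];                     /* sum of the first row as reference */

    for (int c = 0; c < N; c++) {
        row = col = diag_r = diag_l = 0;
        for (int k = 0; k < N; k++) {
            row    += sq[c][k];
            col    += sq[k][c];
            diag_r += sq[k][(c + k) % N];       /* wrapped diagonal to the right */
            diag_l += sq[k][(c - k + N) % N];   /* wrapped diagonal to the left  */
        }
        if (row != target || col != target || diag_r != target || diag_l != target)
            return 0;
    }
    return 1;
}

int main(void)
{
    int sq[N][N];
    /* Build a square with the two-permutation scheme, identity
       permutations, and shifts P = 3, Q = 2, which satisfy the
       stronger coprimality conditions stated above. */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sq[i][j] = ((j - 3 * i) % N + N) % N + 1
                     + N * (((j - 2 * i) % N + N) % N);

    printf("hyper-magic: %s\n", hyper_magic(sq) ? "yes" : "no");
    return 0;
}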

This way of generating magic squares is extraordinary: we do not need to write the constraints; the way the solution is built satisfies them automatically. It is no longer necessary to find the value of SUM, and there is no backtracking at all.

Without using a computer, I found a magic square for N=11 in 16 minutes. Furthermore, I wrote a 25-line C program that generates such squares. In only 333 seconds, it generated (but did not store) a solution for N=100,001: it had to find a value for 10,000,200,001 little squares! It is unthinkable to solve such a problem with the methods used by CAIA and by many other constraint satisfaction systems: there are 200,002 constraints, each one involving 100,002 unknowns, and the number of possible values for each unknown is greater than ten billion. By studying the formulation of this problem, canon Poignard transformed a constraint satisfaction problem into an algorithm that builds a large permutation from simpler ones.
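The 25-line program itself is not reproduced here, but a sketch along the same lines is easy to write; the shifts 2 and 3, the identity permutations, and the checking code below are my choices, not necessarily those of the original program. Each cell is computed directly from the two shifted permutations, so nothing needs to be stored: the program only accumulates the row, column, and diagonal sums to verify that they all reach the magic constant. These shifts satisfy the required conditions whenever N is odd and not a multiple of 3, which is the case for N=100,001.

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    long long n = (argc > 1) ? atoll(argv[1]) : 101;
    if (n < 5 || n % 2 == 0 || n % 3 == 0) {
        fprintf(stderr, "n must be odd, not a multiple of 3, and at least 5\n");
        return 1;
    }

    /* Magic constant: the sum every line must reach. */
    long long target = n * (n * n + 1) / 2;

    long long *colsum = calloc((size_t)n, sizeof *colsum);
    if (!colsum) return 1;
    long long diag = 0, anti = 0;
    int ok = 1;

    for (long long i = 0; i < n; i++) {
        long long rowsum = 0;
        for (long long j = 0; j < n; j++) {
            /* Cell (i,j): the first square uses the identity permutation
               1..n shifted by 2 at each row, the second the multiples of n
               shifted by 3; the magic square is their sum. */
            long long a = ((j - 2 * i) % n + n) % n;
            long long b = ((j - 3 * i) % n + n) % n;
            long long v = a + 1 + n * b;
            rowsum += v;
            colsum[j] += v;
            if (i == j) diag += v;
            if (i + j == n - 1) anti += v;
        }
        if (rowsum != target) ok = 0;
    }
    for (long long j = 0; j < n; j++)
        if (colsum[j] != target) ok = 0;
    if (diag != target || anti != target) ok = 0;

    printf("n = %lld: %s (every line should sum to %lld)\n",
           n, ok ? "all sums check out" : "NOT magic", target);
    free(colsum);
    return 0;
}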

CAIA’s problem solver is very far from such performance. This does not mean that the task is impossible for an AI system, but our systems must have a much greater capacity for working at the meta-level, studying and transforming the formulation of a problem: they must be able to do what canon Poignard did more than 300 years ago.