Her

A recent film, Her, raises an interesting question: what is an individual in Artificial Cognition? For living beings, an individual is one cognitive system, alone in one part of the world: its body. There are a few exceptions, such as conjoined twins and the fetus, but usually this is clear: an individual is not scattered in pieces at different places, and there is only one cognitive system inside an individual, which has a well-defined boundary.

The hero of Her uses his smartphone to communicate with an artificial being. They speak together, and it can observe the hero through the camera. This artificial being is not an Operating System, or OS, as it is called in the film. The OS is essential for managing the programs running in a computer, but it does not act upon the details of the various applications. In fact, the artificial being is a program, with its data, which can be considered as an individual. This program is not necessarily entirely present in the smartphone: some subroutines may run in a distant computer, just as Siri runs on Apple servers. We no longer have the continuity of a living being, but that does not matter: an artificial being works perfectly even though its different parts are linked by a phone network. Being entirely in the phone could be interesting: the system would work even when the network fails, and it would be easier to prevent unwanted intrusions. However, the “ears”, “eyes”, and “mouth” of our artificial being are in the smartphone, while most of its “brain” is probably in a remote computer, where other artificial beings are simultaneously present.

For artificial beings, parts of an individual may be at different places, and parts of several individuals may be at the same place. The application which allows one to communicate with an artificial being is not used only by the hero; other people are using it simultaneously, 8,316 of them according to a dialog. An Operating System knows very well how to manage many executions of the same program, each one with a different user. For each user, the program uses the data linked to this user, which in this case must contain:

* A memory of the preceding dialogs, and of the situations seen by the camera.

* The model of its interlocutor: is he emotional, selfish, cultivated, etc.? Does he like to speak of sports, literature, politics, and so on? To improve this model, it may ask questions, for instance: what is your relationship with your mother?

* The image that the artificial being wants to give its interlocutor. Is it in love with him, does it despise him, does it hate him, etc.? Is it intelligent, funny, rational, and so on? These characteristics are invented according to the goal of the artificial being: it behaves as if it really had them. For instance, it has to choose a first name, and it takes Samantha. This choice is not insignificant: a man does not consider a woman called Mary and a woman called Gwendoline in the same way.

With these three sets of data, it is possible for an artificial being to have a coherent conversation. Naturally, it must also have general goals, such as making the interlocutor happy (these goals are not clearly indicated in the film), and methods for achieving them.
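The three per-user data sets could be sketched as follows; this is only an illustration of the idea, and every name here (UserSession, remember, and so on) is my own invention, not something from the film or from an actual implementation:

```python
# A minimal sketch of the three private data sets kept for each user.
# All class and field names are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class UserSession:
    """Private data the artificial being keeps for one interlocutor."""
    # 1. Memory of the preceding dialogs and of the situations seen by the camera
    dialog_history: list = field(default_factory=list)
    observed_scenes: list = field(default_factory=list)
    # 2. Model of the interlocutor: traits inferred from the conversation
    interlocutor_model: dict = field(default_factory=dict)
    # 3. The image the artificial being chooses to present to this user
    persona: dict = field(default_factory=lambda: {"name": "Samantha"})

    def remember(self, utterance):
        self.dialog_history.append(utterance)

    def update_model(self, trait, value):
        self.interlocutor_model[trait] = value

# The OS runs one execution of the program per user, each with its own data:
sessions = {user_id: UserSession() for user_id in ("theodore", "user_2")}
sessions["theodore"].remember("What is your relationship with your mother?")
sessions["theodore"].update_model("emotional", True)
```

Because each session object is separate, the 8,316 simultaneous users never share any of these three data sets, which is what lets the same program adapt its behavior to each interlocutor.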

 

If I had to build such a system, I believe that it would be difficult, but possible. One tough task is interpreting the images taken by the camera: it is really difficult to perceive a man's feelings from his picture, but this is not essential in the film. I would write a program P which would be perceptive about people and would find satisfactory answers, using the private data sets of the current user; in that way, its behavior could be adapted to everybody. I do not see why it would be useful for this program to know that other clones are running simultaneously.

I would also write a super-program M, the master, which would observe what each clone of the program is doing. M would use this information to learn how to improve the efficiency of program P; this could lead M to modify this program. It could also monitor each clone, possibly stopping it, or modifying its goals if something goes wrong. Nevertheless, it is not necessary that program P knows that program M exists. To summarize, there are many individuals, which are clones of P, and one individual which is the master, M.
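The P/M architecture described above can be sketched in a few lines; the names CloneP and MasterM are hypothetical, and the dialog engine and learning step are of course reduced to placeholders:

```python
# A sketch of the proposed design: clones of P each serve one user and are
# unaware of each other and of M; the master M observes all clones and can
# intervene. All names here are illustrative inventions.

class CloneP:
    """One clone of program P, serving a single user; it never references M."""
    def __init__(self, user_id):
        self.user_id = user_id
        self.running = True
        self.log = []

    def answer(self, question):
        # Placeholder for the real dialog engine using the user's private data.
        reply = f"reply to {question!r}"
        self.log.append((question, reply))
        return reply

class MasterM:
    """The master: observes every clone, learns from them, and can intervene."""
    def __init__(self):
        self.clones = {}

    def spawn(self, user_id):
        clone = CloneP(user_id)
        self.clones[user_id] = clone
        return clone

    def observe(self):
        # Gather information from every clone's activity; the learning step
        # that would improve P from these observations is omitted here.
        return {uid: len(c.log) for uid, c in self.clones.items()}

    def stop(self, user_id):
        # Intervene when something goes wrong with one clone.
        self.clones[user_id].running = False

m = MasterM()
p1 = m.spawn("theodore")
p1.answer("Do you love anyone else?")
```

Note that CloneP holds no reference to MasterM: the observation goes only one way, which matches the point that program P need not know that M exists.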

This is not the way the system that appears in the film is built: the hero is devastated when he learns that Samantha has many other interlocutors, and is in love with 641 of them. There is a confusion between P and M: the hero, who was speaking with a clone, Samantha, is now talking with the master M, which should be the only one to know that many other clones of P are running. Naturally, it is always possible to program anything, but this is not the natural way to implement such a system. Samantha could honestly tell the hero that it loves him, and no one else.

Unfortunately, that would remove the event that revives the film. The director preferred to add some drama to this excellent film rather than to leave the situation clear and satisfactory for the hero, as well as for AI researchers!
