By postdoctoral scholar Jussi Jokinen
Emotional intelligence is the ability to observe, evaluate, predict, and explain emotions in oneself and others. Emotionally intelligent people can use this ability to regulate their own emotional processes and to understand others’ emotions when interacting with them. Interacting fluently in a social environment requires emotional intelligence. This is true for humans, but it is also true for artificial agents, such as robots and autonomous systems, that interact with people. Given the current trend towards ever closer integration of artificial interactive agents into our everyday lives, it is worrying how little research on robotics and autonomous systems deals with emotional intelligence.
“Robots are essentially decent”, says a robopsychologist in one of Asimov’s short stories when asked about the main difference between humans and robots. While the implied criticism of human (in)decency is an interesting debate in itself, it is quite clear that Asimov was wrong about at least the other half of the sentiment. Robots are not essentially decent.
When it comes to mores or ethics, robots are nothing essentially. It is up to the designers of robots either to imprint them with ethical rules, as Asimov famously did, or to design an agent that can appreciate the social context of its actions. Moral standards are highly context-sensitive: beyond depending on culture, their interpretation varies from one everyday situation to the next. The ability to grasp this dynamism is tightly intertwined with the ability to make sense of the emotion process, that is, with emotional intelligence.
The famous psychologist Alan Baddeley has noted that of the three areas of interest in psychology (knowledge, will, and emotion), knowledge seems to have received the most attention from researchers. Looking at research on machine learning and robotics, one can draw a similar conclusion. Of course, just as there is a rich research tradition on emotion in psychology, there are fields such as affective computing and affective robotics.
Nevertheless, the observation holds: emotion seems to be an afterthought or a side pursuit in the grand project of creating human-level intelligence and beyond. But you cannot just slap emotion onto an agent as an afterthought.
That is, unless robots were “essentially decent”.
The key to making decent robots, that is, to integrating emotion into artificial agents, is emotional intelligence. This requires an architecture of intelligence in which the notion of other minds is foundational. Human minds have evolved to be very good at simulating other minds, and such simulations facilitate much of our daily interaction with one another.
I can say “Good morning” to my colleagues because I know that they understand me and that their mental response to my greeting matches what I intended and predicted. In addition to such a theory of other minds, an emotionally intelligent agent needs a theory of emotion. We humans are really good at that, too.
It is enough to observe someone dropping their phone into a puddle to be able to infer their probable emotional response (surprise, distress, frustration, perhaps embarrassment). It is a known curse in science that what comes easily, indeed naturally, to nature is often extremely difficult to model scientifically. But constructing a model of emotional intelligence is an enterprise we should task ourselves with, because our future interactions with evidently very intelligent agents depend on it.
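To make the idea concrete, here is a minimal sketch of what inferring another agent’s probable emotion might look like computationally. It loosely follows the appraisal tradition in emotion research, in which emotions arise from evaluating events along dimensions such as goal congruence and unexpectedness; the dimensions, thresholds, and emotion labels below are illustrative assumptions chosen for the phone-in-a-puddle example, not a validated model.

```python
# A toy appraisal-based inference of another agent's probable emotion.
# The appraisal dimensions and thresholds are illustrative assumptions,
# loosely inspired by appraisal theories of emotion, not a validated model.

from dataclasses import dataclass

@dataclass
class Appraisal:
    goal_congruence: float  # -1 (blocks the agent's goals) .. +1 (furthers them)
    suddenness: float       # 0 (fully expected) .. 1 (completely unexpected)
    self_caused: bool       # did the agent itself cause the event?

def infer_emotions(a: Appraisal) -> list[str]:
    """Map an appraisal of an observed event to plausible emotion labels."""
    emotions = []
    if a.suddenness > 0.7:
        emotions.append("surprise")
    if a.goal_congruence < -0.5:
        emotions.append("distress")
        # A blocked goal the agent caused itself suggests embarrassment;
        # one outside its control suggests frustration.
        emotions.append("embarrassment" if a.self_caused else "frustration")
    elif a.goal_congruence > 0.5:
        emotions.append("joy")
    return emotions

# Simulating another mind: a colleague drops their phone into a puddle.
# The event is sudden, strongly goal-incongruent, and self-caused.
dropped_phone = Appraisal(goal_congruence=-0.9, suddenness=0.9, self_caused=True)
print(infer_emotions(dropped_phone))  # ['surprise', 'distress', 'embarrassment']
```

Even this toy example makes plain what a fuller model must confront: the thresholds are arbitrary, the dimensions are hand-picked, and real appraisals depend on the observed agent’s goals, history, and social context.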
Jussi Jokinen has a PhD in cognitive science. In his research, funded by the Academy of Finland, he uses computational cognitive models to predict users’ thinking and behaviour with different technologies. In addition to models of learning and motor movements, his research deals with integrating computational models of emotion into computational models of cognition.