By Ruth Aylett
I was quite impressed by today’s talk at the Human-Computer Interaction Institute at Carnegie Mellon. Professor Aylett presented her research projects from the School of Mathematical and Computer Sciences at Heriot-Watt University in the UK. Both the topic and her style of presentation made the talk intriguing for me, perhaps because of the novelty effect that agents (and robots) have on people during first-time interactions. Below is a summary of the talk and my thoughts on it:
Professor Aylett started with broad introductory material for those unfamiliar with the concepts. To define empathy, she presented the following statement:
“An observer reacting emotionally because he perceives that another is experiencing or about to experience an emotion. This can result in empathic relations between a user and an agent.”
As she presented, there are several different classifications of empathy according to different psychologists. In the first categorization, empathy is either parallel or reactive. Parallel empathy means that different parties experience the same emotion as you, whereas reactive empathy corresponds to having different emotions; the latter points to the agent trying to modify the user’s emotion. Another view treats empathy as either cognitive or affective. Cognitive empathy is related to theory of mind: understanding or knowing the emotional state of another. Affective empathy is the ‘everyday’ version of empathy and means experiencing an emotion as a result of the emotion of another. The last distinction she elaborated on is what mediates empathy: it can be developed via the situation (for example, starting to cry upon seeing someone in a poor situation) or via emotional expression (for example, feeling warm-hearted when someone hugs you).
If we consider empathic agents in terms of outcomes, we can categorize them as cognitive or affective, echoing one of the classifications above. A cognitive outcome relates to the situation the character is put in (story, drama) and the ability to model affect as part of action selection. An affective outcome is associated with contextually appropriate expressive behavior such as face, gesture, posture, and voice.
We can also classify empathic agents according to whose affective state changes, which yields two types. In the first type, the agent changes the affective state of the user: the user experiences empathic emotions and the agent has a persuasive impact, stimulating the user’s empathy (relational empathy). The second type, known as the first-order empathic agent, develops empathy itself towards users and other characters; here, the change is observed in the agent’s affective state. As the professor noted, this type of agent is quite difficult to design and develop.
After this broad introduction, the professor moved on to the theories behind the design of empathic agents. Again there are two classes: 1) theories of affect and 2) theories of personality, which drive actions and expressive behaviors, respectively. She mentioned the theories they mostly use: Ekman’s basic emotions (a classification of emotions such as happiness, anger, sadness, and disgust), Russell’s classification (the circumplex model of affect, Russell, 1980), and Cognitive Appraisal Theory (for generating appropriate behavior). In addition, they make use of the Facial Action Coding System (FACS). Its codes define the facial muscle groups that perform a visible expressive action. Using 44 of these predefined codes, they coded expressions from video recordings of interactions.
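To make the FACS idea concrete, here is a minimal sketch of how coded action units (AUs) might be mapped to coarse expression labels. The AU numbers follow the published FACS conventions (e.g. AU6 = cheek raiser, AU12 = lip corner puller), but the rule table and function names are illustrative, not the actual coding scheme used in the professor’s projects.

```python
# Toy mapping from active FACS action units (AUs) to expression labels.
# The AU combinations below are commonly cited prototypes (e.g. AU6+AU12
# for a Duchenne smile); this is a simplified illustration only.

EXPRESSION_RULES = {
    frozenset({6, 12}): "happiness (Duchenne smile)",
    frozenset({1, 4, 15}): "sadness",
    frozenset({4, 5, 7, 23}): "anger",
}

def classify(active_aus: set) -> str:
    """Return the first expression whose required AUs are all active."""
    for aus, label in EXPRESSION_RULES.items():
        if aus <= active_aus:  # required AUs are a subset of active ones
            return label
    return "unknown"

print(classify({6, 12, 25}))  # happiness (Duchenne smile)
print(classify({1, 4, 15}))   # sadness
```

In a real system the AUs would come from frame-by-frame coding of the video recordings, and intensities would matter, but the subset check captures the basic idea of composing expressions from muscle-group codes.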
Later, she moved on to one of the sample projects they have worked on, EMOTE. EMOTE is an empathic robot tutor that can respond empathically to the emotional states of a child and form a socio-emotional bond with the child. They used the Russell quadrants for representing emotions. They also used context for deciding on emotions, because gestures alone cannot represent behavior. For example, smiling comes in very different versions, such as a hopeless smile, an angry smile, and even, in some cultures, an ashamed smile. Therefore, smiling does not always mean that a person is happy.
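The Russell-quadrant representation can be sketched as a simple sign-based mapping from a (valence, arousal) reading to one of four quadrants. The function name, normalization range, and quadrant labels here are my own assumptions for illustration, not the EMOTE implementation:

```python
# Sketch: mapping a (valence, arousal) estimate onto the four quadrants
# of Russell's circumplex model. Labels are illustrative shorthands.

def circumplex_quadrant(valence: float, arousal: float) -> str:
    """Assumes both inputs are normalized to [-1, 1]."""
    if valence >= 0 and arousal >= 0:
        return "excited/happy"   # positive valence, high arousal
    if valence < 0 and arousal >= 0:
        return "angry/tense"     # negative valence, high arousal
    if valence < 0:
        return "sad/bored"       # negative valence, low arousal
    return "calm/relaxed"        # positive valence, low arousal

print(circumplex_quadrant(0.7, 0.5))   # excited/happy
print(circumplex_quadrant(-0.6, 0.8))  # angry/tense
```

As the talk emphasized, such a coordinate reading would need to be combined with context (what just happened in the tutoring session) before the robot commits to an empathic response.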
They conducted experiments to evaluate the performance of their agent. Their experimental approach was to invoke the empathy of the participants. There were 1000 participants in the study, 520 of whom were UK children and 422 of whom were German children. They stimulated the participants’ empathy, making them angry and sad, through a scenario in which a victim was bullied. Their aim in the study was to change people’s attitude towards bullying; it was a kind of tutorial. They ran a longitudinal study lasting 6 weeks in order to defeat the novelty effect of the agents, the amazement people show when they first encounter robots. They divided the sample into two groups: 455 participants in the intervention group and 487 in the control group. The study increased the probability of pupils escaping victimization, and decreased the probability of victimization, only among the UK children.
Since I do not have deep knowledge of agents and human-computer interaction, I do not have much to say about the novelties and limitations of their approach. However, one thing attracted my attention most during the professor’s talk. As she claimed, human-to-human interaction is quite rich, and we are not yet able to achieve that with agents. Hence, from the perspective of HCI scientists, we human beings appear remarkably rich and complicated, and we should appreciate our differences, respect them, and value each other, rather than judging one another without enough empathy.