Since emotional competence is an important factor in human communication, it can also improve communication between humans and robots or other machines. In everyday life, many services formerly provided by humans are now fulfilled by machines; prominent examples are ticket machines and call centers. Although they use different types of interaction, in both cases it would be appreciated if the machine could recognize the emotions of its user and react adequately. However, emotion recognition and adequate reaction are only part of emotional competence. If we imagine even more sophisticated machines, such as service robots that distribute meals in hospitals or support elderly persons, such machines should also be able to handle their own (artificial) emotions, since this could create a sense of relatedness and increase acceptance by their human users.
This directly leads to two further aspects of emotional competence: emotion representation and emotion regulation, i.e., the adequate handling of one's own emotions in different situations. Emotional competence thus comprises four aspects: emotion recognition, emotion representation, emotion regulation, and emotional behavior. To show emotional competence in human-machine cooperation, all four aspects have to be considered, as realized by the robot head MEXI presented in this report.
This report introduces implicit communication via the recognition and display of emotions as a new level in human-machine communication. The approach is based on a special fuzzy model of emotion. Emotion recognition exploits facial expressions as visual cues (VISBER) and the modulation of the human voice as prosodic cues (PROSBER). The VISBER module uses a six-stage procedure, beginning with image pre-processing and ending with fuzzy emotion classification. The PROSBER module consists of two very similar subsystems: the first is used to train a fuzzy model; the second uses the trained model to recognize emotions. The training process is based on a so-called fuzzy-grid approach. To interact with a human user in a holistic way, the machine also needs to display its own emotions.
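The idea of fuzzy emotion classification can be illustrated with a minimal sketch: extracted features are fuzzified via membership functions, and simple rules map the fuzzified values to emotion scores. The feature names, membership functions, and rules below are purely illustrative assumptions, not the actual VISBER model:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def classify(mouth_curvature, eye_openness):
    """Classify an emotion from two (hypothetical) normalized facial features."""
    # Fuzzification: map crisp feature values to membership degrees.
    smile = tri(mouth_curvature, 0.2, 0.7, 1.2)    # upward mouth curvature
    frown = tri(mouth_curvature, -1.2, -0.7, -0.2) # downward mouth curvature
    wide = tri(eye_openness, 0.5, 0.9, 1.3)        # widely opened eyes
    # Rule evaluation: one score per emotion, aggregated by max later.
    scores = {
        "happiness": smile,
        "sadness": frown,
        "surprise": min(wide, 1.0 - smile),  # wide eyes AND not smiling
    }
    # Defuzzification here is just the winning label.
    return max(scores, key=scores.get), scores
```

A real system would fuzzify many more features (extracted in the earlier pipeline stages) and use a full rule base, but the fuzzify / evaluate rules / pick winner structure stays the same.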
MEXI, the robot head developed at C-LAB, was used for this purpose. In this report we present how all four aspects of emotional competence are integrated into MEXI's architecture. MEXI is able to recognize emotions from facial expressions and the prosody of natural speech, and represents its internal state, made up of emotions and drives, by corresponding facial expressions, head movements and speech utterances. Internal and external regulation mechanisms for its emotions and drives are realized by the emotion engine. Furthermore, this internal state and its perceptions, including the emotions recognized in its human counterpart, are used by MEXI to control its actions. Thereby MEXI can react adequately in emotional communication.
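The interplay of external stimulation and internal regulation of such an internal state can be sketched as a simple homeostatic model: perceptions push emotion values up or down, while each value relaxes back toward a neutral set point over time. The emotion names, decay rate, and clamping below are assumptions for illustration, not MEXI's actual emotion engine:

```python
class EmotionEngine:
    """Toy internal state of emotions with external and internal regulation."""

    def __init__(self, decay=0.1):
        self.decay = decay  # fraction lost per time step (assumed rate)
        self.state = {"joy": 0.0, "anger": 0.0, "fear": 0.0}

    def stimulate(self, emotion, intensity):
        # External regulation: a perception (e.g. the recognized emotion of
        # the human partner) shifts an emotion value, clamped to [-1, 1].
        s = self.state[emotion] + intensity
        self.state[emotion] = max(-1.0, min(1.0, s))

    def step(self):
        # Internal regulation: every emotion decays toward neutral (0).
        for e in self.state:
            self.state[e] *= (1.0 - self.decay)

    def dominant(self):
        # The strongest emotion drives expression and action selection.
        return max(self.state, key=lambda e: abs(self.state[e]))
```

In such a design, the displayed facial expression and the action controller would read `dominant()` each cycle, so a stimulated emotion influences behavior for a while and then fades unless reinforced.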
|Natalia Esau (Universität Paderborn)
|Vol. 9 (2010) No. 02