Research

Our research investigates social human-computer interaction. To this end, our interdisciplinary team applies and combines research standards from computer science and psychology. We draw on advanced psychological theories of cognition and affect as well as methods and techniques from artificial intelligence. Based on these, we build computational models and applications used in various projects.

Our research focuses on the modeling of affect and on interactive social agents.


Modeling of Affect

We believe that a reasonable model of affect should be universal in its approach. Our vision therefore integrates long-term personality, medium-term mood, and short-term emotions to provide a comprehensive simulation of affect.
One major challenge is a consistent simulation of the interconnections between these affective phenomena in order to achieve human-like affect management. Beyond this, it must be investigated how affect emerges in humans in general.
Following this philosophy, we go beyond the mere use of superficial indicators such as facial expression recognition and postulate an underlying model of affect that takes situational and relationship-related factors into account while integrating a cognitive approach that appraises communicative acts to elicit affect. This not only yields a more complete and universally adaptable model of affect, but also increases the believability and naturalness of simulations based on it.

The real-time computational model ALMA is explicitly designed to serve as a major control mechanism for virtual characters (much as affect does for human beings), influencing behavior on various layers, e.g. the body layer (gestures and posture) and the deliberative layer (cognitive processes such as decision making and the selection of communicative strategies).
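The interplay of the three time scales described above can be sketched in a few lines. The following is a minimal, hypothetical illustration of the layered idea only: a fixed personality baseline, a slowly drifting mood, and short-lived emotions elicited by appraised events that temporarily pull the mood. All names and constants are illustrative and are not taken from ALMA's actual implementation.

```python
# Sketch of a three-layer affect simulation: static personality,
# slowly drifting mood, and fast-decaying emotions that pull the mood.
# Names and constants are illustrative, not ALMA's.
from dataclasses import dataclass, field

@dataclass
class AffectState:
    personality_baseline: float = 0.2   # long-term: trait-derived default mood
    mood: float = 0.2                   # medium-term: drifts toward the baseline
    emotions: dict = field(default_factory=dict)  # short-term: name -> intensity

    def appraise(self, event: str, intensity: float) -> None:
        """Elicit an emotion from an appraised event, e.g. a communicative act."""
        self.emotions[event] = self.emotions.get(event, 0.0) + intensity

    def tick(self, dt: float = 1.0) -> None:
        """Advance one step: active emotions push the mood, the mood decays
        slowly back toward the personality baseline, emotions decay fast."""
        emotion_pull = sum(self.emotions.values())
        self.mood += 0.1 * dt * emotion_pull                              # emotions shift mood
        self.mood += 0.05 * dt * (self.personality_baseline - self.mood)  # slow mood decay
        self.emotions = {name: i * (1.0 - 0.5 * dt)                       # fast emotion decay
                         for name, i in self.emotions.items() if i > 0.01}

state = AffectState()
state.appraise("praise", 0.8)   # a positive communicative act is appraised
state.tick()                    # mood rises above its baseline for a while
```

After the elicited emotion has decayed, repeated calls to `tick` let the mood settle back to the personality baseline, which is the kind of consistent interconnection between the three phenomena mentioned above.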


Interactive Social Agents

Our second research focus is the interaction with social characters. To create realistic and empathic virtual agents, we first have to understand how processes of empathy work in human beings; second, how humans perceive interactive social agents when watching them interact with other virtual agents or with themselves; and third, which factors can be varied to make the interaction between humans and empathic virtual agents more realistic and natural.

To easily model and subsequently investigate communicative styles and the behavioral appearance of a virtual character, we rely on Visual SceneMaker. It enables even non-computer experts to create complex, compelling dialog and interaction behavior, and it supports the modeling of autonomous behavioral aspects. The SceneMaker authoring and execution platform is joint work between DFKI and the Lab of Human Centered Multimedia.
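The core idea behind such scene-based dialog authoring can be illustrated as a small state machine whose states play pre-authored scenes and whose transitions react to user input. This is a hypothetical sketch of the general concept only; the class and method names below are invented and do not reflect SceneMaker's actual API or sceneflow language.

```python
# Illustrative sketch of scene-based dialog authoring: dialog behavior as a
# finite state machine whose states play scenes (pre-authored utterances).
# All names are hypothetical, not SceneMaker's API.
class DialogFlow:
    def __init__(self):
        self.scenes = {}        # scene name -> list of utterances to play
        self.transitions = {}   # (scene, trigger) -> next scene
        self.current = None

    def add_scene(self, name, utterances):
        self.scenes[name] = utterances

    def add_transition(self, scene, trigger, target):
        self.transitions[(scene, trigger)] = target

    def start(self, scene):
        self.current = scene
        return self.scenes[scene]

    def on_input(self, trigger):
        # Stay in the current scene if no transition matches the trigger.
        self.current = self.transitions.get((self.current, trigger), self.current)
        return self.scenes[self.current]

flow = DialogFlow()
flow.add_scene("greeting", ["Hello! Nice to meet you."])
flow.add_scene("smalltalk", ["How has your day been so far?"])
flow.add_transition("greeting", "greet_back", "smalltalk")

flow.start("greeting")
flow.on_input("greet_back")   # the agent moves on to the smalltalk scene
```

Separating the authored scenes from the flow logic is what lets non-programmers write the dialog content while the interaction structure is modeled graphically.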