BIGEKO, BMBF, 2023-2026 (Sign Language Recognition Model for bidirectional translation of sign language and text including emotional information). The contributions of the ACG are threefold: (1) Identification of shown (communicated) emotions in sign language and the corresponding modalities (face, head, body, …), (2) Identification and realization of a machine learning approach for recognition of shown emotions in sign language together with sign language recognition, and (3) Design and realization of a Sign Language Recognition Model (SLRM) for bidirectional translation of text into sign language.
ComCross, 2023 (Context and Emotion Regulation Information for Improving User Affect Modeling in Crosswalk Situations). During driving, difficult social situations, such as interactions with pedestrians, may affect the driver’s state. To model the driver’s affect in a difficult situation, valence and arousal can be simulated based on social signals that the driver may communicate. However, predicting the driver’s affective state with current visual-only recognition approaches remains unreliable. One reason is that affect has not only communicative components, reflected in social signals and physiological parameters, but also internal components that reflect the individual’s inner experience. Another reason is that most, if not all, adult emotions are almost always internally regulated, especially emotions related to self-evaluation.
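The idea of simulating valence and arousal from communicated social signals, attenuated by internal regulation, can be illustrated with a minimal sketch. This is not the ComCross model; all names, weights, and the linear blending rule are hypothetical, chosen only to make the two-dimensional affect state and the dampening effect of regulation concrete.

```python
from dataclasses import dataclass

# Illustrative sketch only: a two-dimensional valence/arousal affect state
# updated from observed social-signal cues. All names and parameters are
# hypothetical assumptions, not taken from the ComCross project.

@dataclass
class AffectState:
    valence: float = 0.0  # negative .. positive, clamped to [-1, 1]
    arousal: float = 0.0  # calm .. excited, clamped to [-1, 1]

def clamp(x: float) -> float:
    """Keep a coordinate inside the [-1, 1] affect space."""
    return max(-1.0, min(1.0, x))

def update_affect(state: AffectState, cue_valence: float, cue_arousal: float,
                  regulation: float = 0.5) -> AffectState:
    """Blend an observed cue into the state. `regulation` in [0, 1] models
    how strongly internal emotion regulation dampens what is communicated:
    a fully regulated driver (1.0) shows no change, an unregulated one (0.0)
    shows the full cue."""
    gain = 1.0 - regulation
    return AffectState(
        valence=clamp(state.valence + gain * cue_valence),
        arousal=clamp(state.arousal + gain * cue_arousal),
    )

# A frowning, tense driver cue shifts the simulated state toward negative
# valence and higher arousal, attenuated by the regulation factor.
state = update_affect(AffectState(), cue_valence=-0.8, cue_arousal=0.6)
# → AffectState(valence=-0.4, arousal=0.3)
```

The point of the sketch is the split the entry describes: the observed cue is only the communicative component, and the regulation term stands in for the internal component that visual-only recognition cannot see.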
UBIDENZ, BMBF, 2021-2024 (Ubiquitäre Digitale Empathische Therapieassistenz; Ubiquitous Digital Empathic Therapy Assistance). The first six months after clinical treatment of depressive patients are critical for the likelihood of relapse and suicide. However, appropriate treatment is often lacking during this time. In UBIDENZ, we develop a socio-empathetic virtual assistant for depressive patients who are coming out of clinical treatment. The focus of this intervention is to fulfil the patients’ inherent desire for bonding and, alongside ongoing pharmacological and psychotherapeutic interventions, to create an empathetic and understanding therapeutic relationship with a virtual assistant in daily life. The assistant thus supports the patients in the continued recovery from their depressive symptoms.
MITHOS, BMBF, Hightech Strategie 2025, 2021-2024 (Interactive systems in virtual and real spaces – Innovative technologies for a digital society). Teachers feel inadequately prepared to handle complex socio-emotional challenges and conflicts. In MITHOS, we develop an immersive mixed-reality training for the long-term acquisition of social skills in classrooms. Teachers can prepare for the challenges posed by individual socio-emotional and cultural differences in class. We use a novel combination of VR and sensor technologies to simulate interactive virtual agents, social signals, and natural implicit social feedback, and to couple virtual and real experiences.
EASIER, EU, Horizon 2020, 2021-2023 (Intelligent Automatic Sign Language Translation), aims to create a framework for barrier-free communication among deaf and hearing citizens across Europe by enabling users of European sign languages to interact with hearing individuals in their preferred language. EASIER will provide translation between spoken languages and sign languages, both in near-real-time (automatic) and non-real-time (human-in-the-loop) mode, and will pursue affect- and gender-informed language technologies.
EmmI, BMWi, 2020-2023. In addition to technical realization, a key prerequisite for the successful market launch of automated driving is technology acceptance on the side of potential users as the basis for an increased willingness to buy. Two decisive factors for technology acceptance are trust in the safety of the system and a perceivable added value. In EmmI, these two central building blocks are addressed through the systematic design, development, and evaluation of an empathic human-machine interface.
SocialWear, BMBF, 2020-2024. Within wearable computing, there has traditionally been a strong focus on using garments as platforms for on-body sensing. The functionality of such systems is defined by sensing and computing; the garment is a simple container for sophisticated digital intelligence, but it does not close the gap between that functionality and real user needs. In parallel, the high-tech fashion community has focused on design aspects, with the digital function often being fairly simplistic and backed by little intelligent processing. In other words, in the traditional high-tech fashion approach, the digital part is a simple “add-on” to sophisticated design. SocialWear combines both approaches to create social garments that support social experience and interaction.
AVASAG, BMBF, 2020-2023. AVASAG researches the next generation of signing avatars. The project focuses on two aspects essential for automatic sign animation with 3D avatars: 1. correct and comprehensive input annotation, i.e., text input, as required for machine learning (ML) training data; 2. definition of an extended intermediate representation of signs as multi-modal signal streams (MMS). These streams include hand shapes, hand positions and positioning, pauses, dynamics and facial expressions, detailed body and facial animation with their transitions, emotional expressions, and context information that goes beyond traditional gloss annotation.
MindBot, EU, Horizon 2020, 2020-2023. In the context of automated work, workers are often not challenged enough cognitively due to repetitive tasks, while at the same time high focus and precision are mandatory. This combination can lead to emotional challenges and mental illness. Cobots (collaborative, single-arm robots) are increasingly used to take load off workers, as they can be efficient, flexible, precise, safe, and capable. However, in their current form, they do not offer the socio-emotional support essential for collaborative tasks. MindBot enhances cobots with a face and a body that can interact with the worker on real collaborative tasks, offering socio-emotional support in addition to cognitive offloading at the workplace.
DEEP, DFG, 2018-2022, investigates the unique combination of a computational model of social signal interpretation and a computational model of emotions. The combination covers the differentiation between internal and external emotions, possible elicitors, suitable emotion regulation strategies, and related sequences of social signals. In DEEP, these concepts are integrated into a Theory of Mind (ToM) of emotions with related mental states and strategies. As a result, the approach allows a real-time disambiguation of emotion elicitors, the recognition of emotion regulation processes through related social signals, and finally a simulation of possible related emotions of different types.
DigiSORKC, Saarland University, 2021-2022. In cognitive behavior therapy, one of the most important diagnostic and therapeutic instruments is the behavior analysis. In DigiSORKC, we digitalize the behavior analysis with an interactive social agent, aiming to offer help on demand for patients in difficult situations.
EmmA, BMBF, 2018-2021, examines the use of a socio-emotional assistance system to counter possible stress at work and in the work environment and thereby improve mental health. The innovation of EmmA is the development of an assistance system that can take over psychological and social tasks and assist people as a personal advisor. The new mobile platform for virtual agents and sensors enables an assistant that is always available to the user.
EmpaT (in German), BMBF, 2015-2018. Job interviews can be perceived as difficult situations. In the EmpaT project, we investigate how an interactive job-training avatar can be employed to practice such situations. To this end, we uniquely combine a multi-dimensional real-time affect model with a real-time social signal interpretation framework in a 3D learning world. The goal is to reduce stress in job interview situations for both job applicants and interviewers.
TARDIS, EU, FP7-ICT, 2011-2014. In TARDIS, a scenario-based serious-game simulation platform is created through which young people at risk of exclusion can practice repeatedly and improve their social skills. They interact with virtual agents, acting as recruiters in job interview scenarios designed to deliver realistic socio-emotional interaction. Such agents constitute credible yet tireless interlocutors, allowing young people to explore and improve their interaction skills without the risk of real-life failure.
INTAKT, BMBF, 2008-2011. In INTAKT, a general-purpose technology is explored that allows information systems to be extended with an emotionally intelligent, visual point of contact. “Ambient Intelligence” is upgraded to “Ambient Personality” to gain greater acceptance and usability of electronic systems and to add a human touch.
SemProM, BMBF, 2008-2011. SemProM explores how products keep a diary: smart labels give products a memory and support intelligent logistics. Within the IKT-2020 research program of the German Federal Ministry of Education and Research, the Innovation Alliance Digital Product Memory (DPM) is developing key technologies for the Internet of Things in the cooperative project SemProM.
IDEAS4Games, EU ProFIT EFRE, 2007. IDEAS4Games investigates how current AI research outcomes can be used to improve dialog-based computer games with Virtual Characters. We rely on real-time computational models of affect, expressive speech synthesis modules, and new methods of creating and maintaining interaction and dialog.
CoHibit, in-house, 2005-2007. This research project investigates the simulation of communication concepts and strategies by virtual humans that “live” in an edutainment exhibit (see DFKI Newsletter, page 7).
VirtualHuman, BMBF, 2003-2006. This research project investigates concepts and techniques for virtual human characters with human-like communication skills. Virtual humans serve as dialog partners for real humans.
Crosstalk, in-house, 2002-2004. This research project aims to create an interactive infotainment installation for public spaces using multiple Lifelike Characters.
SAFIRA, EU FP5-IST, 2000-2002. This research project aims to develop a software framework that supports the design of real-time affective interactive applications with Lifelike Characters.
PRESENCE, in-house, 1999-2001. This research project investigates the use of affective models to control the general behavior of a Lifelike Character dialogue system in order to create more believable human-machine communication.