
SocialWear (BMBF, 2020-2024)

Building on a unique set of competencies across several DFKI groups, we aim to develop a new generation of smart fashion that combines advanced artificial intelligence with sophisticated design. To achieve this, we will have to rethink the entire classic process of developing both garments and the associated wearable electronics, so that fashion and electronics design criteria, as well as their implementation processes, can be seamlessly integrated. We will develop signal processing and learning methods that enable such smart garments to understand and react to complex social settings, and design new interaction paradigms that enhance and mediate social interaction in subtle, rich new ways. In doing so, we will consider a broad spectrum of settings, varying both the size of the social group and the transition between implicit and explicit interaction.

ACG is creating models of social context and social interaction at different levels of interaction, taking into account personal differences in communication and interaction style. Using the concept of embodied emotions, we will build models to interpret data from, e.g., sensors and cameras integrated into everyday objects such as garments and bicycles. The multisensory data will thus be translated into high-level concepts in line with the principles of the theory. Similarly, multisensory implicit vs. explicit feedback will be conceptualised to support interaction in different contexts. These contexts will include affective feedback in urban mobility to assist cyclist swarms, combining foreign-language with sign-language acquisition to leverage potential embodied learning effects, and using embodied techniques to teach sign language with AR agents.

We will enhance our hybrid models of social-emotional behaviour, which combine theory-driven modelling of user emotion and emotion regulation strategies with wearable, data-driven recognition of sequences of social signals (social behaviour), and integrate models of implicit vs. explicit feedback mechanisms. Empirical studies will be run to inform and test the models in different situations (use cases).

ACG is also working on the generation of training data for the construction of machine learning models dedicated to the recognition of sign language from video and its production through VR agents. High awareness of ELSI principles accompanies all phases of the project.
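As a purely illustrative sketch of what the wearable, data-driven side of such a hybrid model might look like, the snippet below classifies windows of multichannel sensor data into social-signal labels. The channel count, window length, label set, and GRU-based model are assumptions made for this example only and do not describe the project's actual pipeline.

```python
# Illustrative sketch only: a minimal sequence classifier mapping windows of
# wearable sensor data (e.g. IMU channels) to social-signal labels.
# Channel count, window length, label set, and model choice are hypothetical.
import torch
import torch.nn as nn

class SocialSignalClassifier(nn.Module):
    def __init__(self, n_channels=6, hidden_size=64, n_classes=4):
        super().__init__()
        # GRU encodes a window of multichannel sensor readings over time.
        self.encoder = nn.GRU(input_size=n_channels, hidden_size=hidden_size,
                              batch_first=True)
        # Linear head maps the final hidden state to social-signal logits
        # (hypothetical classes, e.g. "speaking", "listening", "gesturing", "idle").
        self.head = nn.Linear(hidden_size, n_classes)

    def forward(self, x):
        # x: (batch, time_steps, n_channels)
        _, h_n = self.encoder(x)          # h_n: (1, batch, hidden_size)
        return self.head(h_n.squeeze(0))  # logits: (batch, n_classes)

if __name__ == "__main__":
    model = SocialSignalClassifier()
    window = torch.randn(8, 100, 6)       # 8 windows, 100 time steps, 6 channels
    logits = model(window)
    print(logits.shape)                    # torch.Size([8, 4])
```

In a hybrid setup of the kind described above, the logits from such a data-driven recogniser would be only one input; they would be combined with theory-driven models of emotion and emotion regulation rather than used on their own.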