Don, Daksitha Senel Withanage; Müller, Philipp; Nunnari, Fabrizio; André, Elisabeth; Gebhard, Patrick: ReNeLiB: Real-Time Neural Listening Behavior Generation for Socially Interactive Agents. Proceedings Article. In: Proceedings of the 25th International Conference on Multimodal Interaction, pp. 507–516, Association for Computing Machinery, Paris, France, 2023, ISBN: 9798400700552.
Nunnari, Fabrizio; Rios, Annette; Reichel, Uwe; Bhuvaneshwara, Chirag; Filntisis, Panagiotis; Maragos, Petros; Burkhardt, Felix; Eyben, Florian; Schuller, Björn; Ebling, Sarah: Multimodal Recognition of Valence, Arousal and Dominance via Late-Fusion of Text, Audio and Facial Expressions. Proceedings Article. In: ESANN 2023 proceedings, pp. 571–576, Ciaco - i6doc.com, Bruges (Belgium) and online, 2023, ISBN: 978-2-87587-088-9.
Nunnari, Fabrizio; Mishra, Shailesh; Gebhard, Patrick: Augmenting Glosses with Geometrical Inflection Parameters for the Animation of Sign Language Avatars. Proceedings Article. In: 2023 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW), pp. 1–5, 2023.
Nunnari, Fabrizio; Avramidis, Eleftherios; Yadav, Vemburaj; Pagani, Alain; Hamidullah, Yasser; Mollanorozy, Sepideh; España-Bonet, Cristina; Woop, Emil; Gebhard, Patrick: Towards Incorporating 3D Space-Awareness Into an Augmented Reality Sign Language Interpreter. Proceedings Article. In: 2023 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW), pp. 1–5, 2023.
Nunnari, Fabrizio; Ameli, Mina; Mishra, Shailesh: Automatic Alignment Between Sign Language Videos And Motion Capture Data: A Motion Energy-Based Approach. Proceedings Article. In: 2023 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW), pp. 1–5, 2023.
Schneeberger, Tanja; Reinwarth, Anna Lea; Wensky, Robin; Anglet, Manuel Silvio; Gebhard, Patrick; Wessler, Janet: Fast Friends: Generating Interpersonal Closeness between Humans and Socially Interactive Agents. Proceedings Article. In: Proceedings of the 23rd ACM International Conference on Intelligent Virtual Agents, Association for Computing Machinery, 2023.
Schneeberger, Tanja; Hladký, Mirella; Thurner, Ann-Kristin; Volkert, Jana; Heimerl, Alexander; Baur, Tobias; André, Elisabeth; Gebhard, Patrick: The Deep Method: Towards Computational Modeling of the Social Emotion Shame driven by Theory, Introspection, and Social Signals. Journal Article. In: IEEE Transactions on Affective Computing, 2023.
da Silva, Claudio Alves; Hilpert, Bernhard; Bhuvaneshwara, Chirag; Gebhard, Patrick; Nunnari, Fabrizio; Tsovaltzi, Dimitra: Visual Similarity for Socially Interactive Agents That Support Self-Awareness. Proceedings Article. In: Proceedings of the 23rd ACM International Conference on Intelligent Virtual Agents, Association for Computing Machinery, Würzburg, Germany, 2023, ISBN: 9781450399944.
Beyrodt, Sebastian; Nicora, Matteo Lavit; Nunnari, Fabrizio; Chehayeb, Lara; Prajod, Pooja; Schneeberger, Tanja; André, Elisabeth; Malosio, Matteo; Gebhard, Patrick; Tsovaltzi, Dimitra: Socially Interactive Agents as Cobot Avatars: Developing a Model to Support Flow Experiences and Well-Being in the Workplace. Proceedings Article. In: Proceedings of the 23rd ACM International Conference on Intelligent Virtual Agents, Association for Computing Machinery, Würzburg, Germany, 2023, ISBN: 9781450399944.
Dauer, Louisa; Chehayeb, Lara; Ameli, Mina; Anglet, Manuel; Bhuvaneshwara, Chirag; Schaffer, Stefan; Zahn, Esther; Tsovaltzi, Dimitra: Piloting vibration induction for synchrony in urban cycling. Journal Article. In: 2023.
Nunnari, Fabrizio; Nicora, Matteo Lavit; Prajod, Pooja; Beyrodt, Sebastian; Chehayeb, Lara; André, Elisabeth; Gebhard, Patrick; Malosio, Matteo; Tsovaltzi, Dimitra: Understanding and mapping pleasure, arousal and dominance social signals to robot-avatar behavior. Proceedings Article. In: 2023 11th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW), pp. 1–8, IEEE, Cambridge, MA, USA, 2023.
Bernhard, Lucas; Nunnari, Fabrizio; Unger, Amelie; Bauerdiek, Judith; Dold, Christian; Hauck, Marcel; Stricker, Alexander; Baur, Tobias; Heimerl, Alexander; André, Elisabeth; Reinecker, Melissa; España-Bonet, Cristina; Hamidullah, Yasser; Busemann, Stephan; Gebhard, Patrick; Jäger, Corinna; Wecker, Sonja; Kossel, Yvonne; Müller, Henrik; Waldow, Kristoffer; Fuhrmann, Arnulph; Misiak, Martin; Wallach, Dieter: Towards Automated Sign Language Production: A Pipeline for Creating Inclusive Virtual Humans. Proceedings Article. In: Proceedings of the 15th International Conference on PErvasive Technologies Related to Assistive Environments, pp. 260–268, Association for Computing Machinery, Corfu, Greece, 2022, ISBN: 9781450396318.
Nunnari, Fabrizio: A software toolkit for pre-processing sign language video streams. Proceedings Article. In: Seventh International Workshop on Sign Language Translation and Avatar Technology (SLTAT), Marseille, France, 2022.
Deshpande, Neha; Nunnari, Fabrizio; Avramidis, Eleftherios: Fine-tuning of convolutional neural networks for the recognition of facial expressions in sign language video samples. Proceedings Article. In: Seventh International Workshop on Sign Language Translation and Avatar Technology (SLTAT), Marseille, France, 2022.
Nunnari, Fabrizio; Heloir, Alexis: Rating Vs. Paired Comparison for the Judgment of Dominance on First Impressions. Journal Article. In: IEEE Transactions on Affective Computing, vol. 13, no. 1, pp. 367–378, 2022.
Gebhard, Patrick; Tsovaltzi, Dimitra; Schneeberger, Tanja; Nunnari, Fabrizio: Serious Games with SIAs. Book Chapter. In: Lugrin, Birgit; Pelachaud, Catherine; Traum, David (Ed.): The Handbook on Socially Interactive Agents: 20 Years of Research on Embodied Conversational Agents, Intelligent Virtual Agents, and Social Robotics. Volume 2: Interactivity, Platforms, Application, pp. 527–546, Association for Computing Machinery, 2022, ISBN: 9781450398961.
Wessler, Janet; Schneeberger, Tanja; Christidis, Leon; Gebhard, Patrick: Virtual Backlash: Nonverbal Expression of Dominance Leads to Less Liking of Dominant Female versus Male Agents. Proceedings Article. In: Proceedings of the 22nd ACM International Conference on Intelligent Virtual Agents, Association for Computing Machinery, 2022, (*Best Paper Award*).
Heimerl, Alexander; Mertes, Silvan; Schneeberger, Tanja; Baur, Tobias; Liu, Ailin; Becker, Linda; Rohleder, Nicolas; Gebhard, Patrick; André, Elisabeth: Generating Personalized Behavioral Feedback for a Virtual Job Interview Training System Through Adversarial Learning. Proceedings Article. In: Proceedings of the 23rd International Conference on Artificial Intelligence in Education, pp. 679–684, Springer, 2022.

2023
@inproceedings{10.1145/3577190.3614133,
title = {ReNeLiB: Real-Time Neural Listening Behavior Generation for Socially Interactive Agents},
author = {Daksitha Senel Withanage Don and Philipp Müller and Fabrizio Nunnari and Elisabeth André and Patrick Gebhard},
url = {https://doi.org/10.1145/3577190.3614133},
doi = {10.1145/3577190.3614133},
isbn = {9798400700552},
year = {2023},
date = {2023-01-01},
booktitle = {Proceedings of the 25th International Conference on Multimodal Interaction},
pages = {507–516},
publisher = {Association for Computing Machinery},
address = {Paris, France},
series = {ICMI '23},
abstract = {Flexible and natural nonverbal reactions to human behavior remain a challenge for socially interactive agents (SIAs) that are predominantly animated using hand-crafted rules. While recently proposed machine learning based approaches to conversational behavior generation are a promising way to address this challenge, they have not yet been employed in SIAs. The primary reason for this is the lack of a software toolkit integrating such approaches with SIA frameworks that conforms to the challenging real-time requirements of human-agent interaction scenarios. In our work, we for the first time present such a toolkit consisting of three main components: (1) real-time feature extraction capturing multi-modal social cues from the user; (2) behavior generation based on a recent state-of-the-art neural network approach; (3) visualization of the generated behavior supporting both FLAME-based and Apple ARKit-based interactive agents. We comprehensively evaluate the real-time performance of the whole framework and its components. In addition, we introduce pre-trained behavioral generation models derived from psychotherapy sessions for domain-specific listening behaviors. Our software toolkit, pivotal for deploying and assessing SIAs’ listening behavior in real-time, is publicly available. Resources, including code and behavioural multi-modal features extracted from therapeutic interactions, are hosted at https://daksitha.github.io/ReNeLib},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
@inproceedings{nunnari23ESANN-MultimodalVADfusion,
title = {Multimodal Recognition of Valence, Arousal and Dominance via Late-Fusion of Text, Audio and Facial Expressions},
author = {Fabrizio Nunnari and Annette Rios and Uwe Reichel and Chirag Bhuvaneshwara and Panagiotis Filntisis and Petros Maragos and Felix Burkhardt and Florian Eyben and Björn Schuller and Sarah Ebling},
url = {https://www.esann.org/sites/default/files/proceedings/2023/ES2023-128.pdf},
doi = {10.14428/esann/2023.ES2023-128},
isbn = {978-2-87587-088-9},
year = {2023},
date = {2023-01-01},
urldate = {2023-01-01},
booktitle = {ESANN 2023 proceedings},
pages = {571–576},
publisher = {Ciaco - i6doc.com},
address = {Bruges (Belgium) and online},
abstract = {We present an approach for the prediction of valence, arousal, and dominance of people communicating via text/audio/video streams for a translation from and to sign languages. The approach consists of the fusion of the output of three CNN-based models dedicated to the analysis of text, audio, and facial expressions. Our experiments show that any combination of two or three modalities increases prediction performance for valence and arousal.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
@inproceedings{nunnari23SLTAT-InflectionParameters,
title = {Augmenting Glosses with Geometrical Inflection Parameters for the Animation of Sign Language Avatars},
author = {Fabrizio Nunnari and Shailesh Mishra and Patrick Gebhard},
doi = {10.1109/ICASSPW59220.2023.10193227},
year = {2023},
date = {2023-01-01},
booktitle = {2023 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW)},
pages = {1-5},
abstract = {We present a new machine-readable symbolic representation of sign language based on the pairing of glosses with parameters that can be used for the inflection of motion captured sign animation clips. With respect to existing representations, this approach detaches from a purely linguistic point of view and provides a solution to the problem from a lower-level of abstraction, aiming at generic body-motion manipulation. Early experiments show the effectiveness in manipulating hand trajectories and their potential in modulating the expressivity and communicative emotion of pre-recorded signs.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
@inproceedings{nunnari23SLTAT-ARSLinterpreter,
title = {Towards Incorporating 3D Space-Awareness Into an Augmented Reality Sign Language Interpreter},
author = {Fabrizio Nunnari and Eleftherios Avramidis and Vemburaj Yadav and Alain Pagani and Yasser Hamidullah and Sepideh Mollanorozy and Cristina España-Bonet and Emil Woop and Patrick Gebhard},
doi = {10.1109/ICASSPW59220.2023.10193194},
year = {2023},
date = {2023-01-01},
booktitle = {2023 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW)},
pages = {1-5},
abstract = {This paper describes the concept and the software architecture of a fully integrated system supporting a dialog between a deaf person and a hearing person through a virtual sign language interpreter (aka avatar) projected in the real space by an Augmented Reality device. In addition, a Visual Simultaneous Localization and Mapping system provides information about the 3D location of the objects recognized in the surrounding environment, allowing the avatar to orient, look and point towards the real location of discourse entities during the translation. The goal being to provide a modular architecture to test single software components in a fully integrated framework and move virtual sign language interpreters beyond the standard "front-facing" interaction paradigm.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
@inproceedings{nunnari23SLTAT-VideoAlignment,
title = {Automatic Alignment Between Sign Language Videos And Motion Capture Data: A Motion Energy-Based Approach},
author = {Fabrizio Nunnari and Mina Ameli and Shailesh Mishra},
doi = {10.1109/ICASSPW59220.2023.10193528},
year = {2023},
date = {2023-01-01},
booktitle = {2023 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW)},
pages = {1-5},
abstract = {In this paper, we propose a method for the automatic alignment of sign language videos and their corresponding motion capture data, useful for the preparation of multi-modal sign language corpora. First, we extract an estimate of the motion energy from both the video and the motion capture data. Second, we align the two curves to minimize their distance. Our tests show that it is possible to achieve a mean absolute error as low as 1.11 frames using optical flow for video energy extraction and a set of 22 bones for skeletal energy extraction.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
@inproceedings{Schneeberger2023_FastFriends,
title = {Fast Friends: Generating Interpersonal Closeness between Humans and Socially Interactive Agents},
author = {Tanja Schneeberger and Anna Lea Reinwarth and Robin Wensky and Manuel Silvio Anglet and Patrick Gebhard and Janet Wessler},
year = {2023},
date = {2023-01-01},
booktitle = {Proceedings of the 23rd ACM International Conference on Intelligent Virtual Agents},
publisher = {Association for Computing Machinery},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
@article{schneeberger2023deep,
title = {The Deep Method: Towards Computational Modeling of the Social Emotion Shame driven by Theory, Introspection, and Social Signals},
author = {Tanja Schneeberger and Mirella Hladký and Ann-Kristin Thurner and Jana Volkert and Alexander Heimerl and Tobias Baur and Elisabeth André and Patrick Gebhard},
year = {2023},
date = {2023-01-01},
urldate = {2023-01-01},
journal = {IEEE Transactions on Affective Computing},
publisher = {IEEE},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@inproceedings{dasilva23IVA-selfawareness,
title = {Visual Similarity for Socially Interactive Agents That Support Self-Awareness},
author = {Claudio Alves da Silva and Bernhard Hilpert and Chirag Bhuvaneshwara and Patrick Gebhard and Fabrizio Nunnari and Dimitra Tsovaltzi},
url = {https://doi.org/10.1145/3570945.3607329},
doi = {10.1145/3570945.3607329},
isbn = {9781450399944},
year = {2023},
date = {2023-01-01},
booktitle = {Proceedings of the 23rd ACM International Conference on Intelligent Virtual Agents},
publisher = {Association for Computing Machinery},
address = {Würzburg, Germany},
series = {IVA '23},
abstract = {Self-awareness is a critical factor in social interaction. Teachers being aware of their own emotions and thoughts during class may enable reflection and behavioral change. While inducing self-awareness through mirrors or video is common in face-to-face training, it has been scarcely examined in digital training with virtual avatars. This paper examines the relationship between avatar visual similarity and inducing self-awareness in digital training environments. We developed a theory-based methodology to reliably manipulate perceptually relevant facial features of digital avatars based on human-human identification and emotional predisposition. Manipulating these features allows to create personalized versions of digital avatars with varying degrees of visual similarity.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
@inproceedings{beyrodt23IVA-BASSF,
title = {Socially Interactive Agents as Cobot Avatars: Developing a Model to Support Flow Experiences and Well-Being in the Workplace},
author = {Sebastian Beyrodt and Matteo Lavit Nicora and Fabrizio Nunnari and Lara Chehayeb and Pooja Prajod and Tanja Schneeberger and Elisabeth André and Matteo Malosio and Patrick Gebhard and Dimitra Tsovaltzi},
url = {https://doi.org/10.1145/3570945.3607349},
doi = {10.1145/3570945.3607349},
isbn = {9781450399944},
year = {2023},
date = {2023-01-01},
booktitle = {Proceedings of the 23rd ACM International Conference on Intelligent Virtual Agents},
publisher = {Association for Computing Machinery},
address = {Würzburg, Germany},
series = {IVA '23},
abstract = {This study evaluates a socially interactive agent to create an embodied cobot. It tests a real-time continuous emotional modeling method and an aligned transparent behavioral model, BASSF (boredom, anxiety, self-efficacy, self-compassion, flow). The BASSF model anticipates and counteracts counterproductive emotional experiences of operators working under stress with cobots on tedious tasks. The flow experience is represented in the three-dimensional pleasure, arousal, and dominance (PAD) space. The embodied covatar (cobot and avatar) is introduced to support flow experiences through emotion regulation guidance. The study tests the model's main theoretical assumptions about flow, dominance, self-efficacy, and boredom. Twenty participants worked on a task for an hour, assembling pieces in collaboration with the covatar. After the task, participants completed questionnaires on flow, their affective experience, and self-efficacy, and they were interviewed to understand their emotions and regulation during the task. The results suggest that the dominance dimension plays a vital role in task-related settings as it predicts the participants' self-efficacy and flow. However, the relationship between flow, pleasure, and arousal requires further investigation. Qualitative interview analysis revealed that participants regulated negative emotions, like boredom, also without support, but some strategies could negatively impact well-being and productivity, which aligns with theory.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
@article{dauer2023piloting,
title = {Piloting vibration induction for synchrony in urban cycling},
author = {Louisa Dauer and Lara Chehayeb and Mina Ameli and Manuel Anglet and Chirag Bhuvaneshwara and Stefan Schaffer and Esther Zahn and Dimitra Tsovaltzi},
year = {2023},
date = {2023-01-01},
publisher = {GI},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@inproceedings{nunnari23ACII-UnderstandingPAD,
title = {Understanding and mapping pleasure, arousal and dominance social signals to robot-avatar behavior},
author = {Fabrizio Nunnari and Matteo Lavit Nicora and Pooja Prajod and Sebastian Beyrodt and Lara Chehayeb and Elisabeth André and Patrick Gebhard and Matteo Malosio and Dimitra Tsovaltzi},
doi = {10.1109/ACIIW59127.2023.10388078},
year = {2023},
date = {2023-01-01},
booktitle = {2023 11th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)},
pages = {1-8},
publisher = {IEEE},
address = {Cambridge, MA, USA},
abstract = {We present an analysis of the pleasure, arousal, and dominance social signals inferred from people faces, and how, despite their noisy nature, these can be used to drive a model of theory-based interventions for a robot-avatar agent in a working space. The analysis let emerge clearly the need of data pre-filtering and per-user calibration. The proposed post processing method helps quantifying the parameters needed to control the frequency of intervention of the agent; still leaving the experimenter with a run-time adjustable global control of its sensitivity.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
2022
@inproceedings{bernhard22PETRA-AVASAG,
title = {Towards Automated Sign Language Production: A Pipeline for Creating Inclusive Virtual Humans},
author = {Lucas Bernhard and Fabrizio Nunnari and Amelie Unger and Judith Bauerdiek and Christian Dold and Marcel Hauck and Alexander Stricker and Tobias Baur and Alexander Heimerl and Elisabeth André and Melissa Reinecker and Cristina España-Bonet and Yasser Hamidullah and Stephan Busemann and Patrick Gebhard and Corinna Jäger and Sonja Wecker and Yvonne Kossel and Henrik Müller and Kristoffer Waldow and Arnulph Fuhrmann and Martin Misiak and Dieter Wallach},
url = {https://doi.org/10.1145/3529190.3529202},
doi = {10.1145/3529190.3529202},
isbn = {9781450396318},
year = {2022},
date = {2022-01-01},
booktitle = {Proceedings of the 15th International Conference on PErvasive Technologies Related to Assistive Environments},
pages = {260–268},
publisher = {Association for Computing Machinery},
address = {Corfu, Greece},
series = {PETRA '22},
abstract = {In everyday life, Deaf People face barriers because information is often only available in spoken or written language. Producing sign language videos showing a human interpreter is often not feasible due to the amount of data required or because the information changes frequently. The ongoing AVASAG project addresses this issue by developing a 3D sign language avatar for the automatic translation of texts into sign language for public services. The avatar is trained using recordings of human interpreters translating text into sign language. For this purpose, we create a corpus with video and motion capture data and an annotation scheme that allows for real-time translation and subsequent correction without requiring to correct the animation frames manually. This paper presents the general translation pipeline focusing on innovative points, such as adjusting an existing annotation system to the specific requirements of sign language and making it usable to annotators from the Deaf communities.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
@inproceedings{nunnari2022SLTAT,
title = {A software toolkit for pre-processing sign language video streams},
author = {Fabrizio Nunnari},
year = {2022},
date = {2022-01-01},
booktitle = {Seventh International Workshop on Sign Language Translation and Avatar Technology (SLTAT)},
address = {Marseille, France},
abstract = {We present the requirements, design guidelines, and the software architecture of an open-source toolkit dedicated to the pre-processing of sign language video material. The toolkit is a collection of functions and command-line tools designed to be integrated with build automation systems. Every pre-processing tool is dedicated to standard pre-processing operations (e.g., trimming, cropping, resizing) or feature extraction (e.g., identification of areas of interest, landmark detection) and can be used also as a standalone Python module. The UML diagrams of its architecture are presented together with a few working examples of its usage. The software is freely available with an open-source license on a public repository.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
@inproceedings{deshpande2022SLTAT,
title = {Fine-tuning of convolutional neural networks for the recognition of facial expressions in sign language video samples},
author = {Neha Deshpande and Fabrizio Nunnari and Eleftherios Avramidis},
year = {2022},
date = {2022-01-01},
booktitle = {Seventh International Workshop on Sign Language Translation and Avatar Technology (SLTAT)},
address = {Marseille, France},
abstract = {In this paper, we investigate the capability of convolutional neural networks to recognize in sign language video frames the six basic Ekman facial expressions for ’fear’, ’disgust’, ’surprise’, ’sadness’, ’happiness’ and ’anger’ along with the ’neutral’ class. Given the limited amount of annotated facial expression data for the sign language domain, we started from a model pre-trained on general-purpose facial expression datasets and we applied various machine learning techniques such as fine-tuning, data augmentation, class balancing, as well as image preprocessing to reach a better accuracy. The models were evaluated using K-fold cross-validation to get more accurate conclusions. Through our experiments we demonstrate that fine-tuning a pre-trained model along with data augmentation by horizontally flipping images and image normalization, helps in providing the best accuracy on the sign language dataset. The best setting achieves satisfactory classification accuracy, comparable to state-of-the-art systems in generic facial expression recognition. Experiments were performed using different combinations of the above-mentioned techniques based on two different architectures, namely MobileNet and EfficientNet, and is deemed that both architectures seem equally suitable for the purpose of fine-tuning, whereas class balancing is discouraged.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
@article{nunnari2020AffComp,
title = {Rating Vs. Paired Comparison for the Judgment of Dominance on First Impressions},
author = {Fabrizio Nunnari and Alexis Heloir},
doi = {10.1109/TAFFC.2020.3022982},
year = {2022},
date = {2022-01-01},
journal = {IEEE Transactions on Affective Computing},
volume = {13},
number = {1},
pages = {367-378},
abstract = {This article presents a contest between the rating and the paired comparison voting in judging the perceived dominance of virtual characters, the aim being to select the voting mode that is the most convenient for voters while staying reliable. The comparison consists of an experiment where human subjects vote on a set of virtual characters generated by randomly altering a set of physical attributes. The minimum number of participants has been determined via numerical simulation. The outcome is a sequence of stereotypes ordered along their conveyed amount of submissiveness or dominance. Results show that the two voting modes result in equivalently expressive models of dominance. Further analysis of the voting procedure shows that, despite an initial slower learning phase, after about 30 votes the two modes exhibit the same judging speed. Finally, a subjective questionnaire reports a higher (63.8 percent) preference for the paired comparison mode.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
@inbook{Gebhard2022_Handbook,
title = {Serious Games with SIAs},
author = {Patrick Gebhard and Dimitra Tsovaltzi and Tanja Schneeberger and Fabrizio Nunnari},
editor = {Birgit Lugrin and Catherine Pelachaud and David Traum},
doi = {10.1145/3563659.3563676},
isbn = {9781450398961},
year = {2022},
date = {2022-01-01},
booktitle = {The Handbook on Socially Interactive Agents: 20 Years of Research on Embodied Conversational Agents, Intelligent Virtual Agents, and Social Robotics. Volume 2: Interactivity, Platforms, Application},
pages = {527–546},
publisher = {Association for Computing Machinery},
keywords = {},
pubstate = {published},
tppubtype = {inbook}
}
@inproceedings{Wessler2022_Backlash,
title = {Virtual Backlash: Nonverbal Expression of Dominance Leads to Less Liking of Dominant Female versus Male Agents},
author = {Janet Wessler and Tanja Schneeberger and Leon Christidis and Patrick Gebhard},
doi = {10.1145/3514197.3549682},
year = {2022},
date = {2022-01-01},
booktitle = {Proceedings of the 22nd ACM International Conference on Intelligent Virtual Agents},
publisher = {Association for Computing Machinery},
note = {*Best Paper Award*},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
@inproceedings{Heimerl2022_FeedbackJobInterview,
title = {Generating Personalized Behavioral Feedback for a Virtual Job Interview Training System Through Adversarial Learning},
author = {Alexander Heimerl and Silvan Mertes and Tanja Schneeberger and Tobias Baur and Ailin Liu and Linda Becker and Nicolas Rohleder and Patrick Gebhard and Elisabeth André},
doi = {10.1007/978-3-031-11644-5_67},
year = {2022},
date = {2022-01-01},
booktitle = {Proceedings of the 23rd International Conference on Artificial Intelligence in Education},
pages = {679–684},
publisher = {Springer},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}