
Layoff Agent


The “Layoff Training Agent” was a project in the interdisciplinary seminar “How to build a social computer”. It supports the training of responsible managers in a company in conducting layoff conversations effectively. The project was conceptualized and implemented by a group of students at Saarland University, consisting of three computer scientists {Alvaro, Nils, Hamza} and three psychologists {Bernhard, Hong, Sofie}, under the supervision of Dr. Patrick Gebhard and M.Sc. Tanja Schneeberger and under the umbrella of the DFKI.


Concept:

The “Layoff Training Agent” is built in Visual SceneMaker and consists of a script of a layoff situation, based on current knowledge about the layoff process. The user can choose from a set of options to conduct the interview and interact with the employee character. The character’s reactions are based on an underlying emotional model that simulates the character’s emotional situation and determines its reactions. This model takes into account a “Situational Value”, which represents the mood or atmosphere during the interview, and a “Momentum Value”, which represents the emotional potential of a given sentence directed at the employee character. The two values are combined into an “Emotional State Value” for each of the two emotional dimensions relevant in a layoff interview, namely “Consternation” and “Anger”. Depending on which options the user chooses and how he conducts the interview, the character’s emotional state varies within these dimensions, providing the foundation for a set of verbal and facial reactions. At the end, the user receives feedback on his behavior.
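The combination of the two values described above can be sketched in a few lines of Python. The names mirror the text; the concrete update rule (a weighted, clamped sum) is an assumption for illustration only, not the actual model used in Visual SceneMaker:

```python
# Minimal sketch of the two-value emotional model described above.
# The weighting (0.7 / 0.3) and the clamping to [0, 1] are assumptions.

DIMENSIONS = ("consternation", "anger")

class EmotionalModel:
    def __init__(self):
        # Situational Value: the mood/atmosphere of the interview so far
        self.situational = {d: 0.0 for d in DIMENSIONS}

    def react(self, momentum):
        """Combine the Situational Value with the Momentum Value of one
        utterance into an Emotional State Value per dimension."""
        state = {}
        for d in DIMENSIONS:
            s = 0.7 * self.situational[d] + 0.3 * momentum.get(d, 0.0)
            s = max(0.0, min(1.0, s))  # keep values in [0, 1]
            self.situational[d] = s    # the atmosphere drifts with each utterance
            state[d] = s
        return state

model = EmotionalModel()
# A blunt opening sentence with a high "anger" potential:
state = model.react({"anger": 0.8, "consternation": 0.4})
```

Because the Situational Value is updated after every utterance, a series of harsh sentences gradually shifts the atmosphere, so later reactions depend on the whole course of the interview, not just the last option chosen.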
The “Layoff Training Agent” also integrates the Google speech recognition tool, which allows the user to speak the options out loud and makes the training more realistic. Furthermore, MaryTTS is used for speech output in the interaction between the model and the trainee (built with Stickman). All components interact via TCP/IP.
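The TCP/IP link between components can be sketched as follows. The message format (`OPTION_SELECTED:2`) is purely hypothetical; the text only states that the components communicate over TCP/IP, not what the protocol looks like:

```python
# Sketch of one component sending a recognized option to another over
# TCP/IP. The message format is an assumption for illustration.
import socket
import threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

received = []

def serve_once():
    # e.g. the SceneMaker side, waiting for the recognizer's result
    conn, _ = srv.accept()
    received.append(conn.recv(1024).decode("utf-8"))
    conn.close()

t = threading.Thread(target=serve_once)
t.start()

# e.g. the speech recognition side, reporting the selected option
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port))
cli.sendall(b"OPTION_SELECTED:2")   # assumed message format
cli.close()

t.join()
srv.close()
```

A socket-based design like this keeps the components loosely coupled, which is why tools written in different environments (SceneMaker, the recognizer front end, MaryTTS) can cooperate in one system.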

 

Components:

• Visual SceneMaker
• Pinocchio (3D character)
• Speech recognition – Google Speech
• User option selection tool

Visual SceneMaker is an authoring tool for creating interactive presentations, aimed at non-programming experts. It supports modeling the verbal and non-verbal behavior of virtual characters and robots. To this end, it provides users with a graphical interface and a simple scripting language that allow them to create rich and compelling content.

We created an external application that receives the possible options a user can say at a given moment. The user can read an option aloud or speak a sentence containing the suggested words from one of the options. Using the Google Speech Recognition API, the system automatically selects the right answer from the spoken text. The spoken sentence does not have to match an option exactly: as long as the user uses some of the words displayed in the options, the system will select the right one.
In case speech recognition does not work (e.g., the user runs the system offline, or there are problems with the API), the user can always click on the right option.
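The matching described above can be sketched as a simple word-overlap rule: the option sharing the most words with the recognized transcript is selected. This concrete rule is an assumption for illustration; the actual selection logic may differ:

```python
# Sketch of word-overlap matching between a speech transcript and the
# displayed options. The scoring rule is an assumption.
import re

def words(text):
    # lowercase and strip punctuation before comparing
    return set(re.findall(r"[a-z']+", text.lower()))

def select_option(transcript, options):
    """Return the index of the option with the largest word overlap,
    or None if no option shares any word with the transcript."""
    spoken = words(transcript)
    best, best_overlap = None, 0
    for i, option in enumerate(options):
        overlap = len(spoken & words(option))
        if overlap > best_overlap:
            best, best_overlap = i, overlap
    return best

options = [
    "I am sorry, but we have to let you go.",
    "Your position is being eliminated due to restructuring.",
]
# The user paraphrases option 1 rather than reading it verbatim:
select_option("the position is eliminated because of restructuring", options)  # → 1
```

Returning `None` when nothing overlaps would let the application fall back to the click-based selection mentioned above.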

 

Several options available to the user