Welcome to MSECP-Wild@ICMI2023!

The 5th Workshop on Modeling Socio-Emotional and Cognitive Processes from Multimodal Data In-the-Wild

Description

The ability to automatically estimate human users' thoughts and feelings during interactions is crucial for adaptive intelligent technology (e.g., social robots or tutoring systems). Not only can it improve user understanding, but it also holds the potential for novel scientific insights. However, creating robust models for predictions and adaptation in real-world applications remains an open problem.

The MSECP-Wild workshop series addresses this challenge in a multidisciplinary forum. This iteration of the workshop places a thematic focus on ethical considerations when developing technology for inferring and responding to internal states in the wild (e.g., privacy, consent, or bias). Therefore, in addition to contributions on overcoming the technical and conceptual challenges of this type of multimodal analysis in general, we particularly encourage submissions that help to understand and address ethical challenges in the wild. Overall, we aim for a program that stimulates discussion of the state of the art and of opportunities for future research.

Topics of interest include (but are not limited to) theoretical and empirical contributions concerning

  • The ethics of building and applying multimodal modeling for human-computer and/or human-robot interaction.
  • Modeling cognitive, affective, and social states from behavioral (e.g., facial expressions or gestures) and biological signals (e.g., EEG, EDA, EMG, HR) using multimodal machine learning.
  • Multimodal modeling of cognitive-affective processes (e.g., engagement, attention, stress, memory, or workload).
  • Modeling internal user states during social interactions (e.g., in group settings).
  • New approaches to, and robustness analyses of, machine learning on multimodal data (e.g., ablation studies).
  • Context-sensitive and context-based estimation of cognitive-affective user states (e.g., detecting and integrating context features or domain adaptation approaches).
  • Affect- or cognition-adaptive human-computer interfaces and human-robot interaction (e.g., social robotics).
  • Studies on multimodal data that bridge the laboratory and the wild, or otherwise broadly different populations or situations.
  • Multimodal data sets for modeling socio-emotional and cognitive processes (especially corpora spanning different contextual settings).

Important Dates

All deadlines are set at 23:59 PDT (GMT-7):

  • Submissions: 23 July, 2023
  • Notification to Authors: 8 August, 2023
  • Camera-ready: 13 August, 2023
  • Workshop date: 9 October, 2023

Submission

We invite submissions of the following types for presentation at the workshop:

  • Long paper: maximum length is 8 pages (excl. references)
  • Short paper: maximum length is 4 pages (excl. references)

Reviewing will be single-blind, i.e., submissions should not be anonymized. Papers must be submitted in PDF format. In all other respects, submissions should follow the ICMI author guidelines.

Each paper will be sent to at least two expert reviewers and will have one of the organizers assigned as editor. A sufficient number of external reviewers from all relevant areas has been identified at the participating institutions and within the organizers' network.

Submission to the workshop is now closed. Papers were submitted via the Microsoft Conference Management Toolkit: https://cmt3.research.microsoft.com/MSECPWild2023

Program

The workshop takes place on October 9, 2023, in Room 108 at Sorbonne University, Campus Pierre & Marie Curie, in the center of Paris: 4 Place Jussieu, 75005 Paris.

See here for a description: https://icmi.acm.org/2023/Conference/

All times are CEST (Central European Summer Time, UTC+2), i.e., the local time for in-person ICMI attendees in Paris, France.

09:00-09:10 (10min) 

Welcome Note

09:10-09:55 (45min)

Invited Talk: Dr. Jonathan Gratch
Reasoning about emotional expressions in context

 

09:55-10:30 (35min)

Brainstorming in Groups

10:30-10:45 (15min)

Coffee Break

10:45-11:15 (30min)

Brainstorming in Groups

11:15-11:25 (10min)

Paper Blitz Talks -- Session 1

  • Guidelines for designing and building an automated multimodal textual annotation system
    Joshua Kim (University of Sydney)*; Kalina Yacef (University of Sydney)
  • GraphITTI: Attributed Graph-based Dominance Ranking in Social Interaction Videos
    Garima Sharma (Monash University)*; Shreya Ghosh (Curtin University); Abhinav Dhall (Indian Institute of Technology Ropar); Munawar Hayat (Monash University); Jianfei Cai (Monash University); Tom Gedeon (Curtin University)

11:25-11:45 (20min)

Paper Long Talks

  • Multimodal Entrainment in Bio-Responsive Multi-User VR Interactives
    Steve DiPaola (Simon Fraser University)*

 

11:45-13:30 (105min)

Lunch Break

13:30-14:15 (45min)

Invited Talk: Dr. Theodora Chaspari (Virtual)
Challenges and opportunities in fostering trustworthy machine learning for multimodal data modeling in-the-wild

14:15-15:00 (45min)

Brainstorming in Groups

15:00-15:15 (15min)

Coffee Break

15:15-15:30 (15min)

Paper Blitz Talks -- Session 2

  • SMYLE: A new multimodal resource of talk-in-interaction including neuro-physiological signal
    Auriane Boudin (ILCB)*; Roxane Bertrand (LPL); Stéphane Rauzy (LPL); Matthis Houlès (ILCB); Thierry Legou (ILCB); Magalie Ochs (ILCB); Philippe Blache (ILCB)
  • Multiscale Contextual Learning for Speech Emotion Recognition in Emergency Call Center Conversations
    Theo Deschamps-Berger (Paris-Saclay University, CNRS-LISN)*; Lori Lamel (CNRS LISN); Laurence Y. Devillers (LISN-CNRS)
  • A multi-tasking multi-modal approach for predicting discrete and continuous emotions
    Alex-Răzvan Ispas (LISN-CNRS)*; Laurence Y. Devillers (LISN-CNRS)

15:30-16:15 (45min)

Poster Presentations

  • SMYLE: A new multimodal resource of talk-in-interaction including neuro-physiological signal
    Auriane Boudin (ILCB)*; Roxane Bertrand (LPL); Stéphane Rauzy (LPL); Matthis Houlès (ILCB); Thierry Legou (ILCB); Magalie Ochs (ILCB); Philippe Blache (ILCB)
  • Multiscale Contextual Learning for Speech Emotion Recognition in Emergency Call Center Conversations
    Theo Deschamps-Berger (Paris-Saclay University, CNRS-LISN)*; Lori Lamel (CNRS LISN); Laurence Y. Devillers (LISN-CNRS)
  • A multi-tasking multi-modal approach for predicting discrete and continuous emotions
    Alex-Răzvan Ispas (LISN-CNRS)*; Laurence Y. Devillers (LISN-CNRS)
  • Guidelines for designing and building an automated multimodal textual annotation system (Virtual)
    Joshua Kim (University of Sydney)*; Kalina Yacef (University of Sydney)
  • GraphITTI: Attributed Graph-based Dominance Ranking in Social Interaction Videos (Virtual)
    Garima Sharma (Monash University)*; Shreya Ghosh (Curtin University); Abhinav Dhall (Indian Institute of Technology Ropar); Munawar Hayat (Monash University); Jianfei Cai (Monash University); Tom Gedeon (Curtin University)

16:15-16:30 (15min)

Plenary Discussion

16:30-16:40 (10min)

Closing

Invited Speakers

Jonathan Gratch

Jonathan Gratch (USC Institute for Creative Technologies)

Jonathan Gratch is Director for Virtual Human Research at the University of Southern California’s (USC) Institute for Creative Technologies, a Research Full Professor of Computer Science and Psychology at USC, and Director of USC’s Computational Emotion Group. He completed his Ph.D. in Computer Science at the University of Illinois in Urbana-Champaign in 1995. Dr. Gratch’s research focuses on computational models of human cognitive and social processes, especially emotion, and explores these models’ role in shaping human-computer interactions in virtual environments. In particular, he studies the relationship between cognition and emotion, the cognitive processes underlying emotional responses, and the influence of emotion on decision making and physical behavior. He is the founding Editor-in-Chief (retired) of IEEE Transactions on Affective Computing, Associate Editor of Affective Science, Emotion Review, and the Journal of Autonomous Agents and Multiagent Systems, and former President of the Association for the Advancement of Affective Computing (AAAC). He is a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI), AAAC, and the Cognitive Science Society, a SIGART Autonomous Agents Award recipient, and a Senior Member of IEEE. Dr. Gratch is the author of over 300 technical articles.

Talk: Reasoning about emotional expressions in context
Affective computing has long focused on recognizing emotion, often drawing on multiple modalities, but typically focusing on a single individual (even when they are engaged in a social task) and frequently ignoring the social or task context that surrounds the expression. In this talk, I will discuss several recent projects examining the social functions of spontaneously produced facial expressions within the context of social tasks. I will illustrate how the interpretation of such expressions is strongly shaped by this context, and how, by leveraging recent developments in large language models, it is possible to integrate evidence from expressions and situations to improve recognition accuracy and yield insight into the interpretation and social function of emotional displays in interdependent tasks.

Theodora Chaspari

Theodora Chaspari (Texas A&M University)

Theodora Chaspari is an Assistant Professor in the Computer Science & Engineering Department at Texas A&M University. She received her Bachelor of Science (2010) in Electrical & Computer Engineering from the National Technical University of Athens, Greece, and her Master of Science (2012) and Ph.D. (2017) in Electrical Engineering from the University of Southern California. Theodora’s research interests lie in the areas of health analytics, affective computing, data science, and machine learning. She is a recipient of the NSF CAREER Award (2021), and papers co-authored with her students have been nominated for and have won awards at the ASC 2021, ACM BuildSys 2019, IEEE ACII 2019, ASCE i3CE 2019, and IEEE BSN 2018 conferences. She serves as an Editor of Elsevier’s Computer Speech & Language and as a Guest Editor for IEEE Transactions on Affective Computing. Her work is supported by federal and private funding sources, including the NSF, NIH, NASA, IARPA, AFRL, AFOSR, General Motors, the Keck Foundation, and the Engineering Information Foundation.

Talk: Challenges and opportunities in fostering trustworthy machine learning for multimodal data modeling in-the-wild
The growing prevalence of smartphones and wearable devices has made it possible to monitor human conditions beyond laboratory settings, resulting in the collection of real-world, multimodal, longitudinal data. This data forms the basis for creating automated algorithms that can track an individual's internal and contextual conditions. However, developing machine learning (ML) models using real-world data centered on humans poses distinct computational and ethical challenges. This presentation will discuss the challenges and potential mitigation strategies associated with modeling socio-emotional and cognitive processes using multi-modal data, including issues such as the inherent inter-individual variability, the varying occurrence rates of specific events, and concerns and strategies related to safeguarding personally identifiable information and addressing potential biases in ML outcomes. Additionally, the talk explores approaches to designing explainable ML models and ways for healthcare professionals to interact with these models to make informed decisions effectively. The societal and ethical implications of this research will be discussed, particularly concerning the domains of mental health and emotional well-being.

Organizing Committee

Bernd Dudzik
Delft University of Technology
Tiffany Matej Hrkalovic
Free University Amsterdam
Dennis Küster
University of Bremen
David St-Onge
École de technologie supérieure
Felix Putze
University of Bremen
Laurence Devillers
LISN-CNRS/Sorbonne University

Program Committee