MSECP-Wild@ICMI2023

Welcome to MSECP-Wild@ICMI2023!

The 5th Workshop on Modeling Socio-Emotional and Cognitive Processes from Multimodal Data In-the-Wild

Description

The ability to automatically estimate human users' thoughts and feelings during interactions is crucial for adaptive intelligent technology (e.g., social robots or tutoring systems). Not only can it improve user understanding, but it also holds the potential for novel scientific insights. However, creating robust models for predictions and adaptation in real-world applications remains an open problem.

The MSECP-Wild workshop series addresses this challenge in a multidisciplinary forum. This iteration places a thematic focus on the ethical considerations involved in developing technology that infers and responds to internal states in the wild (e.g., privacy, consent, or bias). Alongside contributions that tackle the general technical and conceptual challenges of this type of multimodal analysis, we therefore particularly encourage submissions that help understand and address ethical challenges in the wild. Overall, we aim for a program that stimulates discussion of the state of the art and highlights opportunities for future research.

Topics of interest include (but are not limited to) theoretical and empirical contributions concerning:

  • The ethics of building and applying multimodal modeling for human-computer and/or human-robot interaction.
  • Modeling cognitive, affective, and social states from behavioral signals (e.g., facial expressions or gestures) and biological signals (e.g., EEG, EDA, EMG, HR) using multimodal machine learning.
  • Multimodal modeling of cognitive-affective processes (e.g., engagement, attention, stress, memory, or workload).
  • Modeling internal user states during social interactions (e.g., in group settings).
  • New approaches and robustness of machine learning on multimodal data (e.g., ablation studies).
  • Context-sensitive and context-based estimation of cognitive-affective user states (e.g., detecting and integrating context features or domain adaptation approaches).
  • Affect- or cognition-adaptive human-computer interfaces and human-robot interactions (e.g., social robotics).
  • Studies on multimodal data that bridge the laboratory and the wild, or otherwise broadly different populations or situations.
  • Multimodal data sets for modeling socio-emotional and cognitive processes (especially corpora spanning different contextual settings).

Important Dates

All deadlines are set at 23:59 PDT (GMT-7):

  • Submissions: 23 July 2023
  • Notification to Authors: 8 August 2023
  • Camera-ready: 13 August 2023
  • Workshop date: 9 October 2023

Submission

We invite submissions of the following types for presentation at the workshop:

  • Long paper: maximum length is 8 pages (excl. references)
  • Short paper: maximum length is 4 pages (excl. references)

Reviews will be single-blind, i.e., submissions should not be anonymized. Papers must be submitted in PDF format. In all other aspects, submissions should follow the ICMI author guidelines.

Each paper will be sent to at least two expert reviewers, with one of the organizers assigned as its editor. A sufficient pool of external reviewers covering all areas has been identified at the participating institutions and within the organizers' network.

Papers can be submitted via the Microsoft Conference Management Toolkit here: https://cmt3.research.microsoft.com/MSECPWild2023

Program

The workshop takes place on October 9, 2023, at Sorbonne University, Campus Pierre & Marie Curie, in the center of Paris: 4 Place Jussieu, 75005 Paris.

All times are CEST (Central European Summer Time, UTC+2), i.e., the local time for ICMI in-person attendees in Paris, France.

09:00-09:10 (10min) 

Welcome Note

09:10-09:55 (45min)

Invited Talk: Dr. Jonathan Gratch

09:55-10:30 (35min)

Brainstorming in Groups

10:30-10:45 (15min)

Coffee Break

10:45-11:00 (15min)

Brainstorming in Groups

11:00-11:15 (15min)

Paper Blitz Talks -- Session 1

  • Guidelines for designing and building an automated multimodal textual annotation system
    Joshua Kim (University of Sydney)*; Kalina Yacef (University of Sydney)
  • SMYLE: A new multimodal resource of talk-in-interaction including neuro-physiological signal
    Auriane Boudin (ILCB)*; Roxane Bertrand (LPL); Stéphane Rauzy (LPL); Matthis Houlès (ILCB); Thierry Legou (ILCB); Magalie Ochs (ILCB); Philippe Blache (ILCB)
  • Multimodal Entrainment in Bio-Responsive Multi-User VR Interactives
    Steve DiPaola (Simon Fraser University)*

11:15-12:00 (45min)

Poster Presentation -- Session 1

  • Guidelines for designing and building an automated multimodal textual annotation system
    Joshua Kim (University of Sydney)*; Kalina Yacef (University of Sydney)
  • SMYLE: A new multimodal resource of talk-in-interaction including neuro-physiological signal
    Auriane Boudin (ILCB)*; Roxane Bertrand (LPL); Stéphane Rauzy (LPL); Matthis Houlès (ILCB); Thierry Legou (ILCB); Magalie Ochs (ILCB); Philippe Blache (ILCB)
  • Multimodal Entrainment in Bio-Responsive Multi-User VR Interactives
    Steve DiPaola (Simon Fraser University)*

12:00-13:30 (90min)

Lunch Break

13:30-14:15 (45min)

Invited Talk: Dr. Theodora Chaspari (Virtual)

14:15-15:00 (45min)

Brainstorming in Groups

15:00-15:15 (15min)

Coffee Break

15:15-15:30 (15min)

Paper Blitz Talks -- Session 2

  • GraphITTI: Attributed Graph-based Dominance Ranking in Social Interaction Videos
    Garima Sharma (Monash University)*; Shreya Ghosh (Curtin University); Abhinav Dhall (Indian Institute of Technology Ropar); Munawar Hayat (Monash University); Jianfei Cai (Monash University); Tom Gedeon (Curtin University)
  • Multiscale Contextual Learning for Speech Emotion Recognition in Emergency Call Center Conversations
    Theo Deschamps-Berger (Paris-Saclay University, CNRS-LISN)*; Lori Lamel (CNRS LISN); Laurence Y. Devillers (LISN-CNRS)
  • A multi-tasking multi-modal approach for predicting discrete and continuous emotions
    Alex-Răzvan Ispas (LISN-CNRS)*; Laurence Y. Devillers (LISN-CNRS)

15:30-16:15 (45min)

Poster Presentation -- Session 2

  • GraphITTI: Attributed Graph-based Dominance Ranking in Social Interaction Videos
    Garima Sharma (Monash University)*; Shreya Ghosh (Curtin University); Abhinav Dhall (Indian Institute of Technology Ropar); Munawar Hayat (Monash University); Jianfei Cai (Monash University); Tom Gedeon (Curtin University)
  • Multiscale Contextual Learning for Speech Emotion Recognition in Emergency Call Center Conversations
    Theo Deschamps-Berger (Paris-Saclay University, CNRS-LISN)*; Lori Lamel (CNRS LISN); Laurence Y. Devillers (LISN-CNRS)
  • A multi-tasking multi-modal approach for predicting discrete and continuous emotions
    Alex-Răzvan Ispas (LISN-CNRS)*; Laurence Y. Devillers (LISN-CNRS)

16:15-16:30 (15min)

Plenary Discussion

16:30-16:40 (10min)

Closing

Invited Speakers

Jonathan Gratch (USC Institute for Creative Technologies)

Jonathan Gratch is Director for Virtual Human Research at the University of Southern California’s (USC) Institute for Creative Technologies, a Research Full Professor of Computer Science and Psychology at USC, and Director of USC’s Computational Emotion Group. He completed his Ph.D. in Computer Science at the University of Illinois in Urbana-Champaign in 1995. Dr. Gratch’s research focuses on computational models of human cognitive and social processes, especially emotion, and explores these models’ role in shaping human-computer interactions in virtual environments. In particular, he studies the relationship between cognition and emotion, the cognitive processes underlying emotional responses, and the influence of emotion on decision making and physical behavior. He is the founding Editor-in-Chief of IEEE’s Transactions on Affective Computing (retired), Associate Editor of Affective Science, Emotion Review, and the Journal of Autonomous Agents and Multiagent Systems, and former President of the Association for the Advancement of Affective Computing (AAAC). He is a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI), AAAC, and the Cognitive Science Society, a SIGART Autonomous Agents Award recipient, and a Senior Member of IEEE. Dr. Gratch is the author of over 300 technical articles.

Theodora Chaspari (Texas A&M University)

Theodora Chaspari is an Assistant Professor in the Computer Science & Engineering Department at Texas A&M University. She received her Bachelor of Science (2010) in Electrical & Computer Engineering from the National Technical University of Athens, Greece, and her Master of Science (2012) and Ph.D. (2017) in Electrical Engineering from the University of Southern California. Theodora’s research interests lie in the areas of health analytics, affective computing, data science, and machine learning. She is a recipient of the NSF CAREER Award (2021), and papers co-authored with her students have been nominated for and won awards at the ASC 2021, ACM BuildSys 2019, IEEE ACII 2019, ASCE i3CE 2019, and IEEE BSN 2018 conferences. She serves as an Editor of Elsevier's Computer Speech & Language and as a Guest Editor of IEEE Transactions on Affective Computing. Her work is supported by federal and private funding sources, including the NSF, NIH, NASA, IARPA, AFRL, AFOSR, General Motors, the Keck Foundation, and the Engineering Information Foundation.

Organizing Committee

Bernd Dudzik
Delft University of Technology

Tiffany Matej Hrkalovic
Free University Amsterdam

Dennis Küster
University of Bremen

David St-Onge
École de technologie supérieure

Felix Putze
University of Bremen

Laurence Devillers
LISN-CNRS/Sorbonne University

Program Committee