Tutorial (ICALT 2011): How to Do Multimodal Detection of Affective States?

IEEE International Conference on Advanced Learning Technologies
Athens, Georgia, USA. July 2011
Tutorial.

 
 
Abstract

The computer’s ability to recognize human emotional states from physiological signals is increasingly used to create empathetic systems such as learning environments, health care systems, and videogames. Despite this, there are few frameworks, libraries, architectures, or software tools that allow system developers to easily integrate emotion recognition into their software projects. The work reported here offers a first step toward filling this gap, addressing: (a) the modeling of an agent-driven, component-based architecture for multimodal emotion recognition, called ABE, and (b) the use of ABE to implement a multimodal emotion recognition framework that supports third-party systems in becoming empathetic systems.
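
The actual ABE API is not shown in this post, so the Java sketch below is only an illustration of the general pattern such a framework could expose to a third-party system: sensor components produce readings from several modalities, a recognizer fuses them, and subscribed applications receive affective-state updates. Every name here (AffectListener, SensorReading, EmotionRecognizer) and the trivial averaging fusion step are assumptions of mine, not ABE's design.

// Hypothetical sketch; not the ABE API.
import java.util.ArrayList;
import java.util.List;

enum AffectiveState { ENGAGED, BORED }

/** Callback a third-party system implements to receive affect updates. */
interface AffectListener {
    void onAffectDetected(AffectiveState state, double confidence);
}

/** One reading from a single modality (e.g., EEG, skin conductance, face). */
record SensorReading(String modality, double value) {}

/** Minimal stand-in for a multimodal recognizer component. */
class EmotionRecognizer {
    private final List<AffectListener> listeners = new ArrayList<>();

    public void addListener(AffectListener l) { listeners.add(l); }

    /** Fuse readings from several modalities and notify subscribers. */
    public void process(List<SensorReading> readings) {
        // Placeholder fusion: average normalized values and map to a state.
        double mean = readings.stream()
                              .mapToDouble(SensorReading::value)
                              .average().orElse(0.5);
        AffectiveState state = mean > 0.5 ? AffectiveState.ENGAGED : AffectiveState.BORED;
        for (AffectListener l : listeners) {
            l.onAffectDetected(state, Math.abs(mean - 0.5) * 2);
        }
    }
}

/** Example: a learning environment reacting to the learner's detected state. */
public class EmpatheticTutorDemo {
    public static void main(String[] args) {
        EmotionRecognizer recognizer = new EmotionRecognizer();
        recognizer.addListener((state, confidence) ->
            System.out.printf("Detected %s (confidence %.2f)%n", state, confidence));

        recognizer.process(List.of(
            new SensorReading("eeg", 0.7),
            new SensorReading("skin-conductance", 0.8)));
    }
}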

Slides

These are my slides for the tutorial; any comments are more than welcome.

Reference

Gonzalez-Sanchez, J., Christopherson, R., Chavez-Echeagaray, M. E., Gibson, D., Atkinson, R., and Burleson, W. (2011). How to Do Multimodal Detection of Affective States? In Proceedings of the 11th IEEE International Conference on Advanced Learning Technologies (ICALT 2011), Athens, Georgia, USA, July 2011. IEEE, pp. 654-655. ISSN: 2161-3761. doi:10.1109/ICALT.2011.206.