I am presenting a short talk at the 42nd Conference of Research and Development of Tecnologico de Monterrey (CIDTEC) in Monterrey, Nuevo Leon, Mexico, in January 2012.
The ability of computers to recognize human emotional states from physiological signals is gaining popularity as a way to create empathetic systems such as learning environments, health care systems, and video games. Despite this interest, there are few frameworks, libraries, architectures, or software tools that allow system developers to integrate emotion recognition into their projects. This work offers a first step toward filling that gap by addressing: (a) the modeling of an agent-driven, component-based architecture for multimodal emotion recognition, called ABE, and (b) the use of ABE to implement a multimodal emotion recognition framework that supports third-party systems in becoming empathetic systems.

In this paper we presented ABE, our proposal for a multimodal emotion recognition framework that supports the creation of empathetic systems. The work is rooted in an agent-based approach under a multilayer, distributed architecture oriented toward highly reusable, flexible, and extensible software components. We have integrated both novel and well-established sensing devices into ABE, including brain-computer interfaces, eye-tracking systems, computer vision systems, and physiological sensors. We illustrated the use of ABE in practice by building software for two different scenarios: a gaming study in which we sensed the emotional state of students while they played a video game, and an affective tutoring system development project. In both cases, integrating ABE proved reasonably easy and delivered good performance. The next steps for ABE focus on: (a) refactoring components, (b) publishing API documentation, and (c) adding support agents, such as loggers and visualizers, to form a dashboard interface.
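To make the agent-based idea concrete, here is a minimal sketch of how a sensing device could be wrapped as an agent that publishes readings to a fusion agent. All names here (SensorAgent, FusionAgent, Reading) are hypothetical illustrations of the publish/subscribe style described above, not ABE's actual API, and the simple averaging rule is a placeholder for a real fusion model.

```python
# Hypothetical sketch of an agent-based multimodal sensing pipeline.
# Class and method names are illustrative only, not ABE's real interfaces.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Reading:
    channel: str   # e.g. "eeg", "gaze", "face", "gsr"
    value: float   # normalized evidence in [0, 1]


class SensorAgent:
    """Wraps one sensing device and publishes its readings to subscribers."""

    def __init__(self, channel: str) -> None:
        self.channel = channel
        self.subscribers: List[Callable[[Reading], None]] = []

    def subscribe(self, callback: Callable[[Reading], None]) -> None:
        self.subscribers.append(callback)

    def emit(self, value: float) -> None:
        reading = Reading(self.channel, value)
        for callback in self.subscribers:
            callback(reading)


class FusionAgent:
    """Keeps the latest reading per channel and combines them into one estimate."""

    def __init__(self) -> None:
        self.latest: Dict[str, float] = {}

    def on_reading(self, reading: Reading) -> None:
        self.latest[reading.channel] = reading.value

    def arousal(self) -> float:
        # Placeholder fusion: unweighted average across modalities.
        return sum(self.latest.values()) / len(self.latest)


# Wire two hypothetical devices into the fusion agent.
fusion = FusionAgent()
eeg = SensorAgent("eeg")
gsr = SensorAgent("gsr")
eeg.subscribe(fusion.on_reading)
gsr.subscribe(fusion.on_reading)

eeg.emit(0.8)
gsr.emit(0.4)
print(fusion.arousal())  # 0.6 with these sample values
```

The design choice this illustrates is the one the architecture aims for: each device lives behind its own agent, so adding a new modality means adding a new SensorAgent subscriber rather than changing the fusion logic.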
Besides that, studying test-case scenarios for reactive systems has become more relevant to keeping latency at a level useful for real-time interaction; therefore, integrating parallel and multicore computing models is also on our list of next steps. This research was supported by the Office of Naval Research under Grant N00014-10-1-0143, awarded to Dr. Robert Atkinson.
The original ABE paper was published at the Ninth Working IEEE/IFIP Conference on Software Architecture.
Gonzalez-Sanchez, J., Chavez-Echeagaray, M., Atkinson, R., & Burleson, W. (2012). ABE: An Agent-Based Software Architecture for a Multimodal Emotion Recognition Framework. Companion of the 42nd Conference of Research and Development by Tecnologico de Monterrey, Monterrey, Nuevo Leon, Mexico, January 2012, p. 204. ISBN: 978-607-501-073-1.