Modeling and Design Analysis of Facial Expressions of Humanoid Social Robots


I am serving as a member of Shweta Murthy’s Graduate Supervisory Committee. The committee includes Dr. Ashraf Gaffar (chair), Dr. Arbi Ghazarian (member), and myself.

The thesis defense is on April 13, 2017, at 4:00 p.m. MST, in Peralta Hall, room 230W.

Abstract

Much research in social robotics concentrates on particular aspects of social robots, including the design of mechanical parts and their movement, and cognitive capabilities such as speech and face recognition. Several robots have been developed with the intention of being social, like humans, without much emphasis on how human-like they actually look in terms of expressions and behavior. Furthermore, there is a substantial disparity between the success of research on "humanizing" robot behavior, that is, making robots behave in a more human-like way, and the success of research on bipedal movement, movement of individual body parts such as arms, fingers, and eyeballs, or human-like appearance itself. The research in this thesis examines why current work on the facial expressions of social humanoid robots fails to gain full acceptance in society, owing to the uncanny valley effect. It frames the current facial expression research problem as an information retrieval problem. The thesis first characterizes the current design method for the facial expressions of social robots, then uses deep learning as a similarity evaluation technique to measure the humanness of the facial expressions produced by that method, and finally proposes a novel deep-learning-based approach to the facial expression design of humanoids.
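One way to read "deep learning as a similarity evaluation technique" is to embed both robot and human facial-expression images with a pretrained network and compare the resulting feature vectors. The sketch below assumes such embeddings already exist as NumPy vectors; the function names and the mean-cosine-similarity scoring scheme are illustrative assumptions, not the thesis's actual method.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def humanness_score(robot_embedding: np.ndarray,
                    human_embeddings: list[np.ndarray]) -> float:
    """Score a robot-expression embedding by its mean similarity to a
    reference set of human-expression embeddings; higher means more human-like."""
    return float(np.mean([cosine_similarity(robot_embedding, h)
                          for h in human_embeddings]))

# Toy illustration with 3-dimensional placeholder embeddings
# (a real pipeline would use high-dimensional CNN features).
human_refs = [np.array([0.9, 0.1, 0.0]), np.array([0.8, 0.2, 0.0])]
robot_expr = np.array([0.85, 0.15, 0.0])
print(humanness_score(robot_expr, human_refs))
```

In practice the embeddings would come from a network trained on face images, so that distance in embedding space correlates with perceptual similarity; the scoring function itself stays the same.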