Approach/Description
Human-robot interaction requires a rich repertoire of robust motions that must be concurrently planned, coordinated, and executed while ensuring, at all times, not only the full safety of human users but also human acceptance.
Specifically, while robots for the home environment have already proven successful as commercial products (e.g., the Roomba vacuum cleaner), these are usually single-purpose and not fully socially competent.
Yet, visions abound of human service robots sophisticated enough to be accepted in humans' private homes: robots whose range of motions, and the planning thereof, not only enables them to carry out typical domestic tasks (e.g., vacuuming rooms, or grasping and delivering beverages at cocktail parties), but to do so in a socially appropriate manner. Indeed, for robots to be socially appropriate in human private spaces, behaving according to patterns found in human-human social interaction and communication is likely to feel more natural, pleasant, and easy to engage with than otherwise. For example, if a human user addresses a robot while it is cleaning a table, the robot may need to engage in new motions concurrent with those required by its "main" task: turning slightly toward the user to express interest (at the very least), nodding to express understanding, and establishing eye contact to convey a sense of commitment, among other socially appropriate behaviors.
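The following is a minimal, hypothetical sketch of this kind of concurrent behavior overlay: a short social response runs in parallel with the ongoing task rather than interrupting it. All class and method names (TaskExecutor, SocialOverlay, on_user_addressed) are illustrative placeholders and do not correspond to any existing RoboCanes or HSR API.

```python
# Sketch only: overlaying brief social behaviors on a running primary task.
import threading
import time


class TaskExecutor:
    """Runs the robot's primary task (e.g., clearing a table) step by step."""

    def __init__(self, steps):
        self.steps = list(steps)

    def run(self):
        for step in self.steps:
            print(f"[task] executing: {step}")
            time.sleep(0.5)


class SocialOverlay:
    """Triggers brief social behaviors (head turn, gaze, nod) concurrently
    with the primary task instead of aborting it."""

    def on_user_addressed(self):
        # Run the social response in its own thread so the main task continues.
        threading.Thread(target=self._acknowledge, daemon=True).start()

    def _acknowledge(self):
        print("[social] turning slightly toward the user")
        print("[social] establishing eye contact")
        print("[social] nodding to signal understanding")


if __name__ == "__main__":
    executor = TaskExecutor(["grasp cup", "wipe table", "place cup on tray"])
    overlay = SocialOverlay()
    task_thread = threading.Thread(target=executor.run)
    task_thread.start()
    time.sleep(0.7)              # a user addresses the robot mid-task
    overlay.on_user_addressed()
    task_thread.join()
```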
We propose to conduct research on the Toyota HSR to advance the state of the art in social human service robots, for which RoboCup@Home represents one of the best test-bed environments. Our human-robot interaction research spans socially intelligent user interfaces and robotics. Our approach is to leverage the latest progress in affective social computing, socially intelligent agents, and AI robotics to address the RoboCup@Home challenges.
Indeed, Toyota’s HSR, with its multi-DOF arm, full mobility, high-end sound, a microphone array suited for voice recognition in noisy environments, and a computer display, is an excellent platform on which to embody the integration of UM’s RoboCanes agent with FIU’s VISAGE agent. The RoboCanes agent will be mainly responsible for managing and controlling navigation, object manipulation, grasping, and related capabilities, while the VISAGE agent will handle automatic face and facial-expression recognition, voice recognition and synthesis, and 3D-graphics facial and gesture synthesis. Integration and coordination of both agents toward a coherent and engaging multimodal model of communication with the human user will be conducted in tandem by our collaborating team of researchers, conveniently co-located in Miami at the University of Miami and Florida International University.
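To make the division of labor concrete, the sketch below shows one plausible shape for the integration layer between the two agents, assuming a simple in-process publish/subscribe bus. The actual RoboCanes and VISAGE interfaces (likely ROS-based on the HSR) are not specified here, so all topics, handlers, and payload fields are hypothetical.

```python
# Sketch only: a minimal message bus coordinating the RoboCanes and VISAGE agents.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Event:
    topic: str      # e.g. "visage/speech" or "robocanes/task" (illustrative names)
    payload: dict


class MessageBus:
    """Minimal publish/subscribe bus used here as a stand-in for the real middleware."""

    def __init__(self):
        self._subscribers: Dict[str, List[Callable[[Event], None]]] = {}

    def subscribe(self, topic: str, handler: Callable[[Event], None]) -> None:
        self._subscribers.setdefault(topic, []).append(handler)

    def publish(self, event: Event) -> None:
        for handler in self._subscribers.get(event.topic, []):
            handler(event)


def robocanes_handle_speech(event: Event) -> None:
    # RoboCanes side: turn an understood request into navigation/manipulation plans.
    print(f"[RoboCanes] planning task for request: {event.payload['utterance']}")


def visage_handle_task_state(event: Event) -> None:
    # VISAGE side: reflect task progress in speech and facial/gesture synthesis.
    print(f"[VISAGE] verbal/facial feedback for state: {event.payload['state']}")


if __name__ == "__main__":
    bus = MessageBus()
    bus.subscribe("visage/speech", robocanes_handle_speech)
    bus.subscribe("robocanes/task", visage_handle_task_state)

    # VISAGE recognizes a spoken request and publishes it for RoboCanes.
    bus.publish(Event("visage/speech", {"utterance": "please bring me a drink"}))
    # RoboCanes reports task progress back for VISAGE to verbalize and animate.
    bus.publish(Event("robocanes/task", {"state": "navigating to kitchen"}))
```

In a deployed system this role would most likely be played by the robot's existing middleware (e.g., ROS topics and services on the HSR) rather than a custom bus; the sketch is only meant to illustrate how perception events from VISAGE and task-state updates from RoboCanes could flow through a shared coordination layer.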