Buenaventura IEEE Engineering in Medicine and Biology

(Canceled due to #EasyFire) AI-based Multi-modal Human-robot Interaction

Wednesday, October 30, 2019 at 7 PM
CLU Gilbert Sports and Fitness Center, 130 Overton Court, rooms 253/254 (second floor)

There is a growing need for service robots that can support independent living of the elderly and people with disabilities, as well as robots that can assist human workers in a warehouse or on a factory floor. However, robots that collaborate with humans should act predictably and ensure that the interaction is safe and effective. Therefore, when humans and robots collaborate, for example during Activities of Daily Living (ADLs), robots should be able to recognize human actions and intentions and produce appropriate responses. To do so, it is crucial to understand how two humans interact during a collaborative task and how they perform it. Humans employ multiple communication modalities when engaging in collaborative activities; similarly, service robots require information from multiple sensors to plan their actions based on the interaction and the task states.

Service robots for the elderly require information from multiple modalities to maintain active interaction with a human while performing interactive tasks. We study in detail a scenario in which a human and a service robot collaborate to find an object in the kitchen (the Find Task) so it can be used in a subsequent task such as cooking. Based on data collected during human studies, we develop an Interaction Manager that allows the robot to actively participate in the interaction and plan its next action given human spoken utterances, observed manipulation actions, and gestures. We develop multiple modules for the robot in the Robot Operating System (ROS): vision-based H-O action recognition, vision-based gesture recognition, speech recognition using the Google speech recognition API, a dialogue tool that includes a multimodal dialogue act (DA) classifier to determine the speaker's intention, and the Interaction Manager itself. The proposed system is validated on two different robot platforms: a Baxter robot and a Nao robot. A preliminary user study provides evidence that, using the developed multimodal Interaction Manager, the robot can successfully interact with the human in the Find Task.
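The announcement does not include implementation details, but as a rough illustration of how such an Interaction Manager might be wired together in ROS, the sketch below subscribes to hypothetical topics for dialogue acts, gestures, and observed actions, and publishes a simple next-action command. The topic names, message types, and fusion rule are assumptions for illustration only, not the speaker's actual system.

```python
#!/usr/bin/env python
# Minimal sketch of an Interaction Manager node in ROS (rospy).
# Topic names, message types, and the decision rule below are
# illustrative assumptions, not the system described in the talk.
import rospy
from std_msgs.msg import String


class InteractionManager(object):
    def __init__(self):
        # Latest observation from each modality (None until first message).
        self.speech_da = None   # dialogue act from a DA classifier
        self.gesture = None     # e.g. "point_left", "point_right"
        self.ho_action = None   # e.g. "open_cabinet", "grasp_object"

        rospy.Subscriber("/dialogue_act", String, self.on_speech)
        rospy.Subscriber("/gesture", String, self.on_gesture)
        rospy.Subscriber("/ho_action", String, self.on_action)
        self.cmd_pub = rospy.Publisher("/robot_command", String, queue_size=10)

    def on_speech(self, msg):
        self.speech_da = msg.data
        self.decide()

    def on_gesture(self, msg):
        self.gesture = msg.data
        self.decide()

    def on_action(self, msg):
        self.ho_action = msg.data
        self.decide()

    def decide(self):
        # Toy fusion rule: a directive utterance plus a pointing gesture
        # triggers a search of the indicated location; a question triggers
        # a clarification response.
        if self.speech_da == "directive" and self.gesture == "point_left":
            self.cmd_pub.publish(String(data="search_left_cabinet"))
        elif self.speech_da == "directive" and self.gesture == "point_right":
            self.cmd_pub.publish(String(data="search_right_cabinet"))
        elif self.speech_da == "question":
            self.cmd_pub.publish(String(data="answer_or_clarify"))


if __name__ == "__main__":
    rospy.init_node("interaction_manager")
    InteractionManager()
    rospy.spin()
```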

Dr. Bahareh Abbasi

Dr. Bahareh Abbasi received her B.S. degree in Electrical Engineering from the University of Tehran, Iran, in 2013. She received her Ph.D. in Electrical and Computer Engineering from the University of Illinois at Chicago, where she also completed her M.S. in ECE. She has been with CCC Information Services, Inc. as a research and development engineering intern since 2017. Her primary research interests include robotics, haptics, multimodal human–robot interaction, and machine learning. Her work has been published in prestigious conferences and journals such as TRO, ICRA, IROS, RO-MAN, and ISMR.


Meeting Site: California Lutheran University Gilbert Sports and Fitness Center,
Second Floor, rooms 253/254, 130 Overton Court, Thousand Oaks, CA.
Meetings are free and open to the public.
Dinner: Available at 6 p.m. for $12 payable at the door, no RSVP needed.
Parking: Free outside the Gilbert Sports and Fitness Center.
Contact: Steve Johnson, sfjohnso@ieee.org
Our Sponsors: La Reina High School and Middle School, California Lutheran University, IEEE EMB Society, IEEE Buenaventura Section