Problem

Research

There is much work being done in robotics on the capability of robots to recognize and interpret human gesture. If a robot can use human movements as input information to coordinate its actions and anticipate our needs, a critical requirement for sharing our spaces with autonomous vehicles, and for having them better serve humanity, is fulfilled. Gesture recognition is especially valuable in applications involving human/robot interaction for several reasons: "First, it provides a redundant form of communication between the user and the robot. Second, gestures are an easy way to give geometric information to the robot. Rather than give coordinates to where the robot should move, the user can simply point to a spot on the floor." (Sebastian Thrun (PI), Roseli Romero, Stefan Waldherr, Dimitris Margaritis)
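
To make the quoted pointing idea concrete: if a body tracker gives us rough 3D positions of the user's shoulder and hand, the pointed-to spot can be recovered by intersecting the shoulder-to-hand ray with the floor plane. The sketch below is illustrative only; the coordinate frame, function name, and example numbers are our own assumptions, not part of the cited work.

```python
import numpy as np

def pointed_floor_spot(shoulder, hand, floor_height=0.0):
    """Intersect the shoulder-to-hand pointing ray with the floor plane.

    shoulder, hand: hypothetical 3D positions (x, y, z) from some body
    tracker, with z as the vertical axis. Returns the (x, y) floor target,
    or None if the user is not pointing downward.
    """
    shoulder = np.asarray(shoulder, dtype=float)
    hand = np.asarray(hand, dtype=float)
    direction = hand - shoulder

    # A ray parallel to, or angled above, the floor never reaches it.
    if direction[2] >= 0:
        return None

    # Solve shoulder.z + t * direction.z == floor_height for t >= 0.
    t = (floor_height - shoulder[2]) / direction[2]
    if t < 0:
        return None

    spot = shoulder + t * direction
    return spot[0], spot[1]


# Example: hand held below and in front of the shoulder, aimed at the floor.
print(pointed_floor_spot(shoulder=(0.0, 0.0, 1.4), hand=(0.3, 0.0, 1.1)))
# -> roughly (1.4, 0.0): a spot on the floor about 1.4 m ahead of the user.
```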


The MIT Media Laboratory has been a key player in the 'robotic revolution', and several of its projects have explored the role of gesture in robotics. Several years old now, the Kismet autonomous robot "engages people in natural and expressive face-to-face interaction". A more recent robot (not yet fully autonomous), Leonardo, incorporates animatronic artistry into its overall design and, with a remarkable 61 degrees of freedom, is undoubtedly the most expressive robot today. (movie)

Research such as this represents the leading edge of expressiveness in robotics; however, it focuses on extremely advanced robots and on emulating as much human expressivity as possible. For our project we are more interested in the role gesture could play in the simpler service robots appearing on the market today, such as the Roomba.

Very little work has attempted to imbue simpler, task-oriented robots like these with a physical language that humans would naturally and quickly be able to respond to.

Two examples we found of work approaching this area come from Carnegie Mellon and the MIT Media Laboratory, respectively. Bruce, Nourbakhsh, and Simmons at Carnegie Mellon have "performed an experiment on the effects of a specific form of expressiveness and attention on people's interest to engage in a social interaction with a mobile robot."

Meanwhile, Liu and Picard at MIT write that "a Robotic Computer, which moves its monitor 'head' and 'neck' but has no explicit face, is being designed to interact with users in a natural way for applications such as learning, rapport-building, interactive teaching, and posture improvement."


Gestures

One of the difficulties with interpreting the gestures of an autonomous vehicle is the subjectivity of those who perceive its movements. For example, what one person sees as a friendly nod could be perceived by someone else as a threatening glare. A downward glance to the left could be an attempt to ignore another person or the segue into a change of direction.

"Often it's forgotten that much of communication between people actually takes place non-verbally through gestures and facial movements." Bob Mottram, 2002 (Creator of 'Rodney')

Yet, when one considers the navigation tasks a person must perform to successfully traverse a crowded public space, a common repertoire of signals could likely be identified (verifying this would be a future research task to support some of our statements).


Scenario 1: Rushing through the corridors of the SFU Surrey Campus, our eager young student Maggie is trying desperately to make it to her class on time. Passing students dodge and weave around her as she cruises down the hallway, when suddenly she turns a corner and comes face to face with one of the new line of auto-maintenance bots on its way to a support call. The autonomous vehicle leans to the left to go around her, and, as in a typical human-to-human avoidance scenario, Maggie also goes to her left. The bot, sensing the collision danger, quickly changes direction with a quick lean to the right. Maggie, intuitively 'knowing' that the bot has altered its course because of her presence and direction, is able to stay her course, make her way around the bot, and arrive safely at her class on time.
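
The scenario implies a simple signalling protocol: announce the intended passing side with a lean, and if the person moves into that same side, switch sides and announce again. The sketch below is a purely hypothetical illustration of that loop, not an existing system; the class, the side encoding (a shared corridor frame rather than egocentric left/right, so a predicted collision is simply both parties heading for the same side), and the perception tick are all our own inventions.

```python
from enum import Enum, auto

class Side(Enum):
    """Sides of the corridor in a shared frame, not egocentric left/right."""
    LEFT = auto()
    RIGHT = auto()

def opposite(side: Side) -> Side:
    return Side.RIGHT if side is Side.LEFT else Side.LEFT

class AvoidanceSignaller:
    """Lean toward the intended passing side; switch if the person mirrors us."""

    def __init__(self, preferred: Side = Side.LEFT):
        self.intended = preferred
        self.lean(self.intended)  # announce intent before changing course

    def lean(self, side: Side) -> None:
        # Placeholder for a real actuator command, e.g. tilting the chassis.
        print(f"bot leans {side.name.lower()}")

    def update(self, person_side: Side) -> Side:
        """Called on each perception tick with the person's current side."""
        if person_side is self.intended:
            # Collision predicted: the person moved into our chosen lane,
            # so switch sides and signal the new intent with a quick lean.
            self.intended = opposite(self.intended)
            self.lean(self.intended)
        return self.intended

# The corridor encounter from Scenario 1:
bot = AvoidanceSignaller(preferred=Side.LEFT)  # bot leans left
bot.update(person_side=Side.LEFT)              # Maggie steps into the same side -> bot leans right
bot.update(person_side=Side.LEFT)              # Maggie holds her course; bot keeps right
```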

> Solution >>