Computing Science

People commanding robots not far-fetched

April 06, 2011

Simon Fraser University researchers are paving the way for humans to direct robots without pushing or prodding them. SFU computing science student Brian Milligan recently won an award for his video demonstrating an uninstrumented person selecting and commanding a group of robots by looking at them and using hand gestures.

The small robots each carry a laptop with a webcam that can recognize people. Milligan’s system lets a person select a group of robots by making a circle gesture with their hands, then send them to a specific area by pointing at it. There are several practical applications for this technology, explains the master’s student, who is currently doing an internship with BigPark, part of Microsoft Game Studios.
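For the technically curious, the Python sketch below models that select-then-command loop. It is a minimal illustration under stated assumptions, not Milligan’s actual code: the Robot class and the detect_circle_gesture and detect_pointing_target helpers are hypothetical stand-ins that read pre-labelled frames in place of real computer vision.

```python
# Hypothetical sketch of the select-then-command interaction loop.
# Real gesture recognition would come from computer vision; here the
# detectors simply read pre-labelled "frames" so the flow can run end to end.

from dataclasses import dataclass

@dataclass
class Robot:
    name: str
    x: float
    y: float

    def go_to(self, x: float, y: float) -> None:
        print(f"{self.name}: driving to ({x:.1f}, {y:.1f})")

def detect_circle_gesture(frame):
    return frame.get("circle")        # (cx, cy, radius) or None

def detect_pointing_target(frame):
    return frame.get("point")         # (x, y) floor location or None

def robots_inside(circle, robots):
    """Return the robots whose positions fall inside the circled region."""
    cx, cy, r = circle
    return [b for b in robots if (b.x - cx) ** 2 + (b.y - cy) ** 2 <= r ** 2]

def run(frames, robots):
    selected = []
    for frame in frames:
        circle = detect_circle_gesture(frame)
        if circle:                            # selection phase
            selected = robots_inside(circle, robots)
        target = detect_pointing_target(frame)
        if target and selected:               # command phase
            for robot in selected:
                robot.go_to(*target)
            selected = []

robots = [Robot("r1", 0.0, 0.0), Robot("r2", 0.5, 0.2), Robot("r3", 4.0, 4.0)]
run([{"circle": (0.0, 0.0, 1.0)},             # circle encloses r1 and r2
     {"point": (3.0, 1.5)}],                  # point at a floor location
    robots)
# r1: driving to (3.0, 1.5)
# r2: driving to (3.0, 1.5)
```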

“Robots might use a system like this to be sent out to clean areas, deliver supplies, perform search and rescue, or for something more fun like a live version of the game StarCraft,” he says. “This technology might also be used in an intelligent home, where you might select areas to be activated and controlled. Computer vision and robots are becoming increasingly powerful technologies. One of the key challenges now is to make the technology easily usable and useful for people.”

The best-video prize was awarded by the Association for Computing Machinery (ACM) and the Institute of Electrical and Electronics Engineers (IEEE) at the Human-Robot Interaction Conference, held in Switzerland last month.

This is the first demonstration of its kind, says SFU computing science associate professor Richard Vaughan, who advised Milligan along with associate professor Greg Mori. The project extends work begun last year by former SFU student Alex Couture-Beil, in which a single robot was selected and commanded through face contact and gestures.

“We wanted to extend the previous work to be able to select multiple robots at once and command them using a ‘deictic’ reference, meaning the robots go to a location the user points to rather than to a location mapped to an arbitrary gesture,” says Milligan.
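A common way to resolve such a deictic reference geometrically (a sketch of the general idea, not necessarily the method used in this project) is to cast a ray from the user’s shoulder through their hand and intersect it with the ground plane:

```python
import numpy as np

def pointing_target(shoulder: np.ndarray, hand: np.ndarray):
    """Intersect the shoulder->hand ray with the ground plane z = 0.

    Returns the (x, y) floor coordinates the user is pointing at,
    or None if the arm points at or above the horizon.
    """
    direction = hand - shoulder
    if direction[2] >= 0:               # not pointing downward
        return None
    t = -shoulder[2] / direction[2]     # ray parameter where z reaches 0
    hit = shoulder + t * direction
    return hit[:2]

# Example: shoulder 1.4 m above the floor, hand slightly lower and forward.
print(pointing_target(np.array([0.0, 0.0, 1.4]),
                      np.array([0.4, 0.1, 1.2])))   # -> [2.8 0.7]
```

The shoulder and hand positions here are assumed inputs from a vision system; any real pipeline would estimate them from camera images before applying this geometry.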

These results are an early outcome of a new collaboration between SFU’s Vision and Media Lab and the Autonomy Lab. The two labs are using computer vision to analyze human actions to enable robots and humans to work together effectively.

Just as Milligan built on Couture-Beil’s work, future SFU computing science students will pick up where he left off and move the project forward.

-- 30 --

Contact:
Brian Milligan, SFU computing science, 778.321.4353, brian_milligan@sfu.ca
Greg Mori, SFU computing science, 778.782.7111, mori@cs.sfu.ca
Richard Vaughan, SFU computing science, 778.782.5811, vaughan@cs.sfu.ca
Dixon Tam, SFU PAMR, 778.782.8742, dixont@sfu.ca

(Note: Brian Milligan is a Burnaby resident.)

Story credit/SFU Public Affairs and Media Relations