Computing science professor Angelica Lim (left) and PhD student Paige Tuttosi (right) are collaborating with French researchers to use robots to help people learn a second language, with funding from the France Canada Research Fund.

How SFU Researchers are Using Artificial Intelligence to Help You Learn a Second Language

June 02, 2022

By: Andrew Ringer

Learning a second language may soon get easier, thanks to SFU computing science professor Angelica Lim and her collaborators. By combining artificial intelligence (AI) and electroencephalogram (EEG) technology, the researchers are working to make robot voices more engaging in order to improve their ability to teach humans a second language.

With funding from the France Canada Research Fund (FCRF), Lim and SFU PhD student Paige Tuttosi are collaborating with researcher Jean-Julien Aucouturier from the FEMTO-ST Institute to investigate how English speakers can more effectively learn French, and vice versa. In particular, the researchers are studying how they can replicate the engaging characteristics of a human voice in these two languages and apply them to robot teachers’ speaking voices. As part of the collaboration, Tuttosi, who played an influential role in the proposal, will travel to France to work with Aucouturier for six months.

“There’s currently plenty of work focusing on computer vision and robotics, but there’s still much more to explore in terms of audio and robotics,” says Lim.

The first step in this research is to understand how people change their voice when speaking in different contexts. To do this, Lim and her team created a website where people can submit recordings of how they would speak in different scenarios (talking to a child compared to talking to an adult, for example). Visitors can also select which audio clips of teachers they find most engaging, and the researchers will cross-reference this data to determine which speaking techniques are most effective for teaching.

Now collaborating with Aucouturier, an expert in EEG and psychoacoustics, Lim and her team will take this research one step further. Using EEG technology, the researchers will track electrical activity in the brain while people listen to teachers, to find out exactly what it is about the human voice, and how it is used, that people find engaging: for example, when it is effective to pause while speaking, or which words should be stressed.

“This will allow us to know not just what people think is most engaging in a teacher’s voice, but what actually is most engaging,” says Lim.

By building a multilingual model of an engaging speaking voice, the researchers hope to develop an artificial voice model optimized to help people learn English and French. Their research can also be applied to understand and explain the characteristics of an engaging teaching voice, which can help teachers improve their speech, improve the retention of public health messages, and more.

Eventually, Lim hopes to enable robots to communicate with humans more effectively. While the goal isn’t necessarily to replicate the human voice, the researchers want to apply the appealing qualities of human voices to improve robot speech.

“Robots should sound and look like robots, but the manner that they speak to you should make you feel comfortable,” says Lim.