Beverly Hannah | Research | Simon Fraser University

The stuff I work on in the Language and Brain Lab @ SFU

Conference Presentations & Publications

Tang, L., Hannah, B., Jongman, A., Sereno, J., Wang, Y., & Hamarneh, G. (2015). Examining visible articulatory features in clear and conversational speech. Paper accepted for poster presentation at ICPhS 2015, August 10-14, Glasgow, Scotland, UK.

Kawase, S., Hannah, B., & Wang, Y. (2014). The influence of visual speech information on the intelligibility of English consonants produced by non-native speakers. The Journal of the Acoustical Society of America, 136(3), 1352-1362.

Eng, K., Hannah, B., Leung, K., & Wang, Y. (2014). Effects of auditory, visual and gestural input on the perceptual learning of tones. Proceedings of the 7th International Conference on Speech Prosody, pp. 905-909.

Eng, K., Hannah, B., Leong, L., & Wang, Y. (2013). Gesturing pitch: Can co-speech hand gestures facilitate learning of non-native speech sounds? Proceedings of Meetings on Acoustics, pp. 1-4.

Eng, K., Hannah, B., Leong, L., & Wang, Y. (2013). Can co-speech hand gestures facilitate learning of non-native tones? (Poster presentation). 21st International Congress on Acoustics, 165th Meeting of the Acoustical Society of America, 52nd Meeting of the Canadian Acoustical Association.

Kawase, S., Hannah, B., & Wang, Y. (2012). Effects of visual speech information on native listener judgments of L2 speech intelligibility and accent (Oral presentation by Kawase). Pronunciation in Second Language Learning and Teaching Conference, Vancouver, Canada.

Research Projects

Can Co-speech Hand Gestures Facilitate the Learning of Non-native Speech Sounds?

Principal Investigators: Katelyn Eng, Lindsay Leong, Beverly Hannah, and Yue Wang

This is an ongoing project in the Language and Brain Lab, started in the summer of 2012. My main contributions have been to the experimental design, the recording procedures, the recording and editing of the audio-visual-gestural stimuli, the creation of the perception experiment, and data collection.

Results to be presented at the Speech Prosody 7 Conference, May 2014, Dublin. Preliminary results were presented at the 21st International Congress on Acoustics in Montreal on June 6, 2013, in session 4pSCb, Production and Perception I: Beyond the Speech Segment (Poster Session).

A manuscript is currently in preparation for publication.

Speech perception research has indicated that information from multiple input modalities (e.g., auditory, visual) facilitates second language (L2) speech learning. However, findings on co-speech gestural information have been mixed. While L2 learners may benefit from this additional channel of information, it may also be inhibitory if learners experience excessive cognitive load. This study examines the role of metaphoric hand gestures in L2 lexical tone learning using previously established laboratory training procedures. Training stimuli include Mandarin tones produced by native Mandarin speakers, with concurrent hand gestures mimicking pitch contours in space. Native Canadian English speakers are trained to perceive tones presented in one of three modalities: audio-visual (AV, speaker voice and face), audio-gesture (AG, speaker voice and hand gestures), and audio-visual-gesture (AVG). The effects of training are assessed by comparing pre-training and post-training tone identification results. Greater improvement for the AVG group than for the AV group would indicate a facilitative role of gestures, whereas greater improvement for the AG or AV group than for the AVG group would support the cognitive overload account. Findings are discussed in terms of how sensory-motor and cognitive domains cooperate functionally in speech perception and learning. [Equal contributions by KE, BH, and YW; work supported by SSHRC]
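To make the pre-/post-training comparison concrete, here is a minimal sketch of how per-group improvement scores could be tallied. The group labels follow the abstract (AV, AG, AVG), but the accuracy values, variable names, and decision logic are hypothetical placeholders for illustration only, not the study's actual data or analysis.

```python
# Illustrative sketch: hypothetical pre-/post-training tone identification
# accuracies (proportion correct) for the three training groups described above.
# The numbers are invented placeholders, not study results.
scores = {
    "AV":  {"pre": 0.42, "post": 0.61},
    "AG":  {"pre": 0.40, "post": 0.57},
    "AVG": {"pre": 0.41, "post": 0.68},
}

# Improvement = post-training accuracy minus pre-training accuracy.
improvement = {group: s["post"] - s["pre"] for group, s in scores.items()}

for group, gain in improvement.items():
    print(f"{group}: improvement = {gain:.2f}")

# Reading the pattern as the abstract does:
# a larger AVG gain than AV gain is consistent with gestures facilitating learning;
# a larger AG or AV gain than AVG gain is consistent with the cognitive overload account.
if improvement["AVG"] > improvement["AV"]:
    print("Pattern consistent with a facilitative role of gestures.")
else:
    print("Pattern consistent with the cognitive overload account.")
```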

Articulatory Features in Clear and Conversational Speech

Principal Investigators: Yue Wang (SFU LABlab), Allard Jongman (KUPPL), Joan Sereno (KUPPL), Ghassan Hamarneh (SFU MIAL)

This ongoing project started in the summer of 2013 as a collaboration between the SFU LABlab, the SFU Medical Image Analysis Lab, and the University of Kansas Phonetics and Psycholinguistics Laboratory. We are tracking differences in the motion of visible articulators during conversational versus clear speech. I am leading the LABlab group during the Fall 2013 semester in refining the audio-visual recording procedures and collecting audio-visual data from native English and native Mandarin speakers.

A manuscript is currently in preparation for publication.

The Effects of Visual Information on Perceiving Accented Speech

Principal Investigators: Saya Kawase and Yue Wang

This study, Saya Kawase's MA thesis project, concluded in Summer 2012. I assisted with developing the audio-visual recording procedures, recording and editing stimuli, collecting perception experiment data, preparing data for statistical analysis, and editing and proofreading presentations and manuscripts.

A portion of this study was presented at the Pronunciation in Second Language Learning and Teaching Conference, August 25, 2012, Vancouver, BC, as Kawase, Hannah & Wang (2012): Effects of visual speech information on native listener judgments of L2 speech intelligibility.

This study examines how visual phonetic information in non-native speech productions affects native listeners’ perception of foreign accent. Native English listeners judge stimuli spoken by non-native Japanese speakers in an accent rating task. The Japanese speakers are also matched with a group of native English-speaking controls. Given that native listeners perceive L2 production errors both visually and auditorily, audiovisual stimuli are expected to be perceived as having a stronger foreign accent, especially when the errors are visually salient.
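As a rough illustration of the expected pattern, the sketch below compares mean accent ratings for the same productions presented audio-only versus audiovisually. The rating scale, values, and variable names are invented placeholders, not the study's materials or results.

```python
# Illustrative sketch: hypothetical accent ratings (1 = native-like,
# 9 = very strong foreign accent) for non-native productions presented
# audio-only versus audiovisually. All values are invented placeholders.
from statistics import mean

ratings = {
    "audio_only":  [4, 5, 4, 6, 5],
    "audiovisual": [6, 6, 7, 5, 6],
}

for modality, values in ratings.items():
    print(f"{modality}: mean accent rating = {mean(values):.2f}")

# The expectation described above is that the audiovisual mean exceeds the
# audio-only mean, because visually salient L2 production errors add to the
# perceived foreign accent.
```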

The Processing and Learning of Pitch in Speech

Principal Investigator: Yue Wang

The Pitch-EEG portion of this study concluded in 2011. I assisted with collecting EEG data from native English- and native Mandarin-speaking participants using the Electrical Geodesics Inc. 128-channel HydroCel Geodesic Sensor Net system.