Please see the participate page for sign-up details for active studies if you or your child are interested in being part of our research.
Learning Complex Phonology
This series of studies examines the learning of complex sound patterns by adults and children. Specifically, we ask whether and how individual phonological patterns can be combined, and how that combination affects listeners’ ability to access lexical information.
- Artificial Language Learning (LexPo 21)
- LexPo 21 stands for Learning Complex Phonology, and the study was launched in 2021. This experiment uses an artificial language learning paradigm to determine how adult learners combine phonological patterns. Listeners learn words from one of several made-up languages, each of which contains a set of properties of experimental interest (e.g., two independent patterns that do combine, or two independent patterns that combine in a variety of possible ways). Listeners are first trained on these made-up words and then tested to determine which patterns they learned and how they combined those patterns.
- Child Acquisition Experiment
- This experiment uses a perceptual task to track children’s acquisition of complex phonological processes. Canadian English contains a famous example of interacting phonological processes in words like writing and riding (where both the vowel and the /t/ or /d/ are altered by the language’s phonology). Children between the ages of 3 and 9 (and an adult control group) will perform a perceptual task in which they are asked to match base forms with corresponding derived forms through yes/no questions. These results will show us how children understand phonological patterns and how they generalize them to new words.
- Eye-tracking Experiment
- Due to COVID-19 complications, this portion of the study is on hold until further notice. This experiment uses eye-tracking and artificial language learning to determine how adult listeners recognize words in which multiple patterns have been combined in various ways. Participants are first trained on a number of words in a made-up language and then tested on those words. During testing, we record their eye movements, which can be analyzed to determine which interpretation of each word a participant was considering over time.
Online Investigations of Word Structure
The Online Investigations of Word Structure series involves two online studies: Phonology of Non-Words (PhoN) and Name a Picture. PhoN has finished running participants and is currently undergoing analysis.
- Phonology of Non-Words: PhoN
- In the PhoN study, we investigated questions such as, “What lexical representations do speakers come up with, given a nonsense word?” Specifically, we were interested in how participants perceived flaps (as either /t/ or /d/) in non-words.
- Name a Picture (NaP)
- The NaP study is the counterpart to PhoN. As the title suggests, NaP investigates questions such as, “What meanings do speakers associate with a given picture?” We hope to establish a strong database of pictures with NaP data.
Phonology of Real and Fictional Names (FaNG)
All speakers have intuitions about what makes a word sound like a word in their language, or like a word in any human language. This project examines the sound-based factors that lead speakers to intuit that a place name is "human" or "alien," with the intent of determining the factors that contribute to words sounding "non-human."
Spoken word recognition in L2 learners of English
Do basic principles of word recognition hold within a second language (L2)? How might these principles change with more exposure to the L2 and an increase in vocabulary size? This study uses eye-tracking technology to answer questions about L2 learning that can inform strategies for improving recognition in language learners. Currently on hold due to COVID-19.
How do second language learners of English use acoustic cues to assist in their comprehension of spoken English? By comparing findings from native speakers of English, Mandarin, and French using eye-tracking, we will be able to measure the usefulness of these cues in real time.
When listeners hear spoken language, how do they determine what they are hearing? The ability to process language is rooted in how it is represented in our minds. A series of experiments using eye-tracking technology allows us to consider this question. By determining which competitors are considered when hearing words, we can provide insight into how language is represented in the mind. This project will also provide a point of comparison for examining how these representations develop from childhood to adulthood, when language is fully acquired.
Learning to Listen Flexibly
Once children have acquired the basics of their first language and its sound system, how do they learn to be forgiving of slightly deviant pronunciations, e.g., the ones they hear when listening to an adult with an accent? This project compares how children ages 5-8 perceive clusters of consonants in English, and how flexible they are in their perception. We are interested in children whose only language is English, as well as those who are learning English as a second language.
This study was conducted in collaboration with Dr. Anne-Michelle Tessier and Dr. Claire Moore-Cantwell.
The Phonological Processing Lab ran this experiment with children between the ages of 5 and 10. This experiment used picture naming and eye-tracking technology to answer questions about how children represent words in their minds. We were interested in whether a word's degree of similarity to other words would affect how children recognized words like "sun", "run", and "shoe."
HandShark is an online study that investigates the markedness of handshapes in ASL. The researchers are interested in what makes visual languages harder or easier to learn. This study is currently being hosted online.