
Naomi wins NSF Grant

October 03, 2021 Linguistics

Professor Naomi Feldman, seated in three-quarter profile, looking leftwards towards an unseen partner in conversation, and laughing with focused eyes

Support for computational modeling of phonetic learning.

Congratulations to Naomi Feldman, who has won NSF support for her project "Computational Models of Plasticity and Learning in Speech Perception." Naomi's project uses probabilistic cue weighting models, as well as reinforcement learning models, to investigate how we adapt our speech perception, both in infancy and in adulthood, to particular languages and environments. The full abstract is below.

When it comes to speech perception, listeners are lifelong learners. Although infants’ perception becomes tuned to their native language in their first year of life, their speech sound categories continue to change well into childhood and adolescence. Adults also continue to show substantial capacity for perceptual learning, particularly in settings that involve feedback or rewards. This project uses computational modeling to investigate the learning mechanisms that allow listeners to adapt their speech perception to particular languages and environments. By building theories of auditory perceptual learning, the project will contribute to our understanding of the difficulties that adults face when learning another language. It could also provide a framework for understanding the difficulties faced by certain populations, such as children with cochlear implants, when learning their first language and may facilitate future development of treatments or interventions for these populations.

Two types of computational models are developed based on adult perceptual learning data: probabilistic cue weighting models, which are designed to capture fast, trial-by-trial changes in listeners’ reliance on different parts of the speech signal, and reinforcement learning models, which are designed to capture longer-term, implicit perceptual learning of speech sounds that occurs in response to a reward, such as points in a video game. The models are tested on their ability to capture adults’ perceptual learning behavior in experimental settings. A second series of simulations then explores whether and how these adult-derived models can predict aspects of children’s perceptual learning of speech sound categories, both in laboratory discrimination tasks that involve rewards, such as exciting toys, and in naturalistic settings where the speech is more complex and the reward structure is less obvious. Results from the project are expected to provide insight into what types of speech representations children and adults have at different stages of development, as well as which perceptual learning strategies learners rely on at different ages and in different learning environments.
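To give a flavor of the reinforcement learning approach described above, here is a minimal, hypothetical sketch (not the project's actual models): an agent learns two speech sound categories along a single acoustic cue dimension (e.g., voice onset time), nudging a category prototype toward the observed cue only when its response is rewarded. All names, parameter values, and distributions here are illustrative assumptions.

```python
import random

def train(n_trials=2000, lr=0.1, seed=0):
    """Reward-driven learning of two category prototypes on one cue dimension.

    Illustrative sketch: the 'true' cue distributions and learning rule are
    assumptions for demonstration, not the models from the funded project.
    """
    rng = random.Random(seed)
    prototypes = [0.4, 0.6]   # initial guesses for the two category centers
    true_means = [0.2, 0.8]   # assumed cue means for sound categories A and B

    for _ in range(n_trials):
        label = rng.randint(0, 1)                   # which sound was produced
        cue = rng.gauss(true_means[label], 0.05)    # noisy acoustic cue
        # Respond with the category whose prototype is closest to the cue
        choice = min((0, 1), key=lambda c: abs(cue - prototypes[c]))
        reward = choice == label                    # feedback, e.g., game points
        if reward:
            # Move the rewarded prototype a small step toward the observed cue
            prototypes[choice] += lr * (cue - prototypes[choice])
    return prototypes

prototypes = train()
```

After training, the prototypes drift from their initial values toward the underlying category means, illustrating how implicit, reward-driven updates alone can retune perceptual categories without explicit instruction.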