
Phil Monahan / The brain uses phonological features to parse the present and predict the future

Close photo of the snout and eyes of a fluffy white terrier, with a man's face behind.


Linguistics · Friday, April 26, 2024, 3:00 pm – 4:30 pm · Edward St. John Learning and Teaching Center

On Friday, April 26, our colloquium series welcomes another distinguished alum, Philip J. Monahan *09, who will discuss how "The brain uses phonological features to parse the present and predict the future." Phil is Associate Professor of Linguistics at the University of Toronto, aka Terpronto, home also to Ewan Dunbar *13, Dave Kush *13, and recent postdoc Shohini Bhattasali. While at Maryland, in the same graduating class as Ellen, Phil was advised by both Bill and David Poeppel. His dissertation was "On The Way To Linguistic Representation: Neuromagnetic Evidence of Early Auditory Abstraction in the Perception of Speech and Pitch." The abstract for his talk follows.


The nature of speech sound representations remains intensely debated. While generative theories have long postulated abstract phonological features, their psycholinguistic support is sparse and equivocal. In this talk, I present three experiments that aim to understand the neurophysiological representation of phonological categories and classes. First, using magnetoencephalography (MEG) to investigate English mid-vowels, I report asymmetric mismatch negativity (MMN) responses consistent with an underspecified featural account of their place of articulation. Evidence from the time-frequency domain indicates that this featural knowledge helps predict upcoming stimuli. In the second experiment, I present the results of an electroencephalography (EEG) oddball MMN study that employs inter-category variation in the standards; the results suggest that Mandarin Chinese listeners' brains group rhotic consonants as a coherent class, consistent with an account that posits the feature [retroflex]. In the third experiment, using EEG to test English obstruent voicing and inter-category variation in the standards, I present MMN results suggesting that the brain disjunctively codes temporal and spectral phonetic cues into a single abstract phonological category (i.e., [spread glottis]); again, the time-frequency domain reveals that featural knowledge facilitates predicting upcoming stimuli. Taken together, these results suggest that abstract phonological features are not only supported by the brain but also help predict the incoming signal.

 
