
Phonology

Phonetics and phonology are the sciences that study human speech sounds and sound patterns. At Maryland we take this to mean building mental models for speech sounds and sound patterns.

Phonetics is traditionally divided into three areas: articulatory phonetics (how speech sounds are produced by the tongue and mouth), acoustic phonetics (the physical properties of the resulting sound waves), and auditory phonetics (how speech sounds are processed and perceived by the ear and brain). Our main research emphasis is on speech perception, which connects with the strong auditory neuroscience community at the Maryland Center for the Comparative and Evolutionary Biology of Hearing. Phonology complements these areas by also studying the representation of sounds and words in long-term memory.

Phonological studies at Maryland have strong connections to the other areas of the department. We study how people (and machines) can perceive speech and recognize words, using computational, psycholinguistic and neurolinguistic methods. We are fortunate to have our own magnetoencephalography (MEG) system, which allows us to record ongoing brain activity as we try to discover the code that the brain uses to represent speech. Interestingly, the brain seems to use both location in the auditory cortex and timing patterns to represent various properties of speech sounds. We also study how children (and machines) can learn the sounds and sound patterns of their native languages. Many of these studies are done in conjunction with the Infant Language Lab, where young children (some as young as two months) match spoken words with faces or detect changes in sound patterns. We also employ computational models of sound pattern learning and word learning, especially nonparametric Bayesian methods that discover speech sound categories such as vowel systems.
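To make that last point concrete, here is a minimal sketch of one standard nonparametric Bayesian approach: a Dirichlet-process Gaussian mixture over vowel tokens, fit by collapsed Gibbs sampling. The formant values, the fixed within-category variance, and the prior settings below are all illustrative assumptions, not any lab's actual model.

```python
# A minimal sketch of nonparametric Bayesian vowel discovery: a Dirichlet-
# process Gaussian mixture over synthetic F1/F2 tokens, fit by collapsed
# Gibbs sampling. The vowel means, the fixed within-category variance, and
# the prior settings are all illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical category means (F1, F2 in Hz) for three vowels.
true_means = np.array([[300.0, 2300.0],   # roughly /i/
                       [700.0, 1200.0],   # roughly /a/
                       [300.0,  800.0]])  # roughly /u/
X = np.vstack([m + rng.normal(0.0, 90.0, size=(60, 2)) for m in true_means])

ALPHA = 1.0           # CRP concentration: higher favors more categories
SIGMA2 = 90.0 ** 2    # fixed within-category variance (per dimension)
MU0 = np.array([500.0, 1500.0])   # prior mean over category means
TAU2 = 1000.0 ** 2                # prior variance over category means

def log_predictive(x, members):
    """Log posterior-predictive density of token x for a category whose
    current members are the given rows (an empty array = new category)."""
    n = len(members)
    prec = 1.0 / TAU2 + n / SIGMA2
    post_mean = (MU0 / TAU2 + members.sum(axis=0) / SIGMA2) / prec
    var = SIGMA2 + 1.0 / prec
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - post_mean) ** 2 / var)

z = np.zeros(len(X), dtype=int)           # start with one big category
for sweep in range(50):                   # collapsed Gibbs sweeps
    for i in range(len(X)):
        z[i] = -1                         # remove token i from its category
        labels = [k for k in np.unique(z) if k >= 0]
        logp = [np.log(len(X[z == k])) + log_predictive(X[i], X[z == k])
                for k in labels]          # join an existing category...
        logp.append(np.log(ALPHA) + log_predictive(X[i], X[:0]))  # ...or a new one
        p = np.exp(np.array(logp) - max(logp))
        k = rng.choice(len(p), p=p / p.sum())
        z[i] = labels[k] if k < len(labels) else max(labels, default=-1) + 1

print("categories found:", len(np.unique(z)))  # typically three for this data
```

The appeal of the nonparametric prior is that the number of categories is not fixed in advance: the sampler can open a new category whenever the data warrant it, which is what suits these methods to discovering vowel systems of unknown size.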
 
We make a strong attempt at Maryland to "close the circle" on phonological problems. That is, we seek models of speech sounds and patterns that can be rigorously formulated computationally and that make specific predictions which can be tested with psycholinguistic and neurolinguistic methods. Researchers are encouraged to work together on these problems, from problem statement to algorithm to brain implementation.
 


A Comprehensive Three-dimensional Cortical Map of Vowel Space

Postdoc Mathias Scharinger and collaborators use the magnetic N1 (M100) to map the entire vowel space of Turkish onto cortical locations. They find two distinct tonotopic maps, one for front vowels and one for back vowels.


Contributor(s): William Idsardi
Mammalian cortex is known to contain various kinds of spatial encoding schemes for sensory information including retinotopic, somatosensory, and tonotopic maps. Tonotopic maps are especially interesting for human speech sound processing because they encode linguistically salient acoustic properties. In this study, we mapped the entire vowel space of a language (Turkish) onto cortical locations by using the magnetic N1 (M100), an auditory-evoked component that peaks approximately 100 msec after auditory stimulus onset. We found that dipole locations could be structured into two distinct maps, one for vowels produced with the tongue positioned toward the front of the mouth (front vowels) and one for vowels produced in the back of the mouth (back vowels). Furthermore, we found spatial gradients in lateral–medial, anterior–posterior, and inferior–superior dimensions that encoded the phonetic, categorical distinctions between all the vowels of Turkish. Statistical model comparisons of the dipole locations suggest that the spatial encoding scheme is not entirely based on acoustic bottom-up information but crucially involves featural-phonetic top-down modulation. Thus, multiple areas of excitation along the unidimensional basilar membrane are mapped into higher dimensional representations in auditory cortex.
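The model-comparison logic in that last step can be sketched in a few lines. The toy below uses entirely synthetic data (hypothetical dipole coordinates, formant values, and [back]/[high] features): it fits one least-squares regression predicting a dipole coordinate from formants alone and another that adds discrete features, then compares them by BIC. It illustrates the form of the argument, not the study's actual analysis.

```python
# A toy illustration (all data synthetic) of the model-comparison logic:
# does adding discrete phonological features improve prediction of an M100
# dipole coordinate over purely acoustic (formant) predictors? The feature
# definitions, effect sizes, and noise level below are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n = 200

F1 = rng.uniform(250, 800, n)            # acoustic predictors (Hz)
F2 = rng.uniform(600, 2500, n)
back = (F2 < 1400).astype(float)         # hypothetical [back] feature
high = (F1 < 450).astype(float)          # hypothetical [high] feature

# Simulated medial-lateral dipole coordinate (mm): acoustic gradients plus
# a categorical featural offset standing in for top-down modulation.
y = 0.004 * F2 - 0.003 * F1 + 2.0 * back + 1.0 * high + rng.normal(0, 0.8, n)

def bic(y, predictors):
    """BIC of an ordinary-least-squares fit with Gaussian errors."""
    X = np.column_stack([np.ones(len(y))] + predictors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / len(y)
    loglik = -0.5 * len(y) * (np.log(2 * np.pi * sigma2) + 1)
    return -2 * loglik + X.shape[1] * np.log(len(y))

acoustic_only = bic(y, [F1, F2])
with_features = bic(y, [F1, F2, back, high])
print(f"BIC, acoustic only:       {acoustic_only:.1f}")
print(f"BIC, acoustic + features: {with_features:.1f}  (lower is better)")
```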

A role for the developing lexicon in phonetic category acquisition

Bayesian models and artificial language learning tasks show that infant acquisition of phonetic categories can be helpfully constrained by feedback from word segmentation.


Contributor(s): Naomi Feldman
Infants segment words from fluent speech during the same period when they are learning phonetic categories, yet accounts of phonetic category acquisition typically ignore information about the words in which sounds appear. We use a Bayesian model to illustrate how feedback from segmented words might constrain phonetic category learning by providing information about which sounds occur together in words. Simulations demonstrate that word-level information can successfully disambiguate overlapping English vowel categories. Learning patterns in the model are shown to parallel human behavior from artificial language learning tasks. These findings point to a central role for the developing lexicon in phonetic category acquisition and provide a framework for incorporating top-down constraints into models of category learning.
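Here is a toy version of the core intuition, assuming 1-D synthetic data in place of the paper's full lexical-distributional model: when two vowel categories overlap heavily, their pooled distribution is unimodal, so distributional evidence alone gives the learner little reason to split it; knowing which segmented word each token occurred in supplies the missing split.

```python
# A toy sketch (1-D synthetic data) of lexical feedback on category learning.
# Assumed setup: two vowel categories whose F1 values overlap heavily, each
# occurring in its own hypothetical word frame ("bVt" vs. "dVg"), which is
# the information word segmentation would provide.
import numpy as np

rng = np.random.default_rng(2)
n = 400

mu_a, mu_b, sd = 500.0, 540.0, 60.0        # means 40 Hz apart, sd 60 Hz
vowel = rng.integers(0, 2, n)              # hidden category label
f1 = np.where(vowel == 0, rng.normal(mu_a, sd, n), rng.normal(mu_b, sd, n))
word = np.where(vowel == 0, "bVt", "dVg")  # word frame for each token

def split_bic(x, groups):
    """BIC of modeling x with one Gaussian per (hard-assigned) group."""
    loglik, n_params = 0.0, 0
    for mask in groups:
        xs = x[mask]
        m, v = xs.mean(), xs.var()
        loglik += -0.5 * np.sum(np.log(2 * np.pi * v) + (xs - m) ** 2 / v)
        n_params += 2                      # a mean and a variance per group
    return -2 * loglik + n_params * np.log(len(x))

# Because the category means differ by less than two standard deviations,
# the pooled F1 distribution is unimodal: acoustics alone offer little
# reason to posit two categories. The word labels supply the split.
one_category = split_bic(f1, [np.ones(n, dtype=bool)])
word_conditioned = split_bic(f1, [word == "bVt", word == "dVg"])

print(f"one-category BIC:     {one_category:.1f}")
print(f"word-conditioned BIC: {word_conditioned:.1f}  (lower is better)")
```

The word-conditioned model should win the BIC comparison decisively here, even though nothing in the raw acoustics points to two categories; that is the sense in which the developing lexicon disambiguates overlapping categories.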

A single stage approach to learning phonological categories: Insights from Inuktitut

Much research presumes that we acquire phonetic categories before abstracting phonological categories. Ewan Dunbar argues that this two-step progression is unnecessary, with a Bayesian model for the acquisition of Inuktitut vowels.


Contributor(s): William Idsardi
We argue that there is an implicit view in psycholinguistics that phonological acquisition is a 'two-stage' process: phonetic categories are first acquired, and then subsequently mapped onto abstract phoneme categories. We present simulations that suggest two problems with this view: first, the learner might mistake the phoneme-level categories for phonetic-level categories and thus be unable to learn the relationships between phonetic-level categories; on the other hand, the learner might construct inaccurate phonetic-level representations that prevent it from finding regular relations among them. We suggest an alternative conception of the phonological acquisition problem that sidesteps this apparent inevitability, and present a Bayesian model that acquires phonemic categories in a single stage. Using acoustic data from Inuktitut, we show that this model reliably converges on a set of phoneme-level categories and phonetic-level relations among subcategories, without making use of a lexicon.
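The single-stage idea can be caricatured as joint inference over phonemes and their allophonic relation, rather than a phonetic clustering step followed by a pairing step. The sketch below uses made-up 1-D "F2" values for a three-vowel /i a u/ inventory with a single shared retraction shift before uvulars, fit by EM; the paper's actual model is Bayesian and is fit to real Inuktitut acoustics.

```python
# A caricature of single-stage learning (synthetic 1-D data; the paper's
# model is a fuller Bayesian one fit to Inuktitut recordings). Three phoneme
# categories /i a u/ are fit jointly with one shared allophonic shift that
# applies in pre-uvular contexts, instead of fitting six surface categories
# first and pairing them afterwards. All numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(3)

true_mu = np.array([2200.0, 1400.0, 900.0])   # "F2" for /i/, /a/, /u/
true_delta, sd, n = -250.0, 80.0, 600         # shared pre-uvular retraction

phoneme = rng.integers(0, 3, n)               # hidden phoneme labels
uvular = rng.integers(0, 2, n).astype(float)  # 1 = pre-uvular context
x = true_mu[phoneme] + true_delta * uvular + rng.normal(0, sd, n)

# EM for a 3-component mixture whose component means shift by a single
# shared delta in uvular contexts (variance held fixed for brevity).
mu = np.array([2000.0, 1500.0, 1000.0])       # rough initial guesses
delta, pi = 0.0, np.ones(3) / 3

for _ in range(100):
    # E-step: responsibilities under context-appropriate means.
    means = mu[None, :] + delta * uvular[:, None]              # n x 3
    logr = np.log(pi) - 0.5 * ((x[:, None] - means) / sd) ** 2
    r = np.exp(logr - logr.max(axis=1, keepdims=True))
    r /= r.sum(axis=1, keepdims=True)
    # M-step: update phoneme means, the shared shift, then mixing weights.
    mu = (r * (x[:, None] - delta * uvular[:, None])).sum(axis=0) / r.sum(axis=0)
    delta = (r * uvular[:, None] * (x[:, None] - mu[None, :])).sum() / uvular.sum()
    pi = r.mean(axis=0)

print("phoneme means:", np.round(mu))      # should approach 2200, 1400, 900
print("allophonic shift:", round(delta))   # should approach -250
```

Because the allophonic shift is estimated jointly with the category means, the learner never has to decide whether a retracted token belongs to a separate surface category; the relation between subcategories falls out of the same inference, which is the point of the single-stage framing.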

Primary Faculty

Naomi Feldman

Associate Professor, Linguistics

1413 A Marie Mount Hall
College Park, MD 20742

(301) 405-5800

William Idsardi

Professor, Linguistics

1401 A Marie Mount Hall
College Park, MD 20742

(301) 405-8376