
Phonology

Phonetics and phonology are the sciences that study human speech sounds and sound patterns. At Maryland, we take this to mean building mental models of speech sounds and sound patterns.

Phonetics is traditionally divided into three areas: articulatory phonetics (how speech sounds are produced by the tongue and mouth), acoustic phonetics (the physical properties of the resulting sound waves), and auditory phonetics (how speech sounds are processed and perceived by the ear and brain). Our main research emphasis is on speech perception, which connects with the strong auditory neuroscience community at the Maryland Center for the Comparative and Evolutionary Biology of Hearing. Phonology adds to these areas by also studying the representation of sounds and words in long-term memory.

Phonological studies at Maryland have strong connections to the other areas of the department. We study how people (and machines) can perceive speech and recognize words, using computational, psycholinguistic, and neurolinguistic methods. We are fortunate to have our own magnetoencephalography (MEG) system, which allows us to record ongoing brain activity to try to discover the code that the brain uses to represent speech. Interestingly, the brain seems to use both location in the auditory cortex and timing patterns to represent various properties of speech sounds. We also study how children (and machines) can learn the sounds and sound patterns of their native languages. Many of these studies are done in conjunction with the Infant Language Lab; in these studies, young children (some as young as two months) match spoken words with faces or detect changes in sound patterns. We also employ computational models of sound pattern learning and word learning, especially non-parametric Bayesian methods to discover speech sound categories, such as vowel systems.
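As a concrete illustration of that last point, here is a minimal sketch of non-parametric Bayesian category discovery, assuming synthetic two-formant vowel data and a Dirichlet-process Gaussian mixture from scikit-learn; the data, feature choice, and settings are illustrative assumptions, not the lab's actual models.

```python
# Illustrative sketch: discovering vowel-like categories from formant data
# with a Dirichlet-process Gaussian mixture (a non-parametric Bayesian model).
# The synthetic data and settings below are assumptions for demonstration only.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)

# Synthetic (F1, F2) formant values, in Hz, for three hypothetical vowel categories.
means = np.array([[300, 2300],   # roughly /i/-like
                  [700, 1200],   # roughly /a/-like
                  [350, 800]])   # roughly /u/-like
tokens = np.vstack([rng.normal(m, [40, 120], size=(200, 2)) for m in means])

# A truncated Dirichlet-process mixture: the model decides how many of the
# 10 available components it actually needs to explain the data.
dpgmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    max_iter=500,
    random_state=0,
).fit(tokens)

# Components with non-negligible weight correspond to discovered categories.
used = dpgmm.weights_ > 0.01
print("categories discovered:", used.sum())
print("category means (F1, F2):", dpgmm.means_[used].round(0))
```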
 
We make a strong attempt at Maryland to "close the circle" on phonological problems. That is, we seek models of speech sounds and patterns that can be rigorously formulated computationally and that make specific predictions which can be tested with psycholinguistic and neurolinguistic methods. Researchers are encouraged to work together to solve these problems from problem statement to algorithm to brain implementation.
 

Primary Faculty

Naomi Feldman

Associate Professor, Linguistics

1413 A Marie Mount Hall
College Park, MD 20742

(301) 405-5800

William Idsardi

Professor, Linguistics

1401 A Marie Mount Hall
College Park, MD 20742

(301) 405-8376

On substance and Substance-Free Phonology: Where we are at and where we are going

On the abstractness of phonology.

Linguistics

Contributor(s): Alex Chabot

In this introduction [to this special issue of the journal, on substance-free phonology], I will briefly trace the development of features in phonological theory, with particular emphasis on their relationship to phonetic substance. I will show that substance-free phonology is, in some respects, the resurrection of a concept that was fundamental to early structuralist views of features as symbolic markers, whose phonological role eclipses any superficial correlates to articulatory or acoustic objects. In the process, I will highlight some of the principal questions that this epistemological tack raises, and how the articles in this volume contribute to our understanding of those questions.


Underspecification in time

Abstracting away from linear order in phonology.

Linguistics

Contributor(s): William Idsardi

Substance-free phonology or SFP (Reiss 2017) has renewed interest in the question of abstraction in phonology. Perhaps the most common form of abstraction through the absence of substance is underspecification, where some aspects of speech lack representation in memorized representations, within the phonology or in the phonetic implementation (Archangeli 1988, Keating 1988, Lahiri and Reetz 2010 among many others). The fundamental basis for phonology is argued to be a mental model of speech events in time, following Raimy (2000) and Papillon (2020). Each event can have properties (one-place predicates that are true of the event), which include the usual phonological features, and also structural entities for extended events like moras and syllables. Features can be bound together in an event, yielding segment-like properties. Pairs of events can be ordered in time by the temporal logic precedence relation represented by ‘<’. Events, features and precedence form a directed multigraph structure with edges in the graph interpreted as “maybe next”. Some infant bimodal speech perception results are examined using this framework, arguing for underspecification in time in the developing phonological representations.
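A minimal sketch of the event-and-precedence representation the abstract describes, under simplifying assumptions: events carry feature predicates, structural entities such as moras are events too, and directed precedence edges ('<') are read as "maybe next." The Python encoding and the toy example are illustrative, not the paper's formalism.

```python
# Illustrative sketch of a speech-event multigraph: events bear feature
# predicates, and directed precedence edges ("<") are read as "maybe next".
# The encoding and the toy example are assumptions for demonstration only.
from collections import defaultdict

class EventGraph:
    def __init__(self):
        self.features = defaultdict(set)   # event id -> set of feature predicates
        self.edges = []                    # list of (earlier, later) precedence pairs

    def add_event(self, eid, *feats):
        self.features[eid].update(feats)

    def precede(self, earlier, later):
        # A multigraph: the same ordered pair may be added more than once.
        self.edges.append((earlier, later))

    def maybe_next(self, eid):
        # All events that may immediately follow this one.
        return [b for a, b in self.edges if a == eid]

# A toy, partially specified representation of something like "pa":
g = EventGraph()
g.add_event("e1", "labial", "-voice")   # consonant-like event
g.add_event("e2", "+low", "+back")      # vowel-like event
g.add_event("mora1", "mora")            # structural (prosodic) event
g.precede("e1", "e2")
g.precede("e2", "mora1")

print(g.maybe_next("e1"))   # ['e2']
```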


Naturalistic speech supports distributional learning across contexts

Infants can learn which acoustic dimensions are contrastive by attending to phonetic context.

Linguistics

Contributor(s): Naomi Feldman
Non-ARHU Contributor(s): Kasia Hitczenko *19

At birth, infants discriminate most of the sounds of the world’s languages, but by age 1, infants become language-specific listeners. This has generally been taken as evidence that infants have learned which acoustic dimensions are contrastive, or useful for distinguishing among the sounds of their language(s), and have begun focusing primarily on those dimensions when perceiving speech. However, speech is highly variable, with different sounds overlapping substantially in their acoustics, and after decades of research, we still do not know what aspects of the speech signal allow infants to differentiate contrastive from noncontrastive dimensions. Here we show that infants could learn which acoustic dimensions of their language are contrastive, despite the high acoustic variability. Our account is based on the cross-linguistic fact that even sounds that overlap in their acoustics differ in the contexts they occur in. We predict that this should leave a signal that infants can pick up on and show that acoustic distributions indeed vary more by context along contrastive dimensions compared with noncontrastive dimensions. By establishing this difference, we provide a potential answer to how infants learn about sound contrasts, a question whose answer in natural learning environments has remained elusive.
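A minimal sketch of the kind of comparison described above, under simplifying assumptions: for each acoustic dimension, ask how much its distribution shifts across phonetic contexts; contrastive dimensions should shift more. The synthetic data and the between-context versus within-context variance statistic are illustrative choices, not the paper's actual analysis.

```python
# Illustrative sketch: along a contrastive dimension, context-conditioned
# distributions should differ more than along a non-contrastive dimension.
# Synthetic data and the summary statistic are assumptions for demonstration.
import numpy as np

rng = np.random.default_rng(1)
contexts = ["before_i", "before_a", "before_u"]

# Hypothetical measurements of two acoustic dimensions in each context.
# The "contrastive" dimension's mean moves with context; the other's does not.
contrastive_means = {"before_i": -1.0, "before_a": 0.0, "before_u": 1.0}
noncontrastive_means = {c: 0.0 for c in contexts}

def context_shift(mean_by_context, noise=1.0, n=500):
    """Variance of context means relative to within-context variance."""
    samples = {c: rng.normal(m, noise, n) for c, m in mean_by_context.items()}
    between = np.var([s.mean() for s in samples.values()])
    within = np.mean([s.var() for s in samples.values()])
    return between / within

print("contrastive dimension:    ", round(context_shift(contrastive_means), 3))
print("non-contrastive dimension:", round(context_shift(noncontrastive_means), 3))
```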
