Phonology
Phonetics and phonology are the sciences that study human speech sounds and sound patterns. At Maryland we take this to mean building mental models of speech sounds and sound patterns.
Phonetics is traditionally divided into three areas: articulatory phonetics (how speech sounds are produced by the tongue and mouth), acoustic phonetics (the physical properties of the resulting sound waves), and auditory phonetics (how speech sounds are processed and perceived by the ear and brain). Our main research emphasis is on speech perception, which connects with the strong auditory neuroscience community at the Maryland Center for the Comparative and Evolutionary Biology of Hearing. Phonology adds to these areas by also studying the representation of sounds and words in long-term memory.
Phonological studies at Maryland have strong connections to the other areas of the department. We study how people (and machines) perceive speech and recognize words using computational, psycholinguistic and neurolinguistic methods. We are fortunate to have our own magneto-encephalographic (MEG) system, which allows us to record ongoing brain activity to try to discover the code that the brain uses to represent speech. Interestingly, the brain seems to use both location in the auditory cortex and timing patterns to represent various properties of speech sounds. We also study how children (and machines) learn the sounds and sound patterns of their native languages. Many of these studies are done in conjunction with the Infant Language Lab. In these studies, young children (some as young as two months) match spoken words with faces or detect changes in sound patterns. We also employ computational models of sound pattern learning and word learning, especially non-parametric Bayesian methods that discover speech sound categories, such as vowel systems.
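The category-discovery idea mentioned above can be illustrated with a toy example. The sketch below is my own minimal construction, not the lab's actual models: a collapsed Gibbs sampler for a Chinese-restaurant-process (Dirichlet-process) mixture of one-dimensional Gaussians with known variance, applied to made-up "formant-like" values. The point is that the number of sound categories is not fixed in advance; it is inferred from the data.

```python
import math
import random

def crp_gibbs(data, alpha=1.0, sigma=0.5, mu0=0.0, tau=3.0, iters=200, seed=0):
    """Collapsed Gibbs sampling for a Dirichlet-process mixture of 1-D
    Gaussians (known variance sigma^2, Gaussian prior N(mu0, tau^2) on
    cluster means). Returns a cluster label for each data point."""
    rng = random.Random(seed)
    labels = [0] * len(data)  # start with every point in one cluster

    def predictive(x, members):
        # Posterior-predictive density of x given the points already in a
        # cluster (an empty member list means a brand-new cluster).
        n = len(members)
        s2, t2 = sigma ** 2, tau ** 2
        if n == 0:
            mean, var = mu0, t2 + s2
        else:
            post_var = 1.0 / (1.0 / t2 + n / s2)
            mean = post_var * (mu0 / t2 + sum(members) / s2)
            var = post_var + s2
        return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

    for _ in range(iters):
        for i, x in enumerate(data):
            labels[i] = None  # remove point i from its cluster
            clusters = sorted(set(l for l in labels if l is not None))
            weights, choices = [], []
            for c in clusters:
                members = [data[j] for j, l in enumerate(labels) if l == c]
                weights.append(len(members) * predictive(x, members))
                choices.append(c)
            # CRP prior: weight alpha for opening a new cluster
            weights.append(alpha * predictive(x, []))
            choices.append(max(clusters, default=-1) + 1)
            r, acc = rng.random() * sum(weights), 0.0
            for c, w in zip(choices, weights):
                acc += w
                if r <= acc:
                    labels[i] = c
                    break
    return labels
```

On well-separated toy data such as `[2.0, 2.1, 1.9, 2.2, 8.0, 8.1, 7.9, 8.2]`, the sampler reliably places the two groups in different clusters without being told there are two. Real models of vowel learning work in multidimensional formant space and infer the variances as well, but the inferential logic is the same.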
We make a strong attempt at Maryland to "close the circle" on phonological problems. That is, we seek models of speech sounds and patterns that can be rigorously formulated computationally and that make specific predictions which can be tested with psycholinguistic and neurolinguistic methods. Researchers are encouraged to work together to solve these problems from problem statement to algorithm to brain implementation.
Primary Faculty
Naomi Feldman
Professor, Linguistics
Member, Maryland Language Science Center
Professor, Institute for Advanced Computer Studies
1413 A Marie Mount Hall
College Park, MD 20742
William Idsardi
Professor, Linguistics
Member, Maryland Language Science Center
Program in Neuroscience and Cognitive Science
CLaME: Max Planck • NYU Center for Language Music and Emotion
1401 A Marie Mount Hall
College Park, MD 20742
Kate Mooney
Assistant Professor, Linguistics
Member, Maryland Language Science Center
Marie Mount Hall
College Park, MD 20742
Phonology Events
Phonology Activities
Language Discrimination May Not Rely on Rhythm: A Computational Study
Challenging the relationship between rhythm and language discrimination in infancy.
It has long been assumed that infants’ ability to discriminate between languages stems from their sensitivity to speech rhythm, i.e., the organized temporal structure of vowels and consonants in a language. However, the relationship between speech rhythm and language discrimination has not been directly demonstrated. Here, we use computational modeling and train models of speech perception with and without access to information about rhythm. We test these models on language discrimination, and find that access to rhythm does not affect the models’ success in replicating infant language discrimination results. Our findings challenge the assumed relationship between rhythm and language discrimination.
On substance and Substance-Free Phonology: Where we are at and where we are going
On the abstractness of phonology.
In this introduction [to this special issue of the journal, on substance-free phonology], I will briefly trace the development of features in phonological theory, with particular emphasis on their relationship to phonetic substance. I will show that substance-free phonology is, in some respects, the resurrection of a concept that was fundamental to early structuralist views of features as symbolic markers, whose phonological role eclipses any superficial correlates to articulatory or acoustic objects. In the process, I will highlight some of the principal questions that this epistemological tack raises, and how the articles in this volume contribute to our understanding of those questions.
Underspecification in time
Abstracting away from linear order in phonology.
Substance-free phonology or SFP (Reiss 2017) has renewed interest in the question of abstraction in phonology. Perhaps the most common form of abstraction through the absence of substance is underspecification, where some aspects of speech lack representation in memorized representations, within the phonology or in the phonetic implementation (Archangeli 1988, Keating 1988, Lahiri and Reetz 2010 among many others). The fundamental basis for phonology is argued to be a mental model of speech events in time, following Raimy (2000) and Papillon (2020). Each event can have properties (one-place predicates that are true of the event), which include the usual phonological features, and also structural entities for extended events like moras and syllables. Features can be bound together in an event, yielding segment-like properties. Pairs of events can be ordered in time by the temporal logic precedence relation represented by ‘<’. Events, features and precedence form a directed multigraph structure with edges in the graph interpreted as “maybe next”. Some infant bimodal speech perception results are examined using this framework, arguing for underspecification in time in the developing phonological representations.
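The event-graph idea in this abstract can be made concrete with a small sketch. This is my own toy construction, not Raimy's or Papillon's actual formalism: events are nodes carrying sets of features (one-place predicates), and directed edges encode the precedence relation '<', read as "maybe next". Because precedence is a relation rather than a total order, a graph can leave the order of some events underspecified, yielding more than one possible linearization.

```python
from collections import defaultdict

class SpeechEventGraph:
    """Toy directed multigraph of speech events. Nodes carry feature
    sets; edges encode the precedence relation '<' ("maybe next")."""

    def __init__(self):
        self.features = {}            # event id -> set of features
        self.next = defaultdict(set)  # event id -> possible successors

    def add_event(self, eid, feats):
        self.features[eid] = set(feats)

    def add_precedence(self, earlier, later):
        self.next[earlier].add(later)

    def linearizations(self, start, end):
        """All simple paths from start to end: the possible orderings
        left open by the (possibly underspecified) precedence edges."""
        def walk(node, seen):
            if node == end:
                yield [node]
                return
            for succ in self.next[node]:
                if succ not in seen:
                    for rest in walk(succ, seen | {succ}):
                        yield [node] + rest
        return list(walk(start, {start}))

# A fully specified /k ae t/ plus one extra "maybe next" edge k < t,
# leaving it underspecified whether the vowel event is traversed.
g = SpeechEventGraph()
for eid, feats in [("START", []), ("k", ["dorsal"]), ("ae", ["low"]),
                   ("t", ["coronal"]), ("END", [])]:
    g.add_event(eid, feats)
for a, b in [("START", "k"), ("k", "ae"), ("ae", "t"),
             ("t", "END"), ("k", "t")]:
    g.add_precedence(a, b)
```

Here `g.linearizations("START", "END")` yields two orderings, one with and one without the vowel event, which is one simple way to picture "underspecification in time": the graph commits to less than a single linear string does.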