Phonetics and phonology are the sciences that study human speech sounds and sound patterns. At Maryland, we take this to mean building mental models of speech sounds and sound patterns.

Phonetics is traditionally divided into three areas: articulatory phonetics (how speech sounds are produced by the tongue and mouth), acoustic phonetics (the physical properties of the resulting sound waves), and auditory phonetics (how speech sounds are processed and perceived by the ear and brain). Our main research emphasis is on speech perception, which connects us with the strong auditory neuroscience community at the Maryland Center for the Comparative and Evolutionary Biology of Hearing. Phonology extends these areas by studying the representation of sounds and words in long-term memory.

Phonological studies at Maryland have strong connections to the other areas of the department. We study how people (and machines) perceive speech and recognize words using computational, psycholinguistic, and neurolinguistic methods. We are fortunate to have our own magnetoencephalographic (MEG) system, which allows us to record ongoing brain activity to try to discover the code that the brain uses to represent speech. Interestingly, the brain seems to use both location in the auditory cortex and timing patterns to represent various properties of speech sounds.

We also study how children (and machines) learn the sounds and sound patterns of their native languages. Many of these studies are done in conjunction with the Infant Language Lab. In these studies, young children (some as young as two months) match spoken words with faces or detect changes in sound patterns. We also employ computational models of sound pattern learning and word learning, especially non-parametric Bayesian methods to discover speech sound categories, such as vowel systems.
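To give a flavor of what "discovering speech sound categories" means computationally, here is a toy sketch in the spirit of non-parametric Bayesian clustering. It is not one of the lab's actual models: it runs a single greedy pass of a Chinese-restaurant-process-style assignment over synthetic two-formant (F1/F2) values, so the number of vowel categories is not fixed in advance but grows as the data demand. The formant numbers, the concentration parameter, and the assumed within-category spread are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic F1/F2 formant values (Hz) for two hypothetical vowel regions.
# These numbers are illustrative, not real measurements.
data = np.vstack([
    rng.normal([300, 2300], 60, size=(20, 2)),   # /i/-like region
    rng.normal([700, 1200], 60, size=(20, 2)),   # /a/-like region
])
rng.shuffle(data)

alpha = 1.0      # CRP concentration: willingness to posit a new category
sigma = 150.0    # assumed within-category standard deviation (Hz)

clusters = []    # each cluster is a list of formant vectors
for x in data:
    # Score each existing cluster: CRP prior (proportional to cluster
    # size) times a Gaussian likelihood around the cluster's running mean.
    scores = []
    for c in clusters:
        mu = np.mean(c, axis=0)
        lik = np.exp(-np.sum((x - mu) ** 2) / (2 * sigma ** 2))
        scores.append(len(c) * lik)
    scores.append(alpha * 1e-6)  # small base likelihood for a new cluster
    choice = int(np.argmax(scores))  # greedy MAP assignment for simplicity
    if choice == len(clusters):
        clusters.append([x])
    else:
        clusters[choice].append(x)

print(len(clusters))  # the two vowel-like regions emerge as two categories
```

A full model would sample assignments (Gibbs sampling) rather than take the greedy argmax, and would infer the category means and variances rather than fix them, but the core idea is the same: category inventory size is an outcome of inference, not an input.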
At Maryland we make a strong effort to "close the circle" on phonological problems. That is, we seek models of speech sounds and patterns that can be rigorously formulated computationally and that make specific predictions testable with psycholinguistic and neurolinguistic methods. Researchers are encouraged to work together on these problems, from problem statement to algorithm to brain implementation.

Primary Faculty

Naomi Feldman

Associate Professor, Linguistics

1413 A Marie Mount Hall
College Park, MD 20742

(301) 405-5800

William Idsardi

Professor, Linguistics

1401 A Marie Mount Hall
College Park, MD 20742

(301) 405-8376

Comment on “Nonadjacent dependency processing in monkeys, apes, and humans”

Auditory pattern recognition in nonhuman animals shares important characteristics with human phonology, but not human syntax.


Contributor(s): William Idsardi
Non-ARHU Contributor(s): Jonathan Rawski (Stony Brook), Jeffrey Heinz (Stony Brook)

We comment on the technical interpretation of the study by Watson et al. and caution against their conclusion that the behavioral evidence in their experiments points to nonhuman animals' ability to learn syntactic dependencies: their results are equally consistent with the learning of phonological dependencies found in human languages.

Read More about Comment on “Nonadjacent dependency processing in monkeys, apes, and humans”

There is a simplicity bias when generalising from ambiguous data

How do phonological learners choose among generalizations of differing complexity?


Contributor(s): Adam Liter
Non-ARHU Contributor(s): Karthik Durvasula

How exactly do learners generalize in the face of ambiguous data? While there has been a substantial amount of research studying the biases that learners employ, there has been very little work on what sorts of biases are employed in the face of data that is ambiguous between phonological generalizations with different degrees of complexity. In this article, we present the results from three artificial language learning experiments that suggest that, at least for phonotactic sequence patterns, learners are able to keep track of multiple generalizations related to the same segmental co-occurrences; however, the generalizations they learn are only the simplest ones consistent with the data.

Read More about There is a simplicity bias when generalising from ambiguous data

Advanced second language learners' perception of lexical tone contrasts

Mandarin tones are difficult for advanced L2 learners. But the difficulty comes primarily from the need to process tones lexically, and not from an inability to perceive tones phonetically.


Contributor(s): Ellen Lau
Non-ARHU Contributor(s): Eric Pelzl, Taomei Guo, Robert DeKeyser

It is commonly believed that second language (L2) acquisition of lexical tones presents a major challenge for learners from nontonal language backgrounds. This belief is somewhat at odds with research that consistently shows beginning learners making quick gains through focused tone training, as well as research showing advanced learners achieving near-native performance in tone identification tasks. However, other long-term difficulties related to L2 tone perception may persist, given the additional demands of word recognition and the effects of context. In the current study, we used behavioral and event-related potential (ERP) experiments to test whether perception of Mandarin tones is difficult for advanced L2 learners in isolated syllables, disyllabic words in isolation, and disyllabic words in sentences. Stimuli were more naturalistic and challenging than in previous research. While L2 learners excelled at tone identification in isolated syllables, they performed with very low accuracy in rejecting disyllabic tonal nonwords in isolation and in sentences. We also report ERP data from critical mismatching words in sentences; while L2 listeners showed no significant differences in responses in any condition, trends were not inconsistent with the overall pattern in behavioral results of less sensitivity to tone mismatches than to semantic or segmental mismatches. We interpret these results as evidence that Mandarin tones are in fact difficult for advanced L2 learners. However, the difficulty is not due primarily to an inability to perceive tones phonetically, but instead is driven by the need to process tones lexically, especially in multisyllable words.

Read More about Advanced second language learners' perception of lexical tone contrasts