
Phonology

Phonetics and phonology are the sciences that study human speech sounds and sound patterns. At Maryland we take this to mean building mental models for speech sounds and sound patterns.

Phonetics is traditionally divided into three areas: articulatory phonetics (how speech sounds are produced by the tongue and mouth), acoustic phonetics (the physical properties of the resulting sound waves), and auditory phonetics (how speech sounds are processed and perceived by the ear and brain). Our main research emphasis is on speech perception, which connects with the strong auditory neuroscience community at the Maryland Center for the Comparative and Evolutionary Biology of Hearing. Phonology adds to these areas the study of how sounds and words are represented in long-term memory.

Phonological studies at Maryland have strong connections to the other areas of the department. We study how people (and machines) can perceive speech and recognize words, using computational, psycholinguistic and neurolinguistic methods. We are fortunate to have our own magnetoencephalographic (MEG) system, which allows us to record ongoing brain activity to try to discover the code that the brain uses to represent speech. Interestingly, the brain seems to use both location in the auditory cortex and timing patterns to represent various properties of speech sounds. We also study how children (and machines) can learn the sounds and sound patterns of their native languages. Many of these studies are done in conjunction with the Infant Language Lab. In these studies, young children (some as young as two months) match spoken words with faces or detect changes in sound patterns. We also employ computational models of sound pattern learning and word learning, especially non-parametric Bayesian methods to discover speech sound categories, such as vowel systems.
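As a minimal sketch of the kind of non-parametric Bayesian category discovery mentioned above (not the lab's actual pipeline), the snippet below fits a truncated Dirichlet-process Gaussian mixture to synthetic F1/F2 formant tokens and reports how many vowel-like categories it infers. The formant values, cluster sizes, and weight threshold are invented for illustration.

```python
# A minimal sketch (not the lab's actual pipeline): discovering vowel-like
# categories from synthetic F1/F2 formant data with a Dirichlet-process
# Gaussian mixture, a common non-parametric Bayesian approach.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)

# Hypothetical formant means (F1, F2 in Hz) for three vowel categories.
means = np.array([[300, 2300],   # roughly /i/-like
                  [700, 1200],   # roughly /a/-like
                  [350,  800]])  # roughly /u/-like
tokens = np.vstack([rng.normal(m, [40, 120], size=(200, 2)) for m in means])

# Truncated Dirichlet-process mixture: n_components is only an upper bound;
# the model shrinks the weights of unneeded components toward zero.
dpgmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    max_iter=500,
    random_state=0,
).fit(tokens)

# Count components that carry non-negligible weight (threshold is arbitrary).
used = dpgmm.weights_ > 0.05
print("inferred number of vowel categories:", used.sum())
```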
 
We make a strong attempt at Maryland to "close the circle" on phonological problems. That is, we seek models of speech sounds and patterns that can be rigorously formulated computationally and that make specific predictions which can be tested with psycholinguistic and neurolinguistic methods. Researchers are encouraged to work together to solve these problems from problem statement to algorithm to brain implementation.
 

Primary Faculty

Naomi Feldman

Associate Professor, Linguistics

1413 A Marie Mount Hall
College Park, MD 20742

(301) 405-5800

William Idsardi

Professor, Linguistics

1401 A Marie Mount Hall
College Park, MD 20742

(301) 405-8376

There is a simplicity bias when generalising from ambiguous data

How do phonological learners choose among generalizations of differing complexity?

Linguistics

Contributor(s): Adam Liter
Non-ARHU Contributor(s): Karthik Durvasula

How exactly do learners generalize in the face of ambiguous data? While there has been a substantial amount of research studying the biases that learners employ, there has been very little work on what sorts of biases are employed in the face of data that is ambiguous between phonological generalizations with different degrees of complexity. In this article, we present the results from three artificial language learning experiments that suggest that, at least for phonotactic sequence patterns, learners are able to keep track of multiple generalizations related to the same segmental co-occurrences; however, the generalizations they learn are only the simplest ones consistent with the data.
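As a toy illustration of the idea in this abstract (not the authors' experimental design or analysis), the sketch below has a learner keep every candidate phonotactic generalization consistent with ambiguous data but generalize with the simplest one. The constraint names, training forms, and complexity metric are all invented for the example.

```python
# A toy illustration of a simplicity bias: the learner tracks all candidate
# generalizations consistent with its data, then prefers the simplest one.
# Everything here (constraints, forms, complexity scores) is hypothetical.

# Candidate generalizations: (complexity score, predicate over word strings).
candidates = {
    "*[+nasal]#":         (1, lambda w: not w.endswith("n")),
    "*[+nasal][-voice]":  (2, lambda w: "nt" not in w and "nk" not in w),
    "*[+nasal][-voice]#": (3, lambda w: not (w.endswith("nt") or w.endswith("nk"))),
}

# Ambiguous training data: consistent with more than one generalization.
training_data = ["bado", "gani", "tamo", "dola"]

consistent = {name: (cost, pred) for name, (cost, pred) in candidates.items()
              if all(pred(w) for w in training_data)}

# The learner keeps every consistent generalization...
print("consistent generalizations:", sorted(consistent))
# ...but generalizes with the simplest one.
simplest = min(consistent, key=lambda name: consistent[name][0])
print("chosen (simplest) generalization:", simplest)
```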


Advanced second language learners' perception of lexical tone contrasts

Mandarin tones are difficult for advanced L2 learners. But the difficulty comes primarily from the need to process tones lexically, and not from an inability to perceive tones phonetically.

Linguistics

Contributor(s): Ellen Lau
Non-ARHU Contributor(s): Eric Pelzl, Taomei Guo, Robert DeKeyser
It is commonly believed that second language (L2) acquisition of lexical tones presents a major challenge for learners from nontonal language backgrounds. This belief is somewhat at odds with research that consistently shows beginning learners making quick gains through focused tone training, as well as research showing advanced learners achieving near-native performance in tone identification tasks. However, other long-term difficulties related to L2 tone perception may persist, given the additional demands of word recognition and the effects of context. In the current study, we used behavioral and event-related potential (ERP) experiments to test whether perception of Mandarin tones is difficult for advanced L2 learners in isolated syllables, disyllabic words in isolation, and disyllabic words in sentences. Stimuli were more naturalistic and challenging than in previous research. While L2 learners excelled at tone identification in isolated syllables, they performed with very low accuracy in rejecting disyllabic tonal nonwords in isolation and in sentences. We also report ERP data from critical mismatching words in sentences; while L2 listeners showed no significant differences in responses in any condition, trends were not inconsistent with the overall pattern in behavioral results of less sensitivity to tone mismatches than to semantic or segmental mismatches. We interpret these results as evidence that Mandarin tones are in fact difficult for advanced L2 learners. However, the difficulty is not due primarily to an inability to perceive tones phonetically, but instead is driven by the need to process tones lexically, especially in multisyllable words.


Computational phonology today

Bill Idsardi and Jeff Heinz highlight important aspects of today's computational phonology.

Linguistics

Contributor(s): William Idsardi
Broadly speaking, computational phonology encompasses a variety of techniques and goals (see Daland 2014 for a survey). In this introduction we would like to highlight three aspects of current work in computational phonology: data science and model comparison, modelling phonological phenomena using computational simulations, and characterising the computational nature of phonological patterning with theorems and proofs. Papers in this thematic issue illustrate all three of these trends, and sometimes more than one of them. The way we group them in this introduction is meant to highlight the similarities between them, and not to diminish the importance of their other contributions. As we discuss these areas, we also highlight important conceptual issues that we believe are often overlooked.
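As a small example of the third strand (characterising the computational nature of phonological patterning), the snippet below checks strings against a strictly 2-local grammar, i.e., a finite set of banned adjacent bigrams over edge-marked words. The segments and the banned bigram are hypothetical, chosen only to make the sketch concrete.

```python
# A sketch of a strictly 2-local (SL2) phonotactic grammar: a word is
# grammatical iff no banned bigram occurs in the edge-marked string #word#.
def sl2_grammatical(word, banned_bigrams, edge="#"):
    """Return True iff no banned bigram occurs in #word# (edges marked)."""
    padded = edge + word + edge
    return not any(padded[i:i + 2] in banned_bigrams
                   for i in range(len(padded) - 1))

# Hypothetical constraint: no word-final nasal (*n#), stated as a banned bigram.
banned = {"n#"}
for w in ["pan", "pano", "tan", "tano"]:
    print(w, sl2_grammatical(w, banned))
```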
