
Computational Linguistics

At Maryland we use computation in two ways: to build formal models of language structure, processing and learning, and to build technologies that make use of human languages.

Computational linguistics at Maryland has two aspects. The first, known as "computational psycholinguistics," uses computational models to better understand how people comprehend, generate and learn language, and to characterize the human language capacity as a formal computational system. Researchers at Maryland have particular interests in using models to investigate problems in phonetics and phonology, psycholinguistics and language acquisition.

Computational linguistics also has a practical side, sometimes referred to as "natural language processing" or "human language technology." Here the goal is to make computers smarter about human language, improving the automated analysis and generation of text, with results that can interact effectively with other information systems.

These two strands of computational linguistics are connected by shared methods (such as Bayesian models), a shared concern with grounding theories in naturally occurring linguistic data and a shared view of language as a fundamentally computational system for which formally explicit models and theories can be specified, designed and tested.
 
Our department has close ties to the Computational Linguistics and Information Processing Laboratory (CLIP Lab) at UMD's Institute for Advanced Computer Studies, where colleagues from linguistics, computer science and the College of Information Studies (iSchool) work together to advance the state of the art in such areas as machine translation, automatic summarization, information retrieval, question answering and computational social science.

A Formal Model of Ambiguity and its Applications in Machine Translation

A model of language processing for machine translation that copes with ambiguity by operating on weighted sets of inputs and outputs, committing to a single analysis only as a last resort.

Systems that process natural language must cope with and resolve ambiguity. This dissertation advocates a model of language processing in which multiple inputs and multiple analyses of inputs are considered concurrently, and a single analysis is chosen only as a last resort. Compared to conventional models, this approach replaces single-element inputs and outputs with weighted sets of inputs and outputs. Although processing components must deal with sets rather than individual elements, constraints are imposed on the elements of these sets, and the representations from existing models may be reused. However, to deal efficiently with large (or infinite) sets, compact representations that share structure between elements, such as weighted finite-state transducers and synchronous context-free grammars, are necessary; these representations, and algorithms for manipulating them, are discussed in depth. To establish the effectiveness and tractability of the proposed processing model, it is applied to several problems in machine translation. Starting with spoken language translation, it is shown that translating a set of transcription hypotheses yields better translations than a baseline in which a single (1-best) transcription hypothesis is selected and then translated, independent of the translation model formalism used. More subtle forms of ambiguity that arise even in text-only translation (such as decisions conventionally made during system development about how to preprocess text) are then discussed, and it is shown that the ambiguity-preserving paradigm can be employed in these cases as well, again improving translation quality. Finally, a model for supervised learning from training data in which sets (rather than single elements) of correct labels are provided for each training instance is introduced and used to learn a model of compound word segmentation, a preprocessing step in machine translation.
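The contrast between the 1-best baseline and the ambiguity-preserving approach can be sketched in a few lines of Python. The toy transcription hypotheses and translation scores below are hypothetical, and a real system would represent these sets compactly (as lattices, weighted finite-state transducers or synchronous grammars) rather than enumerating them; this is only an illustration of why carrying the whole weighted set helps.

# Toy sketch of ambiguity-preserving spoken language translation.
# All data are hypothetical; a real system would use compact set
# representations (lattices / WFSTs) instead of explicit dictionaries.

# Transcription hypotheses from a speech recognizer, with posterior weights.
hypotheses = {
    "the tax is here": 0.45,   # 1-best hypothesis, but possibly wrong
    "the taxi is here": 0.40,
    "the taxes here": 0.15,
}

# Translation scores P(target | source) for each hypothesis.
translations = {
    "the tax is here": {"la taxe est ici": 0.70, "l'impot est ici": 0.30},
    "the taxi is here": {"le taxi est ici": 0.95, "un taxi est ici": 0.05},
    "the taxes here": {"les taxes ici": 0.80, "les impots ici": 0.20},
}

def one_best(hyps, table):
    """Baseline: commit to the single best transcription, then translate."""
    src = max(hyps, key=hyps.get)
    tgt = max(table[src], key=table[src].get)
    return src, tgt, hyps[src] * table[src][tgt]

def set_based(hyps, table):
    """Ambiguity-preserving: score all (source, target) pairs jointly."""
    return max(((s, t, w * p)
                for s, w in hyps.items()
                for t, p in table[s].items()),
               key=lambda triple: triple[2])

print(one_best(hypotheses, translations))
# -> ('the tax is here', 'la taxe est ici', 0.315)
print(set_based(hypotheses, translations))
# -> ('the taxi is here', 'le taxi est ici', 0.38): a lower-ranked
#    transcription wins once translation evidence is taken into account.

The set-based search prefers a transcription the recognizer ranked second because the translation model is far more confident about it; deferring the choice of a single analysis until all the evidence is in is exactly the intuition behind the spoken language translation results above.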

A role for the developing lexicon in phonetic category acquisition

Bayesian models and artificial language learning tasks show that infant acquisition of phonetic categories can be helpfully constrained by feedback from word segmentation.

Contributor(s): Naomi Feldman
Infants segment words from fluent speech during the same period when they are learning phonetic categories, yet accounts of phonetic category acquisition typically ignore information about the words in which sounds appear. We use a Bayesian model to illustrate how feedback from segmented words might constrain phonetic category learning by providing information about which sounds occur together in words. Simulations demonstrate that word-level information can successfully disambiguate overlapping English vowel categories. Learning patterns in the model are shown to parallel human behavior from artificial language learning tasks. These findings point to a central role for the developing lexicon in phonetic category acquisition and provide a framework for incorporating top-down constraints into models of category learning.
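A minimal simulation can convey why word-level feedback is so useful. This is not the paper's actual Bayesian model, which jointly infers the lexicon and the categories; the mini-lexicon, the one-dimensional acoustic values and the assumption that the learner already knows word identities are all simplifications made for illustration.

import random
import statistics

random.seed(0)

# Two hypothetical vowel categories on one acoustic dimension, overlapping
# heavily: the pooled distribution of tokens is close to unimodal.
MU = {"i": 0.0, "I": 1.0}   # category means
SD = 1.0                    # within-category spread (large relative to gap)

# Hypothetical mini-lexicon: each word type consistently uses one vowel.
lexicon = {"bit": "I", "dig": "I", "beat": "i", "deed": "i"}

# Sample word tokens: (word, acoustic value of its vowel).
tokens = [(w, random.gauss(MU[v], SD))
          for w, v in lexicon.items()
          for _ in range(200)]

# Purely distributional view: pool all vowel tokens together.
pooled = [x for _, x in tokens]
print("pooled: mean %.2f, sd %.2f" %
      (statistics.mean(pooled), statistics.stdev(pooled)))
# The overlap hides the two-category structure from a pooled learner.

# Word-level view: group tokens by the word they occurred in.
by_word = {}
for w, x in tokens:
    by_word.setdefault(w, []).append(x)
for w in sorted(by_word):
    print("%-4s mean %.2f" % (w, statistics.mean(by_word[w])))
# Word means cluster tightly near 0.0 or 1.0, giving much cleaner
# evidence that there are two vowel categories.

Averaging tokens within a word type shrinks the noise, so the bimodal structure that is invisible in the pooled data re-emerges; the paper's model exploits the same information while simultaneously inferring the words themselves.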

A unified account of categorical effects in phonetic perception

A statistical model that explains both the strong categorical effects in the perception of consonants and the very weak effects in the perception of vowels.

Contributor(s): Naomi Feldman
Categorical effects are found across speech sound categories, with the degree of these effects ranging from extremely strong categorical perception in consonants to nearly continuous perception in vowels. We show that both strong and weak categorical effects can be captured by a unified model. We treat speech perception as a statistical inference problem, assuming that listeners use their knowledge of categories as well as the acoustics of the signal to infer the intended productions of the speaker. Simulations show that the model provides close fits to empirical data, unifying past findings of categorical effects in consonants and vowels and capturing differences in the degree of categorical effects through a single parameter.

Read more about "A unified account of categorical effects in phonetic perception": http://dx.doi.org/10.3758/s13423-016-1049-y
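The inference the abstract describes can be sketched in one dimension: the listener hears a noisy stimulus, infers which category produced it and reports the expected intended production. The Gaussian form, the category locations and the variance values below are illustrative assumptions; in this sketch, the degree of categorical warping is governed by a single quantity, the ratio of meaningful within-category variance to perceptual noise.

from math import exp

def perceive(S, cats, var_c, var_noise):
    """Posterior-mean percept of the intended production T given stimulus S.

    cats: category means (equal priors assumed); var_c: meaningful
    within-category variance; var_noise: acoustic noise variance.
    """
    var_s = var_c + var_noise      # S ~ Normal(mu_c, var_c + var_noise)
    lik = [exp(-(S - mu) ** 2 / (2 * var_s)) for mu in cats]
    total = sum(lik)
    post = [l / total for l in lik]                          # P(c | S)
    shrink = var_c / var_s                # fraction of S that is "signal"
    means = [shrink * S + (1 - shrink) * mu for mu in cats]  # E[T | S, c]
    return sum(p * m for p, m in zip(post, means))           # E[T | S]

cats = [0.0, 1.0]   # two hypothetical categories on one acoustic dimension
for S in (0.10, 0.30, 0.45):
    consonant_like = perceive(S, cats, var_c=0.01, var_noise=0.05)
    vowel_like = perceive(S, cats, var_c=0.09, var_noise=0.01)
    print("S=%.2f  consonant-like %.3f  vowel-like %.3f"
          % (S, consonant_like, vowel_like))
# When noise dominates category variance (consonant-like), percepts snap
# toward the category means (~0.02, ~0.08, ~0.33); when category variance
# dominates (vowel-like), percepts track the stimulus (~0.09, ~0.28, ~0.44).

Sweeping the variance ratio between these two settings moves the sketch smoothly from strongly categorical to nearly continuous perception, consistent with the abstract's claim that a single parameter captures the difference between consonants and vowels.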

Primary Faculty

Naomi Feldman

Associate Professor, Linguistics

1413 A Marie Mount Hall
College Park, MD 20742

(301) 405-5800

Philip Resnik

Professor, Linguistics

1401 C Marie Mount Hall
College Park, MD 20742

(301) 405-6760

Amy Weinberg

Professor Emerita, Linguistics

Secondary Faculty

William Idsardi

Professor, Linguistics

1401 A Marie Mount Hall
College Park, MD 20742

(301) 405-8376