
Research

Research at our top-ranked department spans syntax, semantics, phonology, language acquisition, computational linguistics, psycholinguistics and neurolinguistics. 

Connections between our core competencies are strong, with theoretical, experimental and computational work typically pursued in tandem.

A network of collaboration at all levels sustains a research climate that is both vigorous and friendly. Here new ideas develop in conversation, stimulated by the steady activity of our labs and research groups, frequent student meetings with faculty, regular talks by local and invited scholars and collaborations with the broader University of Maryland language science community, the largest and most integrated language science research community in North America.


Memory and Prediction in Cross-Linguistic Sentence Processing

A dissertation from Sol Lago on the retrieval from memory of the syntactic features of prior noun phrases during the processing of anaphora and agreement.

Linguistics

Non-ARHU Contributor(s): Sol Lago
This dissertation explores the role of morphological and syntactic variation in sentence comprehension across languages. While most previous research has focused on how cross-linguistic differences affect the control structure of the language architecture (Lewis & Vasishth, 2005), here we adopt an explicit model of memory, content-addressable memory (Lewis & Vasishth, 2005; McElree, 2006), and examine how cross-linguistic variation affects the nature of the representations and processes that speakers deploy during comprehension. With this goal, we focus on two kinds of grammatical dependencies that involve an interaction between language and memory: subject-verb agreement and referential pronouns. In the first part of this dissertation, we use the self-paced reading method to examine how the processing of subject-verb agreement in Spanish, a language with a rich morphological system, differs from that in English. We show that differences in morphological richness across languages impact prediction processes while leaving retrieval processes fairly preserved. In the second part, we examine the processing of coreference in German, a language that, in contrast with English, encodes gender syntactically. We use eye-tracking to compare comprehension profiles during coreference, and we find that only speakers of German show evidence of semantic reactivation of a pronoun’s antecedent. This suggests that retrieval of semantic information is dependent on syntactic gender, and demonstrates that German and English speakers retrieve qualitatively different antecedent representations from memory. Taken together, these results suggest that cross-linguistic variation in comprehension is shaped more by the content of gender and number features than by their functional importance across languages.

Expanding our Reach and Theirs: When Linguists go to High School

A report on outreach to local schools by the community of language scientists at UMCP.

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s): Yakov Kronrod
In 2007, we began an outreach program in Linguistics with psychology students in a local majority–minority high school. In the years since, the initial collaboration has grown to include other schools and nurtured a culture of community engagement in the language sciences at the University of Maryland. The program has led to a number of benefits for both the public school students and the University researchers involved. Over the years, our efforts have developed into a multi-faceted outreach program targeting primary and secondary schools as well as the public more broadly. Through our outreach, we attempt to take a modest step toward increasing public awareness and appreciation of the importance of language science, toward integrating research into the school curriculum, and toward giving potential first-generation college students a taste of what they are capable of. In this article, we describe in detail our motivations and goals, the activities themselves, and where we can go from here.


Spatiotemporal signatures of lexical-semantic prediction

Ellen Lau finds evidence that facilitatory effects of lexical–semantic prediction on the electrophysiological response 350–450 ms postonset reflect modulation of activity in left anterior temporal cortex.

Linguistics

Contributor(s): Ellen Lau
Non-ARHU Contributor(s): Kirsten Weber, Alexandre Gramfort, Matti Hamalainen, Gina Kuperberg
Although there is broad agreement that top-down expectations can facilitate lexical–semantic processing, the mechanisms driving these effects are still unclear. In particular, while previous electroencephalography (EEG) research has demonstrated a reduction in the N400 response to words in a supportive context, it is often challenging to dissociate facilitation due to bottom-up spreading activation from facilitation due to top-down expectations. The goal of the current study was to specifically determine the cortical areas associated with facilitation due to top-down prediction, using magnetoencephalography (MEG) recordings supplemented by EEG and functional magnetic resonance imaging (fMRI) in a semantic priming paradigm. In order to modulate expectation processes while holding context constant, we manipulated the proportion of related pairs across 2 blocks (10 and 50% related). Event-related potential results demonstrated a larger N400 reduction when a related word was predicted, and MEG source localization of activity in this time-window (350–450 ms) localized the differential responses to left anterior temporal cortex. fMRI data from the same participants support the MEG localization, showing contextual facilitation in left anterior superior temporal gyrus for the high expectation block only. Together, these results provide strong evidence that facilitatory effects of lexical–semantic prediction on the electrophysiological response 350–450 ms postonset reflect modulation of activity in left anterior temporal cortex.


How Nature Meets Nurture: Universal Grammar and Statistical Learning

Children acquire grammars on the basis of statistical information, interpreted through a system of linguistic representation that is substantially innate. Jeff Lidz and Annie Gagliardi propose a model of the process.

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s): Annie Gagliardi
Evidence of children’s sensitivity to statistical features of their input in language acquisition is often used to argue against learning mechanisms driven by innate knowledge. At the same time, evidence of children acquiring knowledge that is richer than the input supports arguments in favor of such mechanisms. This tension can be resolved by separating the inferential and deductive components of the language learning mechanism. Universal Grammar provides representations that support deductions about sentences that fall outside of experience. In addition, these representations define the evidence that learners use to infer a particular grammar. The input is compared with the expected evidence to drive statistical inference. In support of this model, we review evidence of (a) children’s sensitivity to the environment, (b) mismatches between input and intake, (c) the need for learning mechanisms beyond innate representations, and (d) the deductive consequences of children’s acquired syntactic representations.
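The division of labor described above can be pictured with a toy Bayesian learner. The sketch below is only illustrative and is not the authors' model: the candidate grammars, their expected-evidence distributions, and the toy intake are all invented assumptions. It simply shows how intake can be compared with the evidence each UG-supplied grammar predicts in order to drive statistical inference.

```python
from collections import Counter

# A minimal sketch (not the authors' model) of the inferential component described
# above: candidate grammars made available by Universal Grammar each define the
# evidence they lead the learner to expect, and the intake is compared with that
# expected evidence via Bayesian inference. Grammars, likelihoods, and the toy
# intake below are all illustrative assumptions.

def posterior(grammars, intake):
    """P(grammar | intake) for grammars given as {name: {evidence_type: probability}}."""
    scores = {}
    for name, expected in grammars.items():
        p = 1.0
        for datum, count in Counter(intake).items():
            p *= expected.get(datum, 1e-6) ** count  # likelihood of each intake datum
        scores[name] = p  # uniform prior over candidate grammars
    total = sum(scores.values())
    return {name: p / total for name, p in scores.items()}

# Toy example: two candidate grammars predicting different rates of a
# (hypothetical) evidence type in the intake.
grammars = {
    "G1": {"cue_present": 0.8, "cue_absent": 0.2},
    "G2": {"cue_present": 0.2, "cue_absent": 0.8},
}
intake = ["cue_present"] * 7 + ["cue_absent"] * 3
print(posterior(grammars, intake))
```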


Bayesian Model of Categorical Effects in L1 and L2 Speech Processing

A computational model of categorical effects in both first and second language speech perception.

Linguistics

Non-ARHU Contributor(s): Yakov Kronrod
In this dissertation I present a model that captures categorical effects in both first language (L1) and second language (L2) speech perception. In L1 perception, categorical effects range from extremely strong for consonants to nearly continuous perception for vowels. I treat the problem of speech perception as a statistical inference problem, and by quantifying categoricity I obtain a unified model of both strong and weak categorical effects. In this optimal inference mechanism, the listener uses their knowledge of categories and the acoustics of the signal to infer the intended productions of the speaker. The model splits up speech variability into meaningful category variance and perceptual noise variance. The ratio of these two variances, which I call Tau, directly correlates with the degree of categorical effects for a given phoneme or continuum. By fitting the model to behavioral data from different phonemes, I show how a single parametric quantitative variation can lead to the different degrees of categorical effects seen in perception experiments with different phonemes. In L2 perception, L1 categories have been shown to exert an effect on how L2 sounds are identified and how well the listener is able to discriminate them. Various models have been developed to relate the state of L1 categories with both the initial and eventual ability to process the L2. These models have largely lacked a formalized metric to measure perceptual distance, a means of making a priori predictions of behavior for a new contrast, and a way of describing non-discrete gradient effects. In the second part of my dissertation, I apply the same computational model that I used to unify L1 categorical effects to examining L2 perception. I show that we can use the model to make the same type of predictions as other SLA models, but also provide a quantitative framework while formalizing all measures of similarity and bias. Further, I show how, by using this model to consider L2 learners at different stages of development, we can track specific parameters of categories as they change over time, giving us a look into the actual process of L2 category development.
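The inference mechanism described above can be sketched compactly. The following is a minimal illustration, not the dissertation's code: it assumes Gaussian categories and Gaussian perceptual noise, and the category means, variances, and the variance ratio playing the role of Tau are illustrative placeholders.

```python
import numpy as np

# A minimal sketch of an optimal-inference model of this kind: the listener infers
# the intended production T from a noisy acoustic signal S, given Gaussian categories.
# Variable names (mus, var_c, var_noise) and values are illustrative assumptions.

def perceive(S, mus, var_c, var_noise, priors=None):
    """Posterior expectation of the intended production given signal S."""
    mus = np.asarray(mus, dtype=float)
    if priors is None:
        priors = np.ones_like(mus) / len(mus)
    # Likelihood of S under each category: S ~ N(mu_c, var_c + var_noise)
    total_var = var_c + var_noise
    lik = np.exp(-(S - mus) ** 2 / (2 * total_var)) / np.sqrt(2 * np.pi * total_var)
    post = priors * lik
    post /= post.sum()
    # Within a category, the optimal estimate shrinks S toward the category mean;
    # the amount of shrinkage depends on the ratio of category variance to noise variance.
    est_given_c = (var_c * S + var_noise * mus) / (var_c + var_noise)
    return np.dot(post, est_given_c)

# Example: two categories on an arbitrary acoustic dimension. A small
# var_c / var_noise ratio yields strong warping toward category means
# (consonant-like perception); a large ratio yields near-continuous perception.
stimuli = np.linspace(-3, 3, 7)
print([round(perceive(s, mus=[-1.0, 1.0], var_c=0.2, var_noise=1.0), 2) for s in stimuli])
```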

The structure-sensitivity of memory access: Evidence from Mandarin Chinese

Interpretation of a reflexive pronoun requires consultation of memory for prior context. What role does the syntax of that context play in guiding that process? Brian Dillon reports a study on Mandarin Chinese.

Linguistics

Contributor(s): Colin Phillips
Non-ARHU Contributor(s): Brian Dillon, Wing Yee Chow, Matt Wagers, Taomei Guo, Fengqin Liu
The present study examined the processing of the Mandarin Chinese long-distance reflexive ziji to evaluate the role that syntactic structure plays in the memory retrieval operations that support sentence comprehension. Using the multiple-response speed-accuracy tradeoff (MR-SAT) paradigm, we measured the speed with which comprehenders retrieve an antecedent for ziji. Our experimental materials contrasted sentences where ziji's antecedent was in the local clause with sentences where ziji's antecedent was in a distant clause. Time course results from MR-SAT suggest that ziji dependencies with syntactically distant antecedents are slower to process than syntactically local dependencies. To aid in interpreting the SAT data, we present a formal model of the antecedent retrieval process, and derive quantitative predictions about the time course of antecedent retrieval. The modeling results support the Local Search hypothesis: during syntactic retrieval, comprehenders initially limit memory search to the local syntactic domain. We argue that the Local Search hypothesis has important implications for theories of locality effects in sentence comprehension. In particular, our results suggest that not all locality effects may be reduced to the effects of temporal decay and retrieval interference.
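For readers unfamiliar with the paradigm, the sketch below shows the standard speed-accuracy tradeoff (SAT) function commonly used to describe time-course data of this kind. It is not the authors' formal model of antecedent retrieval, and the parameter values for the "local" and "distant" conditions are purely illustrative.

```python
import numpy as np

# The standard SAT function: d'(t) = lambda * (1 - exp(-beta * (t - delta))) for
# t > delta, else 0, where lambda is asymptotic accuracy, beta the rate of rise,
# and delta the intercept. Parameter values below are illustrative, not fitted
# to the Mandarin data.

def sat_curve(t, lam, beta, delta):
    t = np.asarray(t, dtype=float)
    return np.where(t > delta, lam * (1.0 - np.exp(-beta * (t - delta))), 0.0)

times = np.linspace(0, 3, 7)  # seconds of processing time
local   = sat_curve(times, lam=3.0, beta=2.0, delta=0.4)  # faster retrieval dynamics
distant = sat_curve(times, lam=3.0, beta=1.2, delta=0.6)  # slower retrieval dynamics
print(np.round(local, 2))
print(np.round(distant, 2))
```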


Agreement and its Failures

Omer Preminger investigates how the obligatory nature of predicate-argument agreement is enforced by the grammar.

Linguistics

Contributor(s): Omer Preminger
Publisher: MIT Press
In this book, Omer Preminger investigates how the obligatory nature of predicate-argument agreement is enforced by the grammar. Preminger argues that an empirically adequate theory of predicate-argument agreement requires recourse to an operation, whose obligatoriness is a grammatical primitive not reducible to representational properties, but whose successful culmination is not enforced by the grammar. Preminger’s argument counters contemporary approaches that find the obligatoriness of predicate-argument agreement enforced through representational means. The most prominent of these is Chomsky’s “interpretability”-based proposal, in which the obligatoriness of predicate-argument agreement is enforced through derivational time bombs. Preminger presents an empirical argument against contemporary approaches that seek to derive the obligatory nature of predicate-argument agreement exclusively from derivational time bombs. He offers instead an alternative account based on the notion of obligatory operations better suited to the facts. The crucial data come from utterances that inescapably involve attempted-but-failed agreement and are nonetheless fully grammatical. Preminger combines a detailed empirical investigation of agreement phenomena in the Kichean (Mayan) languages, Zulu (Bantu), Basque, Icelandic, and French with an extensive and rigorous theoretical exploration of the far-reaching consequences of these data. The result is a novel proposal that has profound implications for the formalism that the theory of grammar uses to derive obligatory processes and properties.

Measuring Predicates

A unified semantics for comparative constructions that departs from the view that predicates themselves express measure functions, locating measurement instead in a single morpheme: "much".

Linguistics

Non-ARHU Contributor(s): Alexis Wellwood
Determining the semantic content of sentences, and uncovering regularities between linguistic form and meaning, requires attending to both morphological and syntactic properties of a language with an eye to the notional categories that the various pieces of form express. In this dissertation, I investigate the morphosyntactic devices that English speakers (and speakers of other languages) can use to talk about comparisons between things: comparative sentences with, in English, more... than, as... as, too, enough, and others. I argue that a core component of all of these constructions is a unitary element expressing the concept of measurement. The theory that I develop departs from the standard degree-theoretic analysis of the semantics of comparatives in three crucial respects: first, gradable adjectives do not (partially or wholly) denote measure functions; second, degrees are introduced compositionally; and third, the introduction of degrees arises uniformly from the semantics of the expression much. These ideas mark a return to the classic morphosyntactic analysis of comparatives found in Bresnan (1973), while incorporating and extending semantic insights of Schwarzschild (2002, 2006). Of major interest is how the dimensions for comparison observed across the panoply of comparative constructions vary, and these are analyzed as a consequence of what is measured (individuals, events, states, etc.), rather than which expressions invoke the measurement. This shift in perspective leads to the observation of a number of regularities in the mapping between form and meaning that could not otherwise have been seen. First, the notion of measurement expressed across comparative constructions is familiar from some explications of that concept in measurement theory (e.g. Berka 1983). Second, the distinction between gradable and non-gradable adjectives is formally on a par with that between mass and count nouns, and between atelic and telic verb phrases. Third, comparatives are perceived to be acceptable if the domain for measurement is structured, and to be anomalous otherwise. Finally, elaborations of grammatical form reflexively affect which dimensions for comparison are available to interpretation.

Syntactic Head Movement and its Consequences

Assimilating head movement to phrasal movement.

Linguistics

Non-ARHU Contributor(s): Kenshi Funakoshi
This thesis attempts to assimilate head movement as far as possible to phrasal movement. In particular, I argue that if we assume that the computational system of natural languages does not discriminate head movement from phrasal movement in terms of locality and the possible mode of operation, a distributional difference between these two types of movement can be explained by the interaction between a locality constraint and an anti-locality constraint to which syntactic movement operations are subject, and crosslinguistic variations in the possibility of what I will call headless XP-movement and headless XP-ellipsis can be reduced to parameters that are responsible for the possible number of specifiers. For this purpose, this dissertation discusses a number of syntactic phenomena: nominative object constructions in Japanese, long head movement constructions in Slavic and Romance languages, multiple topicalization in Germanic languages, predicate cleft constructions in Hebrew, Polish, Brazilian Portuguese, and Yiddish, remnant VP-fronting constructions in Polish, a difference between VP-ellipsis and pseudo-gapping in English, null object constructions in Hebrew, Tagalog, Russian, European Portuguese, Japanese, Bantu languages, Persian, and Serbo-Croatian, and yes/no reply constructions in Irish and Finnish.

The Cognitive Basis for Encoding and Navigating Linguistic Structure

Dan Parker investigates when we are and are not prone to illusions of grammaticality, comparing the online processing of anaphors and NPIs.

Linguistics

Non-ARHU Contributor(s): Daniel Parker
This dissertation is concerned with the cognitive mechanisms that are used to encode and navigate linguistic structure. Successful language understanding requires mechanisms for efficiently encoding and navigating linguistic structure in memory. The timing and accuracy of linguistic dependency formation provides valuable insights into the cognitive basis of these mechanisms. Recent research on linguistic dependency formation has revealed a profile of selective fallibility: some linguistic dependencies are rapidly and accurately implemented, but others are not, giving rise to “linguistic illusions”. This profile is not expected under current models of grammar or language processing. The broad consensus, however, is that the profile of selective fallibility reflects dependency-based differences in memory access strategies, including the use of different retrieval mechanisms and the selective use of cues for different dependencies. In this dissertation, I argue that (i) the grain-size of variability is not dependency-type, and (ii) there is not a homogeneous cause for linguistic illusions. Rather, I argue that the variability is a consequence of how the grammar interacts with general-purpose encoding and access mechanisms. To support this argument, I provide three types of evidence. First, I show how to “turn on” illusions for anaphor resolution, a phenomenon that has resisted illusions in the past, reflecting a cue combinatorics scheme that prioritizes structural information in memory retrieval. Second, I show how to “turn off” a robust illusion for negative polarity item (NPI) licensing, reflecting access to the internal computations during the encoding and interpretation of emerging semantic/pragmatic representations. Third, I provide computational simulations that derive both the presence and absence of the illusions from within the same memory architecture. These findings lead to a new conception of how we mentally encode and navigate structured linguistic representations.
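As a rough illustration of the kind of memory architecture at issue, the sketch below implements a generic cue-based retrieval competition with noisy activations. It is an assumption-laden toy, not Parker's simulations: the cue labels, weights, and noise level are invented for illustration, but they show how a partially matching distractor can sometimes win retrieval and produce an illusion.

```python
import numpy as np

# A minimal sketch (an assumption, not the dissertation's simulation code) of a
# cue-based retrieval mechanism: candidate items in memory receive activation in
# proportion to how many weighted retrieval cues they match, plus noise, and the
# most active item is retrieved on each trial.

rng = np.random.default_rng(0)

def retrieve(items, cues, weights, noise_sd=0.3, n_trials=10_000):
    """Return how often each item wins the noisy retrieval competition."""
    # Match score: weighted count of cues each item satisfies.
    match = np.array([[w * item.get(cue, 0) for cue, w in zip(cues, weights)]
                      for item in items]).sum(axis=1)
    wins = np.zeros(len(items))
    for _ in range(n_trials):
        activation = match + rng.normal(0.0, noise_sd, size=len(items))
        wins[np.argmax(activation)] += 1
    return wins / n_trials

# Illustrative NPI-licensing-style configuration: the grammatical licensor matches
# a structural cue; a distractor matches only a semantic/feature cue. The distractor
# occasionally wins, yielding an illusion of acceptability.
target     = {"structural": 1, "negative": 0}
distractor = {"structural": 0, "negative": 1}
print(retrieve([target, distractor], cues=["structural", "negative"],
               weights=[1.0, 0.6]))
```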