
Research

Research at our top-ranked department spans syntax, semantics, phonology, language acquisition, computational linguistics, psycholinguistics and neurolinguistics. 

Connections between our core competencies are strong, with theoretical, experimental and computational work typically pursued in tandem.

A network of collaboration at all levels sustains a research climate that is both vigorous and friendly. Here new ideas develop in conversation, stimulated by the steady activity of our labs and research groups, frequent student meetings with faculty, regular talks by local and invited scholars and collaborations with the broader University of Maryland language science community, the largest and most integrated language science research community in North America.


Basquing in Minimalism

Alex Drummond and Norbert Hornstein review a collection of conversations with Chomsky.

Linguistics

Contributor(s): Norbert Hornstein
Non-ARHU Contributor(s): Alex Drummond
A review of **Of Minds and Language: A Dialogue with Noam Chomsky in the Basque Country**, edited by Massimo Piattelli-Palmarini, [Juan Uriagereka](/~juan/), and Pello Salaburu, Oxford University Press, 2010.

Measuring and comparing individuals and events

"He drank more wine than I did and also danced more than I did." Alexis Wellwood gives a unified analysis for both adnominal and adverbal "more," with Valentine Hacquard and faculty visitor Roumyana Pancheva.

Linguistics

Contributor(s): Valentine Hacquard
Non-ARHU Contributor(s): Alexis Wellwood, Roumyana Pancheva
This squib investigates parallels between nominal and verbal comparatives. Building on key insights of Hackl (2000) and Bale & Barner (2009), we show that more behaves uniformly when it combines with nominal and verbal predicates: (i) it cannot combine with singular count NPs or perfective telic VPs; (ii) grammatical properties of the predicates determine the scale of comparison—plural marked NPs and habitual VPs are compared on a scale of cardinality, whereas mass NPs and perfective (atelic) VPs are (often) compared along non-cardinal, though monotonic, scales. Taken together, our findings confirm and strengthen parallels that have independently been drawn between the nominal and verbal domains. In addition, our discussion and data, drawn from English, Spanish, and Bulgarian, suggest that the semantic contribution of "more" can be given a uniform analysis.
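The uniform analysis described above can be caricatured in code: a single comparative relation, with the scale of measurement supplied by the grammatical properties of the predicate. The following toy sketch is illustrative only; the predicate labels and measure choices are our assumptions, not the authors' formalism.

```python
# Toy model of a uniform semantics for "more": one comparison relation,
# with the scale chosen by the predicate's grammatical properties.
# Predicate types and measures are illustrative assumptions only.

def measure(denotation, predicate_type):
    """Pick a monotonic measure based on grammatical properties."""
    if predicate_type in ("plural_NP", "habitual_VP"):
        return len(denotation)   # scale of cardinality
    elif predicate_type in ("mass_NP", "atelic_VP"):
        return sum(denotation)   # e.g. volume or duration: non-cardinal but monotonic
    # singular count NPs and perfective telic VPs provide no scale
    raise ValueError("'more' cannot combine with this predicate type")

def more(a, b, predicate_type):
    """'... more A than B': compare on the grammatically determined scale."""
    return measure(a, predicate_type) > measure(b, predicate_type)

# "He drank more wine than I did": mass NP, compared by (toy) volume in ml
print(more([150, 200], [100], "mass_NP"))             # True
# "more apples than pears": plural NP, compared by cardinality
print(more(["a1", "a2", "a3"], ["a1"], "plural_NP"))  # True
```

On this sketch, the apparent ambiguity of "more" reduces to which measure the grammar makes available, which is the intuition behind the uniform analysis.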

Distributivity and modality: where "each" may go, "every" can't follow

A new syntactic account of scopal restrictions on universal quantifiers in sentences with an epistemic modal.

Linguistics

Non-ARHU Contributor(s): Michaël Gagnon, Alexis Wellwood
Von Fintel and Iatridou (2003) observed a striking pattern of scopal non-interaction between phrases headed by strong quantifiers like every and epistemically interpreted modal auxiliaries. Tancredi (2007) and Huitink (2008) observed that von Fintel and Iatridou’s proposed constraint, the Epistemic Containment Principle (ECP), does not apply uniformly: it does not apply to strong quantifiers headed by each. We consider the ECP effect in light of the differential behavior of each and every in the environment of wh-, negative, and generic operators as described by Beghelli and Stowell (1997). Assuming that epistemic and root modals merge at two different syntactic heights (e.g. Cinque 1999) and that modals may act as unselective binders (Heim 1982), we extend Beghelli and Stowell’s topological approach to quantifier scope interactions in order to formulate a novel syntactic account of the ECP.

Poverty of the Stimulus Revisited

Countering recent critiques, Paul Pietroski and collaborators defend the idea that some invariances in human languages reflect an innate human endowment, as opposed to common experience.

Linguistics

Contributor(s): Paul Pietroski
Non-ARHU Contributor(s): Robert Berwick, Beracah Yankama, Noam Chomsky
A central goal of modern generative grammar has been to discover invariant properties of human languages that reflect 'the innate schematism of mind that is applied to the data of experience' and that 'might reasonably be attributed to the organism itself as its contribution to the task of the acquisition of knowledge'. Candidates for such invariances include the structure dependence of grammatical rules, and in particular, certain constraints on question formation. Various 'poverty of stimulus' (POS) arguments suggest that these invariances reflect an innate human endowment, as opposed to common experience: Such experience warrants selection of the grammars acquired only if humans assume, a priori, that selectable grammars respect substantive constraints. Recently, several researchers have tried to rebut these POS arguments. In response, we illustrate why POS arguments remain an important source of support for appeal to a priori structure-dependent constraints on the grammars that humans naturally acquire.

Selective learning in the acquisition of Kannada ditransitives

Even young children have a highly abstract representation of ditransitive syntax.

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s): Joshua Viau
In this paper we bring evidence from language acquisition to bear on the debate over the relative abstractness of children’s grammatical knowledge. We first identify one aspect of syntactic representation that exhibits a range of syntactic, morphological and semantic consequences both within and across languages, namely the hierarchical structure of ditransitive verb phrases. While the semantic consequences of this structure are parallel in English, Kannada, and Spanish, the word order and morphological reflexes of this structure diverge. Next we demonstrate that children learning Kannada have command of the relation between morphological form and semantic interpretation in ditransitives with respect to quantifier-variable binding. Finally, we offer a proposal on how a selective learning mechanism might succeed in identifying the appropriate structures in this domain despite the variability in surface expression.

Structured Access in Sentence Comprehension

Structural cues are favored over lexical features in access to memory for resolution of agreement and reflexive anaphora: Brian Dillon makes the point with several experiments on English and Chinese, and a parsing strategy implemented in ACT-R.

Linguistics

Non-ARHU Contributor(s): Brian W. Dillon
This thesis is concerned with the nature of memory access during the construction of long-distance dependencies in online sentence comprehension. In recent years, an intense focus on the computational challenges posed by long-distance dependencies has proven to be illuminating with respect to the characteristics of the architecture of the human sentence processor, suggesting a tight link between general memory access procedures and sentence processing routines (Lewis & Vasishth 2005; Lewis, Vasishth, & Van Dyke 2006; Wagers, Lau & Phillips 2009). The present thesis builds upon this line of research, and its primary aim is to motivate and defend the hypothesis that the parser accesses linguistic memory in an essentially structured fashion for certain long-distance dependencies. In order to make this case, I focus on the processing of reflexive and agreement dependencies, and ask whether or not non-structural information such as morphological features are used to gate memory access during syntactic comprehension. Evidence from eight experiments in a range of methodologies in English and Chinese is brought to bear on this question, providing arguments from interference effects and time-course effects that primarily syntactic information is used to access linguistic memory in the construction of certain long-distance dependencies. The experimental evidence for structured access is compatible with a variety of architectural assumptions about the parser, and I present one implementation of this idea in a parser based on the ACT-R memory architecture. In the context of such a content-addressable model of memory, the claim of structured access is equivalent to the claim that only syntactic cues are used to query memory.
I argue that structured access reflects an optimal parsing strategy in the context of a noisy, interference-prone cognitive architecture: abstract structural cues are favored over lexical feature cues for certain structural dependencies in order to minimize memory interference in online processing.

Some arguments and non-arguments for reductionist accounts of syntactic phenomena

Can psycholinguistics tell us whether a syntactic pattern is explained by grammar or by processing? Colin Phillips explores the question in relation to island constraints, agreement attraction, constraints on anaphora, and comparatives.

Linguistics

Contributor(s): Colin Phillips
Many syntactic phenomena have received competing accounts, either in terms of formal grammatical mechanisms, or in terms of independently motivated properties of language processing mechanisms (“reductionist” accounts). A variety of different types of argument have been put forward in efforts to distinguish these competing accounts. This article critically examines a number of arguments that have been offered as evidence in favour of formal or reductionist analyses, and concludes that some types of argument are more decisive than others. It argues that evidence from graded acceptability effects and from isomorphism between acceptability judgements and on-line comprehension profiles are less decisive. In contrast, clearer conclusions can be drawn from cases of overgeneration, where there is a discrepancy between acceptability judgements and the representations that are briefly constructed on-line, and from tests involving individual differences in cognitive capacity. Based on these arguments, the article concludes that a formal grammatical account is better supported in some domains, and that a reductionist account fares better in other domains. Phenomena discussed include island constraints, agreement attraction, constraints on anaphora, and comparatives.


Syntactic and Semantic Predictors of Tense in Hindi: An ERP Investigation

Brian Dillon and Colin Phillips find different ERP signals for a grammatical error, depending on whether its detection was based on semantic versus morphosyntactic information.

Linguistics

Contributor(s): Colin Phillips
Non-ARHU Contributor(s): Brian Dillon, Andrew Nevins, Alison C. Austin
Although there is broad agreement that many ERP components reflect error signals generated during an unexpected linguistic event, there are at least two distinct aspects of the process that the ERP signals may reflect. The first is the content of an error, which is the local discrepancy between an observed form and any expectations about upcoming forms, without any reference to why those expectations were held. The second aspect is the cause of an error, which is a context-aware analysis of why the error arose. The current study examines the processes involved in prediction of past tense marking on verbal morphology in Hindi. This is a case where an error with the same local characteristics can arise from very different cues, one syntactic in origin (ergative case marking), and the other semantic in origin (a past tense adverbial). Results suggest that the parser does indeed track the cause in addition to the content of errors. Despite the fact that the critical manipulation of verb tense marking was identical across cue types, the nature of the cue led to distinct patterns of ERPs in response to anomalous verbal morphology. When verb tense was predicted based upon semantic cues, an incorrect future tense form elicited an early negativity in the 200-400 ms interval with a posterior distribution. In contrast, when verb tense was predicted based upon morphosyntactic cues, an incorrect future tense form elicited a right-lateralized anterior negativity (RAN) during the 300-500 ms interval, as well as a P600 response with a broad distribution.


Sentence and Word Complexity

Do we learn different kinds of linguistic structure differently?

Linguistics

Contributor(s): William Idsardi
Non-ARHU Contributor(s): Jeffrey Heinz
Our understanding of human learning is increasingly informed by findings from multiple fields—psychology, neuroscience, computer science, linguistics, and education. A convergence of insights is forging a “new science of learning” within cognitive science, which promises to play a key role in developing intelligent machines (1, 2). A long-standing fundamental issue in theories of human learning is whether there are specialized learning mechanisms for certain tasks or spheres of activity (domains). For example, is learning how to open a door (turning the handle before pulling) the same kind of “learning” as putting up and taking down scaffolding (where disassembly must be done in the reverse order of assembly)? Surprisingly, this issue plays out within the domain of human language.
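The door/scaffolding contrast can be glossed computationally: opening a door is a fixed local sequence, while scaffolding demands take-down in the exact reverse of put-up order, a stack-like dependency that no fixed-length local rule captures. The sketch below is our illustrative gloss of that contrast, not the authors' formal proposal.

```python
# Two kinds of "order" knowledge from the examples above.
# The action names are illustrative assumptions.

def door_ok(actions):
    """A strictly local check: 'turn' must immediately precede 'pull'."""
    return actions == ["turn", "pull"]

def scaffold_ok(assembly, disassembly):
    """A LIFO check: take-down must mirror put-up, whatever its length."""
    return disassembly == list(reversed(assembly))

print(door_ok(["turn", "pull"]))                                    # True
print(door_ok(["pull", "turn"]))                                    # False
print(scaffold_ok(["base", "mid", "top"], ["top", "mid", "base"]))  # True
print(scaffold_ok(["base", "mid", "top"], ["base", "mid", "top"]))  # False
```

The first check only ever inspects a bounded window of actions; the second must remember an unboundedly long assembly history, which is the kind of complexity difference the piece suggests separates word-level from sentence-level structure.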

Multi-Level Audio-Visual Interactions in Speech and Language Perception

Visual information may impact auditory processing, as in the McGurk effect. Ariane Rhone investigates.

Linguistics

Non-ARHU Contributor(s): Ariane Rhone
That we perceive our environment as a unified scene rather than individual streams of auditory, visual, and other sensory information has recently provided motivation to move past the long-held tradition of studying these systems separately. Although they are each unique in their transduction organs, neural pathways, and cortical primary areas, the senses are ultimately merged in a meaningful way which allows us to navigate the multisensory world. Investigating how the senses are merged has become an increasingly wide field of research in recent decades, with the introduction and increased availability of neuroimaging techniques. Areas of study range from multisensory object perception to cross-modal attention, multisensory interactions, and integration. This thesis focuses on audio-visual speech perception, with special focus on facilitatory effects of visual information on auditory processing. When visual information is concordant with auditory information, it provides an advantage that is measurable in behavioral response times and evoked auditory fields (Chapter 3) and in increased entrainment to multisensory periodic stimuli reflected by steady-state responses (Chapter 4). When the audio-visual information is incongruent, the combination can often, but not always, combine to form a third, non-physically present percept (known as the McGurk effect). This effect is investigated (Chapter 5) using real word stimuli. McGurk percepts were not robustly elicited for a majority of stimulus types, but patterns of responses suggest that the physical and lexical properties of the auditory and visual stimulus may affect the likelihood of obtaining the illusion. Together, these experiments add to the growing body of knowledge that suggests that audio-visual interactions occur at multiple stages of processing.