
Research

Research at our top-ranked department spans syntax, semantics, phonology, language acquisition, computational linguistics, psycholinguistics and neurolinguistics. 

Connections between our core competencies are strong, with theoretical, experimental and computational work typically pursued in tandem.

A network of collaboration at all levels sustains a research climate that is both vigorous and friendly. New ideas develop here in conversation, stimulated by the steady activity of our labs and research groups, frequent student meetings with faculty, regular talks by local and invited scholars, and collaborations with the broader University of Maryland language science community, the largest and most integrated language science research community in North America.


Height-Relative Determination of (Non-Root) Modal Flavor: Evidence from Hindi

The Hindi future marker "gaa" has a variety of interpretations. Dave Kush shows how these correspond to different syntactic contexts.

Linguistics

Non-ARHU Contributor(s): Dave Kush
In this paper I pursue the idea that a modal's flavor is determined by its attachment height. I discuss the various interpretations of the Hindi future marker "gaa", which I take to be a modal, and propose that modal flavor is indirectly constrained by the semantic type of the modal's prejacent, rather than being determined solely by contextual assignment. Modal Bases are re-envisioned as comprising different types of alternatives (worlds, world-time pairs, etc.), rather than just sets of worlds determined by different accessibility relations. The correlation between flavor and attachment site falls out as a consequence of the different types of alternatives that Modal Bases make available for semantic computation.

Seeing what you mean, mostly

How is perception of numerosity related to the meaning of words like "most"? Paul Pietroski, Jeff Lidz and collaborators explore the question experimentally.

Linguistics

Non-ARHU Contributor(s): Tim Hunter, Darko Odic, Justin Halberda
Publisher: Emerald
Idealizing, a speaker endorses or rejects a (declarative) sentence S in a situation s based on how she understands S and represents s. But relatively little is known about how speakers represent situations. Linguists can construct and test initial models of semantic competence, by supposing that sentences have representation-neutral truth conditions, which speakers represent somehow; cf. Marr's (1982) Level One description of a function computed, as opposed to a Level Two description of an algorithm that computes outputs given inputs. But this leaves interesting questions unsettled. One would like to find cases in which S can be held fixed, while modifying s in ways that have predictable effects on the nonlinguistic cognitive systems recruited to evaluate S. Extant work in perceptual psychology offers opportunities for eliciting judgments from speakers in highly controlled settings where something is known about the cognitive systems that speakers recruit when endorsing or rejecting a target sentence. In such settings, behavioral data can reveal aspects of how the human language system interfaces with other systems of cognition that are presumably shared with other species. As an illustration, we focus on the quantificational word “most” and how perception of numerosity is related to the meaning of “Most of the dots are blue,” in the hope that studies of other perceptual systems may provide analogous opportunities for investigating how words are related to prelinguistic representations.

The adicities of thematic separation

A syntax suited to a neo-Davidsonian semantics, where each dependent is interpreted as a conjunct.

Linguistics

Non-ARHU Contributor(s): Terje Lohndal
This paper discusses whether verbs have thematic arguments or only an event variable, and presents evidence in favor of the Neo-Davidsonian position that verbs have just an event variable. Based on this evidence, the paper develops a transparent mapping hypothesis from syntax to logical form, on which each Spell-Out domain corresponds to a conjunct at logical form. The paper closes by discussing the nature of compositionality for a Conjunctivist semantics.

Interrogatives, Instructions, and I-languages: An I-Semantics for Questions

An internalist semantics for interrogative clauses, from Terje Lohndal and Paul Pietroski.

Linguistics

Contributor(s): Paul Pietroski
Non-ARHU Contributor(s): Terje Lohndal
It is often said that the meaning of an interrogative sentence is a set of answers. This raises questions about how the meaning of an interrogative is compositionally determined, especially if one adopts an I-language perspective. By contrast, we argue that I-languages generate semantic instructions (SEMs) for how to assemble concepts of a special sort and then prepare these concepts for various uses - e.g., in declaring, querying, or assembling concepts of still further complexity. We connect this abstract conception of meaning to a specific (minimalist) conception of complementizer phrase edges, with special attention to wh-questions and their relative clause counterparts. The proposed syntax and semantics illustrates a more general conception of edges and their relation to the so-called duality of semantics.

Basquing in Minimalism

Alex Drummond and Norbert Hornstein review a collection of conversations with Chomsky.

Linguistics

Contributor(s): Norbert Hornstein
Non-ARHU Contributor(s): Alex Drummond
A review of **Minds and Language: A dialogue with Noam Chomsky in the Basque Country**, edited by Massimo Piattelli-Palmarini, [Juan Uriagereka](/~juan/), and Pello Salaburu, Oxford University Press, 2010.

Measuring and comparing individuals and events

"He drank more wine than I did and also danced more than I did." Alexis Wellwood gives a unified analysis for both adnominal and adverbal "more," with Valentine Hacquard and faculty visitor Roumyana Pancheva.

Linguistics

Contributor(s): Valentine Hacquard
Non-ARHU Contributor(s): Alexis Wellwood, Roumyana Pancheva
This squib investigates parallels between nominal and verbal comparatives. Building on key insights of Hackl (2000) and Bale & Barner (2009), we show that "more" behaves uniformly when it combines with nominal and verbal predicates: (i) it cannot combine with singular count NPs or perfective telic VPs; (ii) grammatical properties of the predicates determine the scale of comparison—plural marked NPs and habitual VPs are compared on a scale of cardinality, whereas mass NPs and perfective (atelic) VPs are (often) compared along non-cardinal, though monotonic, scales. Taken together, our findings confirm and strengthen parallels that have independently been drawn between the nominal and verbal domains. In addition, our discussion and data, drawn from English, Spanish, and Bulgarian, suggest that the semantic contribution of "more" can be given a uniform analysis.

Distributivity and modality: where "each" may go, "every" can't follow

A new syntactic account of scopal restrictions on universal quantifiers in sentences with an epistemic modal.

Linguistics

Non-ARHU Contributor(s): Michaël Gagnon, Alexis Wellwood
Von Fintel and Iatridou (2003) observed a striking pattern of scopal non-interaction between phrases headed by strong quantifiers like "every" and epistemically interpreted modal auxiliaries. Tancredi (2007) and Huitink (2008) observed that von Fintel and Iatridou's proposed constraint, the Epistemic Containment Principle (ECP), does not apply uniformly: it does not apply to strong quantifier phrases headed by "each". We consider the ECP effect in light of the differential behavior of "each" and "every" in the environment of wh-, negative, and generic operators as described by Beghelli and Stowell (1997). Assuming that epistemic and root modals merge at two different syntactic heights (e.g. Cinque 1999) and that modals may act as unselective binders (Heim 1982), we extend Beghelli and Stowell's topological approach to quantifier scope interactions in order to formulate a novel syntactic account of the ECP.

Poverty of the Stimulus Revisited

Countering recent critiques, Paul Pietroski and collaborators defend the idea that some invariances in human languages reflect an innate human endowment, as opposed to common experience.

Linguistics

Contributor(s): Paul Pietroski
Non-ARHU Contributor(s): Robert Berwick, Beracah Yankama, Noam Chomsky
A central goal of modern generative grammar has been to discover invariant properties of human languages that reflect 'the innate schematism of mind that is applied to the data of experience' and that 'might reasonably be attributed to the organism itself as its contribution to the task of the acquisition of knowledge'. Candidates for such invariances include the structure dependence of grammatical rules, and in particular, certain constraints on question formation. Various 'poverty of stimulus' (POS) arguments suggest that these invariances reflect an innate human endowment, as opposed to common experience: Such experience warrants selection of the grammars acquired only if humans assume, a priori, that selectable grammars respect substantive constraints. Recently, several researchers have tried to rebut these POS arguments. In response, we illustrate why POS arguments remain an important source of support for appeal to a priori structure-dependent constraints on the grammars that humans naturally acquire.

Selective learning in the acquisition of Kannada ditransitives

Even young children have a highly abstract representation of ditransitive syntax.

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s): Joshua Viau
In this paper we bring evidence from language acquisition to bear on the debate over the relative abstractness of children’s grammatical knowledge. We first identify one aspect of syntactic representation that exhibits a range of syntactic, morphological and semantic consequences both within and across languages, namely the hierarchical structure of ditransitive verb phrases. While the semantic consequences of this structure are parallel in English, Kannada, and Spanish, the word order and morphological reflexes of this structure diverge. Next we demonstrate that children learning Kannada have command of the relation between morphological form and semantic interpretation in ditransitives with respect to quantifier-variable binding. Finally, we offer a proposal on how a selective learning mechanism might succeed in identifying the appropriate structures in this domain despite the variability in surface expression.

Structured Access in Sentence Comprehension

Structural cues are favored over lexical features in access to memory for resolution of agreement and reflexive anaphora: Brian Dillon makes the point with several experiments on English and Chinese, and a parsing strategy implemented in ACT-R.

Linguistics

Non-ARHU Contributor(s): Brian W. Dillon
This thesis is concerned with the nature of memory access during the construction of long-distance dependencies in online sentence comprehension. In recent years, an intense focus on the computational challenges posed by long-distance dependencies has proven to be illuminating with respect to the characteristics of the architecture of the human sentence processor, suggesting a tight link between general memory access procedures and sentence processing routines (Lewis & Vasishth 2005; Lewis, Vasishth, & Van Dyke 2006; Wagers, Lau & Phillips 2009). The present thesis builds upon this line of research, and its primary aim is to motivate and defend the hypothesis that the parser accesses linguistic memory in an essentially structured fashion for certain long-distance dependencies. In order to make this case, I focus on the processing of reflexive and agreement dependencies, and ask whether or not non-structural information such as morphological features are used to gate memory access during syntactic comprehension. Evidence from eight experiments in a range of methodologies in English and Chinese is brought to bear on this question, providing arguments from interference effects and time-course effects that primarily syntactic information is used to access linguistic memory in the construction of certain long-distance dependencies. The experimental evidence for structured access is compatible with a variety of architectural assumptions about the parser, and I present one implementation of this idea in a parser based on the ACT-R memory architecture. In the context of such a content-addressable model of memory, the claim of structured access is equivalent to the claim that only syntactic cues are used to query memory. I argue that structured access reflects an optimal parsing strategy in the context of a noisy, interference-prone cognitive architecture: abstract structural cues are favored over lexical feature cues for certain structural dependencies in order to minimize memory interference in online processing.