
Syntax


Birds fly, fish swim, humans speak. We have a capacity to combine expressions into unboundedly large linguistic structures (sentences and phrases) that carry a specific form and a specific meaning. As the number of such structures is in principle infinite, there must be recursive procedures that define these complex objects. Syntax studies these rule systems — grammars — and does so in three ways. It seeks to characterize grammars of particular languages and how they differ (e.g. how questions are formed in English versus Chinese); to describe the universal properties that human grammars have as a matter of biological design (e.g. why no human grammars have mirror image rules); and, most recently, to explain why the universal properties we discover have the particular character they do.
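To make the idea of a recursive procedure concrete, here is a minimal sketch in Python (a toy context-free grammar invented for illustration, not any particular analysis of English): because one rule can reintroduce the sentence symbol S inside a verb phrase, a finite rule system generates an unbounded set of sentences.

```python
import random

# A toy grammar, purely illustrative. Because a VP can contain
# "that S", the sentence symbol S reappears inside its own
# expansion: a finite set of rules licenses infinitely many
# sentences of unbounded length.
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"]],
    "VP":  [["Vi"], ["Vt", "that", "S"]],   # the recursive step
    "Det": [["the"], ["a"]],
    "N":   [["linguist"], ["student"]],
    "Vi":  [["left"], ["slept"]],
    "Vt":  [["thinks"], ["knows"]],
}

def generate(symbol="S"):
    """Expand a symbol by recursively expanding a randomly chosen rule."""
    if symbol not in GRAMMAR:                  # terminal: an actual word
        return [symbol]
    expansion = random.choice(GRAMMAR[symbol])
    return [word for part in expansion for word in generate(part)]

print(" ".join(generate()))
# e.g. "the linguist thinks that a student slept"
```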
 
The syntax group engages in all three kinds of research, with special emphasis on the third, typically minimalist question. Empirically, the group has done extensive work on case, agreement, ellipsis, movement and islands, control, anaphoric binding, applicative constructions, morphosyntax, linearization, and quantifier scope, among other topics. Furthermore, while we aim to be at the forefront of syntactic theory (particularly within the minimalist program), in our classes and in our research we also draw insight from earlier generative models developed over the past 60 years.
 
The Syntax/Semantics Lab meets once or twice a month, bringing together students, faculty, postdocs and visitors to discuss works in progress.

Primary Faculty

Tonia Bleam

Senior Lecturer and Director of Undergraduate Studies, Linguistics
Member, Maryland Language Science Center

1401 E Marie Mount Hall
College Park, MD 20742

(301) 405-4930

Aron Hirsch

Assistant Professor, Linguistics

(301) 405-7002

Juan Uriagereka

Professor, School of Languages, Literatures, and Cultures
Professor, Spanish and Portuguese
Professor, Linguistics
Member, Maryland Language Science Center

4225 Jiménez Hall
College Park, MD 20742

Secondary Faculty

Omar Agha

Assistant Professor, Linguistics

Marie Mount Hall
College Park, MD 20742

Valentine Hacquard

Professor, Linguistics
Affiliate Professor, Philosophy
Member, Maryland Language Science Center

1401 F Marie Mount Hall
College Park, MD 20742

(301) 405-4935

Jeffrey Lidz

Professor and Chair, Linguistics
Member, Maryland Language Science Center

1413 Marie Mount Hall
College Park, MD 20742

(301) 405-8220

Kate Mooney

Assistant Professor, Linguistics
Member, Maryland Language Science Center

Marie Mount Hall
College Park, MD 20742

Colin Phillips

Professor, Distinguished Scholar-Teacher, Linguistics
Member, Maryland Language Science Center

Director, Language Science Center

1413 F Marie Mount Hall
College Park, MD 20742

(301) 405-3082

Alexander Williams

Associate Professor, Linguistics
Associate Professor, Philosophy
Member, Maryland Language Science Center

1401 D Marie Mount Hall
College Park, MD 20742

(301) 405-1607

Emeritus Faculty

Norbert Hornstein

Professor Emeritus, Linguistics

3416 G Marie Mount Hall
College Park, MD 20742

(301) 405-4932

Howard Lasnik

Professor Emeritus, Linguistics
Member, Maryland Language Science Center

Maria Polinsky

Professor Emerita, Linguistics
Affiliate Faculty, Latin American and Caribbean Studies Center

1417 A Marie Mount Hall
College Park, MD 20742

Moving away from lexicalism in psycho- and neuro-linguistics



Contributor(s): Ellen Lau, Alex Krauska

In standard models of language production or comprehension, the elements that are retrieved from memory and combined into a syntactic structure are “lemmas” or “lexical items.” Such models implicitly take a “lexicalist” approach, which assumes that lexical items store meaning, syntax, and form together, that syntactic and lexical processes are distinct, and that syntactic structure does not extend below the word level. Across the last several decades, linguistic research examining a typologically diverse set of languages has provided strong evidence against this approach. These findings suggest that syntactic processes apply both above and below the “word” level, and that both meaning and form are partially determined by the syntactic context. This has significant implications for psychological and neurological models of language processing, as well as for the way that we understand different types of aphasia and other language disorders. Because of their lexicalist assumptions, these models struggle to account for many kinds of sentences that speakers produce and comprehend, in a variety of languages including English. Here we focus on language production as a case study. In order to move away from lexicalism in psycho- and neuro-linguistics, it is not enough to simply update the syntactic representations of words or phrases; the processing algorithms involved in language production are constrained by the lexicalist representations that they operate on, and thus also need to be reimagined. We provide an overview of the arguments against lexicalism, discuss how lexicalist assumptions are represented in models of language production, and examine the types of phenomena that they struggle to account for as a consequence. We also outline what a non-lexicalist alternative might look like: a model that does not rely on a lemma representation, but instead represents lexical knowledge as separate mappings between (a) meaning and syntax and (b) syntax and form, with a single integrated stage for the retrieval and assembly of syntactic structure. By moving away from lexicalist assumptions, this kind of model provides better cross-linguistic coverage and aligns better with contemporary syntactic theory.
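As a rough illustration of the architectural contrast, the sketch below (in Python, with invented example entries; it is not the authors' implementation) contrasts a lexicalist lemma, which bundles meaning, syntax, and form into one stored unit, with the non-lexicalist alternative of two separate mappings, where form is resolved only once the syntactic context is known.

```python
from dataclasses import dataclass

# Lexicalist architecture: a single stored "lemma" bundles meaning,
# syntax, and form, and is retrieved as one unit.
@dataclass
class Lemma:
    meaning: str   # e.g. the concept GIVE
    syntax: str    # e.g. "V[ditransitive]"
    form: str      # e.g. "give"

# Non-lexicalist alternative (as sketched in the abstract): two
# separate mappings. Form is looked up only after the syntactic
# context is assembled, so the same meaning can surface with
# context-dependent forms.
MEANING_TO_SYNTAX = {"GIVE": "V[ditransitive]"}
SYNTAX_TO_FORM = {
    ("V[ditransitive]", "past"):    "gave",   # conditioned by context
    ("V[ditransitive]", "default"): "give",
}

def produce(concept, context="default"):
    """Map meaning -> syntax, then syntax (in context) -> form."""
    node = MEANING_TO_SYNTAX[concept]
    return SYNTAX_TO_FORM.get((node, context),
                              SYNTAX_TO_FORM[(node, "default")])

print(produce("GIVE", "past"))   # -> "gave"
print(produce("GIVE"))           # -> "give"
```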


A subject relative clause preference in a split-ergative language: ERP evidence from Georgian

Is processing subject-relative clauses easier even in an ergative language?


Contributor(s): Ellen Lau, Maria Polinsky
Non-ARHU Contributor(s): Nancy Clarke, Michaela Socolof, Rusudan Asatiani

A fascinating descriptive property of human language processing, whose explanation is still debated, is that subject-gap relative clauses are easier to process than object-gap relative clauses, across a broad range of languages with different properties. However, recent work suggests that this generalization does not hold in Basque, an ergative language, and has motivated an alternative generalization in which the preference is for gaps in morphologically unmarked positions—subjects in nominative-accusative languages, and objects and intransitive subjects in ergative-absolutive languages. Here we examined whether this generalization extends to another ergative-absolutive language, Georgian. ERP and self-paced reading results show a large anterior negativity and slower reading times when a relative clause is disambiguated to an object relative rather than a subject relative. These data thus suggest that in at least some ergative-absolutive languages, the classic descriptive generalization—that object relative clauses are more costly than subject relative clauses—still holds.


Parser-Grammar Transparency and the Development of Syntactic Dependencies

Learning a grammar is sufficient for learning to parse.


Contributor(s): Jeffrey Lidz

A fundamental question in psycholinguistics concerns how grammatical structure contributes to real-time sentence parsing and understanding. While many argue that grammatical structure is only loosely related to on-line parsing, others hold the view that the two are tightly linked. Here, I use the incremental growth of grammatical structure in developmental time to demonstrate that as new grammatical knowledge becomes available to children, they use that knowledge in their incremental parsing decisions. Given the tight link between the acquisition of new knowledge and the use of that knowledge in recognizing sentence structure, I argue in favor of a tight link between grammatical structure and parsing mechanics.
