Research

Research at our top-ranked department spans syntax, semantics, phonology, language acquisition, computational linguistics, psycholinguistics, and neurolinguistics.

Connections between our core competencies are strong, with theoretical, experimental and computational work typically pursued in tandem.

A network of collaboration at all levels sustains a research climate that is both vigorous and friendly. Here new ideas develop in conversation, stimulated by the steady activity of our labs and research groups, frequent student meetings with faculty, regular talks by local and invited scholars, and collaborations with the broader University of Maryland language science community, the largest and most integrated language science research community in North America.

Parser-Grammar Transparency and the Development of Syntactic Dependencies

Learning a grammar is sufficient for learning to parse.

Linguistics

Contributor(s): Jeffrey Lidz

A fundamental question in psycholinguistics concerns how grammatical structure contributes to real-time sentence parsing and understanding. While many argue that grammatical structure is only loosely related to on-line parsing, others hold the view that the two are tightly linked. Here, I use the incremental growth of grammatical structure in developmental time to demonstrate that as new grammatical knowledge becomes available to children, they use that knowledge in their incremental parsing decisions. Given the tight link between the acquisition of new knowledge and the use of that knowledge in recognizing sentence structure, I argue in favor of a tight link between grammatical structure and parsing mechanics.

On substance and Substance-Free Phonology: Where we are at and where we are going

On the abstractness of phonology.

Linguistics

Contributor(s): Alex Chabot

In this introduction [to this special issue of the journal, on substance-free phonology], I will briefly trace the development of features in phonological theory, with particular emphasis on their relationship to phonetic substance. I will show that substance-free phonology is, in some respects, the resurrection of a concept that was fundamental to early structuralist views of features as symbolic markers, whose phonological role eclipses any superficial correlates to articulatory or acoustic objects. In the process, I will highlight some of the principal questions that this epistemological tack raises, and how the articles in this volume contribute to our understanding of those questions.

Underspecification in time

Abstracting away from linear order in phonology.

Linguistics

Contributor(s): William Idsardi

Substance-free phonology or SFP (Reiss 2017) has renewed interest in the question of abstraction in phonology. Perhaps the most common form of abstraction through the absence of substance is underspecification, where some aspects of speech lack representation in memorized forms, within the phonology or in the phonetic implementation (Archangeli 1988, Keating 1988, Lahiri and Reetz 2010, among many others). The fundamental basis for phonology is argued to be a mental model of speech events in time, following Raimy (2000) and Papillon (2020). Each event can have properties (one-place predicates that are true of the event), which include the usual phonological features, and also structural entities for extended events like moras and syllables. Features can be bound together in an event, yielding segment-like properties. Pairs of events can be ordered in time by the temporal logic precedence relation represented by ‘<’. Events, features and precedence form a directed multigraph structure with edges in the graph interpreted as “maybe next”. Some infant bimodal speech perception results are examined using this framework, arguing for underspecification in time in the developing phonological representations.
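
The event-graph architecture described here lends itself to a compact data-structure sketch. The following is a minimal illustration, not code from the paper; the class, method, and feature names are assumptions for exposition. Events carry sets of one-place feature predicates, and directed edges encode the precedence relation ‘<’, read as “maybe next”:

```python
# Sketch of an event graph for phonological representations, loosely following
# the description above. All names and representational choices are
# illustrative assumptions, not the authors' implementation.
from collections import defaultdict

class EventGraph:
    def __init__(self):
        self.features = defaultdict(set)  # event id -> one-place predicates true of that event
        self.next = defaultdict(set)      # precedence edges, read as "maybe next"

    def add_event(self, eid, *feats):
        self.features[eid].update(feats)  # binding features together yields a segment-like bundle

    def precede(self, a, b):
        self.next[a].add(b)               # a < b: a may be immediately followed by b

# Toy two-event sequence with the second event left underspecified for place:
g = EventGraph()
g.add_event("e1", "+labial", "-continuant", "+voice")  # consonant-like bundle
g.add_event("e2", "+syllabic", "+voice")               # vowel, place unspecified
g.precede("e1", "e2")
g.precede("e2", "e1")  # a second edge creates a loop: linear order is underdetermined
```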

Structure

"Reconstructing Structure" delves into how natural and linguistic phenomena are organized with quasi-periodic patterns, exploring the implications for understanding human language.

College of Arts and Humanities, Linguistics

Author/Lead: Howard Lasnik, Juan Uriagereka
Publisher: MIT Press

Cover of "Structure" by Howard Lasnik and Juan Uriagereka.

Natural phenomena, including human language, are not just series of events but are organized quasi-periodically; sentences have structure, and that structure matters.

Howard Lasnik and Juan Uriagereka “were there” when generative grammar was being developed into the Minimalist Program. In this presentation of the universal aspects of human language as a cognitive phenomenon, they rationally reconstruct syntactic structure. In the process, they touch upon structure dependency and its consequences for learnability, nuanced arguments (including global ones) for structure presupposed in standard linguistic analyses, and a formalism to capture long-range correlations. For practitioners, the authors assess whether “all we need is Merge,” while for outsiders, they summarize what needs to be covered when attempting to have structure “emerge.”

Reconstructing the essential history of what is at stake when arguing for sentence scaffolding, the authors cover a range of larger issues, from the traditional computational notion of structure (the strong generative capacity of a system) and how far down into words it reaches, to whether its variants, as evident across the world's languages, can arise from non-generative systems. While their perspective stems from Noam Chomsky's work, it does so critically, separating rhetoric from results. They consider what they do to be empirical, with the formalism being only a tool to guide their research (of course, they want sharp tools that can be falsified and have predictive power). Reaching out to skeptics, they invite potential collaborations that could arise from mutual examination of one another's work, as they attempt to establish a dialogue beyond generative grammar.

Events in Semantics

Event Semantics says that clauses in natural languages are descriptions of events. Why believe this?

Linguistics, Philosophy

Contributor(s): Alexander Williams
Publisher: The Cambridge Handbook of the Philosophy of Language

Event Semantics (ES) says that clauses in natural languages are descriptions of events. Why believe this? The answer cannot be that we use clauses to talk about events, or that events are important in ontology or psychology. Other sorts of things have the same properties, but no special role in semantics. The answer must be that this view helps to explain the semantics of natural languages. But then, what is it to explain the semantics of natural languages? Here there are many approaches, differing on whether natural languages are social and objective or individual and mental; whether the semantics delivers truth values at contexts or just constraints on truth-evaluable thoughts; which inferences it should explain as formally provable, if any; and which if any grammatical patterns it should explain directly. The argument for ES will differ accordingly, as will the consequences, for ontology, psychology, or linguistics, of its endorsement. In this chapter I trace the outlines of this story, sketching four distinct arguments for the analysis that ES makes possible: with it we can treat a dependent phrase and its syntactic host as separate predicates of related or identical events. Analysis of this kind allows us to state certain grammatical generalizations, formalize patterns of entailment, provide an extensional semantics for adverbs, and most importantly to derive certain sentence meanings that are not easily derived otherwise. But in addition, it will systematically validate inferences that are unsound, if we think conventionally about events and semantics. The moral is, with ES we cannot maintain both an ordinary metaphysics and a truth-conditional semantics that is simple. Those who would accept ES, and draw conclusions about the world or how we view it, must therefore choose which concession to make. I discuss four notable choices.
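
The core analytical move can be illustrated with the textbook neo-Davidsonian treatment of adverbs (a standard example, not one drawn from the chapter itself): the verb and its dependents are rendered as separate predicates of the same event, so dropping a modifier amounts to dropping a conjunct.

\[
\textit{Brutus stabbed Caesar violently} \;\leadsto\; \exists e\,[\mathrm{stab}(e) \wedge \mathrm{Agent}(e,\mathrm{Brutus}) \wedge \mathrm{Theme}(e,\mathrm{Caesar}) \wedge \mathrm{violent}(e)]
\]

Since the representation of "Brutus stabbed Caesar" is the same formula minus the conjunct violent(e), the entailment from the modified sentence to the unmodified one is provable by conjunction elimination.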

Figuring out root and epistemic uses of modals: The role of input

How children use temporal orientation to infer which uses of modals are epistemic and which are not.

Linguistics

Contributor(s): Valentine Hacquard
Non-ARHU Contributor(s): Annemarie van Dooren *20, Anouk Dieuleveut *21, Ailís Cournane (NYU)

This paper investigates how children figure out that modals like must can be used to express both epistemic and “root” (i.e., non-epistemic) flavors. The existing acquisition literature shows that children produce modals with epistemic meanings up to a year later than with root meanings. We conducted a corpus study to examine how modality is expressed in speech to and by young children, to investigate the ways in which the linguistic input children hear may help or hinder them in uncovering the flavor flexibility of modals. Our results show that the way parents use modals may obscure the fact that they can express epistemic flavors: modals are very rarely used epistemically. Yet, children eventually figure it out; our results suggest that some do so even before age 3. To investigate how children pick up on epistemic flavors, we explore distributional cues that distinguish roots and epistemics. The semantic literature argues they differ in “temporal orientation” (Condoravdi, 2002): while epistemics can have present or past orientation, root modals tend to be constrained to future orientation (Werner, 2006; Klecha, 2016; Rullmann & Matthewson, 2018). We show that in child-directed speech, this constraint is well-reflected in the distribution of aspectual features of roots and epistemics, but that the signal might be weak given the strong usage bias towards roots. We discuss (a) what these results imply for how children might acquire adult-like modal representations, and (b) possible learning paths towards adult-like modal representations.

Naturalistic speech supports distributional learning across contexts

Infants can learn which acoustic dimensions are contrastive by attending to phonetic context.

Linguistics

Contributor(s): Naomi Feldman
Non-ARHU Contributor(s): Kasia Hitczenko *19

At birth, infants discriminate most of the sounds of the world’s languages, but by age 1, infants become language-specific listeners. This has generally been taken as evidence that infants have learned which acoustic dimensions are contrastive, or useful for distinguishing among the sounds of their language(s), and have begun focusing primarily on those dimensions when perceiving speech. However, speech is highly variable, with different sounds overlapping substantially in their acoustics, and after decades of research, we still do not know what aspects of the speech signal allow infants to differentiate contrastive from noncontrastive dimensions. Here we show that infants could learn which acoustic dimensions of their language are contrastive, despite the high acoustic variability. Our account is based on the cross-linguistic fact that even sounds that overlap in their acoustics differ in the contexts they occur in. We predict that this should leave a signal that infants can pick up on and show that acoustic distributions indeed vary more by context along contrastive dimensions compared with noncontrastive dimensions. By establishing this difference, we provide a potential answer to how infants learn about sound contrasts, a question whose answer in natural learning environments has remained elusive.
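
The proposed cue can be made concrete with a toy simulation (an illustrative sketch under invented parameters, not the study's data or analysis code): along a contrastive dimension the acoustic distribution shifts across phonetic contexts, while along a noncontrastive dimension it does not, so comparing context-conditioned distributions separates the two.

```python
# Toy illustration of distributional learning across contexts. All values are
# invented for exposition; the point is only that context-conditioned
# distributions differ more along contrastive dimensions.
import numpy as np

rng = np.random.default_rng(0)

def context_variability(samples_by_context):
    """Spread of the per-context means: a crude index of 'varies by context'."""
    return np.std([np.mean(s) for s in samples_by_context])

# Contrastive dimension (e.g., something VOT-like): the two contexts favor
# different categories, so the distribution shifts with context.
contrastive = [rng.normal(15, 10, 500), rng.normal(60, 10, 500)]

# Noncontrastive dimension: the same distribution shows up in both contexts.
noncontrastive = [rng.normal(40, 25, 500), rng.normal(40, 25, 500)]

print(context_variability(contrastive))     # large: varies by context
print(context_variability(noncontrastive))  # near zero: stable across contexts
```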

Finding the force: How children discern possibility and necessity modals

How children discern possibility and necessity modals

Linguistics

Contributor(s): Valentine Hacquard
Non-ARHU Contributor(s): Anouk Dieuleveut *21, Annemarie van Dooren *20, Ailís Cournane (NYU)

This paper investigates when and how children figure out the force of modals: that possibility modals (e.g., can/might) express possibility, and necessity modals (e.g., must/have to) express necessity. Modals raise a classic subset problem: given that necessity entails possibility, what prevents learners from hypothesizing possibility meanings for necessity modals? Three solutions to such subset problems can be found in the literature: the first is for learners to rely on downward-entailing (DE) environments (Gualmini and Schwarz in J. Semant. 26(2):185–215, 2009); the second is a bias for strong (here, necessity) meanings; the third is for learners to rely on pragmatic cues stemming from the conversational context (Dieuleveut et al. in Proceedings of the 2019 Amsterdam Colloquium, pp. 111–122, 2019a; Rasin and Aravind in Nat. Lang. Semant. 29:339–375, 2020). This paper assesses the viability of each of these solutions by examining the modals used in speech to and by 2-year-old children, through a combination of corpus studies and experiments testing the guessability of modal force based on their context of use. Our results suggest that, given the way modals are used in speech to children, the first solution is not viable and the second is unnecessary. Instead, we argue that the conversational context in which modals occur is highly informative as to their force and sufficient, in principle, to sidestep the subset problem. Our child results further suggest an early mastery of possibility—but not necessity—modals and show no evidence for a necessity bias.

Lexicalization in the developing parser

Children make syntactic predictions based on the syntactic distributions of specific verbs, but do not assume that the patterns can be generalized.

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s): Aaron Steven White *15 (University of Rochester)

We use children's noun learning as a probe into the nature of their syntactic prediction mechanism and the statistical knowledge on which that prediction mechanism is based. We focus on verb-based predictions, considering two possibilities: children's syntactic predictions might rely on distributional knowledge about specific verbs (i.e., they might be lexicalized), or they might rely on distributional knowledge that is general to all verbs. In an intermodal preferential looking experiment, we establish that verb-based predictions are lexicalized: children encode the syntactic distributions of specific verbs and use those distributions to make predictions, but they do not assume that these distributions hold of verbs in general.
