
Research

Research at our top-ranked department spans syntax, semantics, phonology, language acquisition, computational linguistics, psycholinguistics and neurolinguistics. 

Connections between our core competencies are strong, with theoretical, experimental and computational work typically pursued in tandem.

A network of collaboration at all levels sustains a research climate that is both vigorous and friendly. Here new ideas develop in conversation, stimulated by the steady activity of our labs and research groups, frequent student meetings with faculty, regular talks by local and invited scholars, and collaborations with the broader University of Maryland language science community, the largest and most integrated language science research community in North America.


Underspecification in time

Abstracting away from linear order in phonology.

Linguistics

Contributor(s): William Idsardi

Substance-free phonology or SFP (Reiss 2017) has renewed interest in the question of abstraction in phonology. Perhaps the most common form of abstraction through the absence of substance is underspecification, where some aspects of speech lack representation in memorized representations, within the phonology or in the phonetic implementation (Archangeli 1988, Keating 1988, Lahiri and Reetz 2010 among many others). The fundamental basis for phonology is argued to be a mental model of speech events in time, following Raimy (2000) and Papillon (2020). Each event can have properties (one-place predicates that are true of the event), which include the usual phonological features, and also structural entities for extended events like moras and syllables. Features can be bound together in an event, yielding segment-like properties. Pairs of events can be ordered in time by the temporal logic precedence relation represented by ‘<’. Events, features and precedence form a directed multigraph structure with edges in the graph interpreted as “maybe next”. Some infant bimodal speech perception results are examined using this framework, arguing for underspecification in time in the developing phonological representations.
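The event-and-precedence representation described above lends itself to a small data-structure sketch. The following is a hypothetical illustration only (all class and variable names are invented, not taken from Raimy or Papillon): events carry one-place feature predicates, and precedence ('<') edges form a directed multigraph whose edges are read as "maybe next".

```python
from collections import defaultdict

class SpeechEventGraph:
    """Hypothetical sketch: events with features, ordered by precedence edges."""

    def __init__(self):
        self.features = defaultdict(set)     # event -> features true of it
        self.successors = defaultdict(list)  # event -> possible next events

    def add_feature(self, event, feature):
        self.features[event].add(feature)

    def add_precedence(self, earlier, later):
        # A multigraph permits parallel edges between the same pair of events.
        self.successors[earlier].append(later)

g = SpeechEventGraph()
g.add_feature("e1", "+labial")   # features bound in one event act segment-like
g.add_feature("e1", "+voice")
g.add_precedence("e1", "e2")     # e1 < e2: "maybe next"
```

Underspecification in time, on this picture, corresponds to simply leaving precedence edges out: an event with no ordering edge to a neighbor is temporally underspecified with respect to it.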


Structure: Concepts, Consequences, Interactions

Natural phenomena, including human language, are not just series of events but are organized quasi-periodically; sentences have structure, and that structure matters.

School of Languages, Literatures, and Cultures, Linguistics

Author/Lead: Juan Uriagereka, Howard Lasnik

Howard Lasnik and Juan Uriagereka “were there” when generative grammar was being developed into the Minimalist Program. In this presentation of the universal aspects of human language as a cognitive phenomenon, they rationally reconstruct syntactic structure. In the process, they touch upon structure dependency and its consequences for learnability, nuanced arguments (including global ones) for structure presupposed in standard linguistic analyses, and a formalism to capture long-range correlations. For practitioners, the authors assess whether “all we need is Merge,” while for outsiders, they summarize what needs to be covered when attempting to have structure “emerge.”

Reconstructing the essential history of what is at stake when arguing for sentence scaffolding, the authors cover a range of larger issues, from the traditional computational notion of structure (the strong generative capacity of a system) and how far down into words it reaches, to whether its variants, as evident across the world's languages, can arise from non-generative systems. While their perspective stems from Noam Chomsky's work, it does so critically, separating rhetoric from results. They consider what they do to be empirical, with the formalism being only a tool to guide their research (of course, they want sharp tools that can be falsified and have predictive power). Reaching out to skeptics, they invite potential collaborations that could arise from mutual examination of one another's work, as they attempt to establish a dialogue beyond generative grammar.


Events in Semantics

Event Semantics says that clauses in natural languages are descriptions of events. Why believe this?

Linguistics, Philosophy

Contributor(s): Alexander Williams
Published in: The Cambridge Handbook of the Philosophy of Language

Event Semantics (ES) says that clauses in natural languages are descriptions of events. Why believe this? The answer cannot be that we use clauses to talk about events, or that events are important in ontology or psychology. Other sorts of things have the same properties, but no special role in semantics. The answer must be that this view helps to explain the semantics of natural languages. But then, what is it to explain the semantics of natural languages? Here there are many approaches, differing on whether natural languages are social and objective or individual and mental; whether the semantics delivers truth values at contexts or just constraints on truth-evaluable thoughts; which inferences it should explain as formally provable, if any; and which if any grammatical patterns it should explain directly. The argument for ES will differ accordingly, as will the consequences, for ontology, psychology, or linguistics, of its endorsement. In this chapter I trace the outlines of this story, sketching four distinct arguments for the analysis that ES makes possible: with it we can treat a dependent phrase and its syntactic host as separate predicates of related or identical events. Analysis of this kind allows us to state certain grammatical generalizations, formalize patterns of entailment, provide an extensional semantics for adverbs, and most importantly to derive certain sentence meanings that are not easily derived otherwise. But in addition, it will systematically validate inferences that are unsound, if we think conventionally about events and semantics. The moral is, with ES we cannot maintain both an ordinary metaphysics and a truth-conditional semantics that is simple. Those who would accept ES, and draw conclusions about the world or how we view it, must therefore choose which concession to make. I discuss four notable choices.


Figuring out root and epistemic uses of modals: The role of input

How children use temporal orientation to infer which uses of modals are epistemic and which are not.

Linguistics

Contributor(s): Valentine Hacquard
Non-ARHU Contributor(s): Annemarie van Dooren *20, Anouk Dieuleveut *21, Ailís Cournane (NYU)


This paper investigates how children figure out that modals like must can be used to express both epistemic and “root” (i.e. non-epistemic) flavors. The existing acquisition literature shows that children produce modals with epistemic meanings up to a year later than with root meanings. We conducted a corpus study to examine how modality is expressed in speech to and by young children, to investigate the ways in which the linguistic input children hear may help or hinder them in uncovering the flavor flexibility of modals. Our results show that the way parents use modals may obscure the fact that they can express epistemic flavors: modals are very rarely used epistemically. Yet, children eventually figure it out; our results suggest that some do so even before age 3. To investigate how children pick up on epistemic flavors, we explore distributional cues that distinguish roots and epistemics. The semantic literature argues they differ in “temporal orientation” (Condoravdi, 2002): while epistemics can have present or past orientation, root modals tend to be constrained to future orientation (Werner, 2006; Klecha, 2016; Rullmann & Matthewson, 2018). We show that in child-directed speech, this constraint is well-reflected in the distribution of aspectual features of roots and epistemics, but that the signal might be weak given the strong usage bias towards roots. We discuss (a) what these results imply for how children might acquire adult-like modal representations, and (b) possible learning paths towards adult-like modal representations.


Naturalistic speech supports distributional learning across contexts

Infants can learn which acoustic dimensions are contrastive by attending to phonetic context.

Linguistics

Contributor(s): Naomi Feldman
Non-ARHU Contributor(s): Kasia Hitczenko *19


At birth, infants discriminate most of the sounds of the world’s languages, but by age 1, infants become language-specific listeners. This has generally been taken as evidence that infants have learned which acoustic dimensions are contrastive, or useful for distinguishing among the sounds of their language(s), and have begun focusing primarily on those dimensions when perceiving speech. However, speech is highly variable, with different sounds overlapping substantially in their acoustics, and after decades of research, we still do not know what aspects of the speech signal allow infants to differentiate contrastive from noncontrastive dimensions. Here we show that infants could learn which acoustic dimensions of their language are contrastive, despite the high acoustic variability. Our account is based on the cross-linguistic fact that even sounds that overlap in their acoustics differ in the contexts they occur in. We predict that this should leave a signal that infants can pick up on and show that acoustic distributions indeed vary more by context along contrastive dimensions compared with noncontrastive dimensions. By establishing this difference, we provide a potential answer to how infants learn about sound contrasts, a question whose answer in natural learning environments has remained elusive.
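The key prediction above — that acoustic distributions vary more by context along contrastive dimensions than along noncontrastive ones — can be illustrated with a toy computation. All numbers below are invented for illustration; they are not data from the study.

```python
from statistics import mean, pvariance

# Invented measurements of one acoustic dimension, grouped by phonetic context.
contrastive = {"ctx_a": [10, 12, 11], "ctx_b": [60, 62, 61]}     # means shift with context
noncontrastive = {"ctx_a": [30, 31, 29], "ctx_b": [31, 30, 32]}  # means stay put

def between_context_variance(values_by_context):
    """Variance of the per-context means: large when context shifts the distribution."""
    return pvariance([mean(vals) for vals in values_by_context.values()])

# The contrastive dimension's distribution moves with context far more.
print(between_context_variance(contrastive) >
      between_context_variance(noncontrastive))  # True
```

A learner tracking only this kind of context-conditioned statistic could, in principle, separate contrastive from noncontrastive dimensions without ever being told which sounds contrast.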


Finding the force: How children discern possibility and necessity modals

How children discern possibility and necessity modals

Linguistics

Contributor(s): Valentine Hacquard
Non-ARHU Contributor(s): Anouk Dieuleveut *21, Annemarie van Dooren *20, Ailís Cournane (NYU)


This paper investigates when and how children figure out the force of modals: that possibility modals (e.g., can/might) express possibility, and necessity modals (e.g., must/have to) express necessity. Modals raise a classic subset problem: given that necessity entails possibility, what prevents learners from hypothesizing possibility meanings for necessity modals? Three solutions to such subset problems can be found in the literature: the first is for learners to rely on downward-entailing (DE) environments (Gualmini and Schwarz in J. Semant. 26(2):185–215, 2009); the second is a bias for strong (here, necessity) meanings; the third is for learners to rely on pragmatic cues stemming from the conversational context (Dieuleveut et al. in Proceedings of the 2019 Amsterdam Colloquium, pp. 111–122, 2019a; Rasin and Aravind in Nat. Lang. Semant. 29:339–375, 2020). This paper assesses the viability of each of these solutions by examining the modals used in speech to and by 2-year-old children, through a combination of corpus studies and experiments testing the guessability of modal force based on their context of use. Our results suggest that, given the way modals are used in speech to children, the first solution is not viable and the second is unnecessary. Instead, we argue that the conversational context in which modals occur is highly informative as to their force and sufficient, in principle, to sidestep the subset problem. Our child results further suggest an early mastery of possibility—but not necessity—modals and show no evidence for a necessity bias.
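The subset problem follows from standard possible-worlds semantics for modals: whenever a necessity claim is true, the corresponding possibility claim is true as well (given a non-empty set of accessible worlds), so a learner who wrongly assigns a possibility meaning to must never hears directly contradicting evidence. A toy sketch, with invented worlds and propositions (a proposition is modeled as the set of worlds where it holds):

```python
def necessity(prop, accessible):
    # "must p": p holds in every accessible world (require a non-empty set)
    return len(accessible) > 0 and all(w in prop for w in accessible)

def possibility(prop, accessible):
    # "can p": p holds in at least one accessible world
    return any(w in prop for w in accessible)

accessible = {"w1", "w2"}
p = {"w1", "w2", "w3"}   # true in every accessible world
q = {"w1"}               # true in only one accessible world

# Necessity entails possibility, so every true use of "must p" is also
# consistent with a (wrong) possibility reading — the subset problem.
assert necessity(p, accessible) and possibility(p, accessible)
assert possibility(q, accessible) and not necessity(q, accessible)
```

The converse fails (q above is possible but not necessary), which is why the entailment runs in only one direction and the learning asymmetry arises.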


Lexicalization in the developing parser

Children make syntactic predictions based on the syntactic distributions of specific verbs, but do not assume that those patterns generalize to other verbs.

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s): Aaron Steven White *15 (University of Rochester)


We use children's noun learning as a probe into the nature of their syntactic prediction mechanism and the statistical knowledge on which that prediction mechanism is based. We focus on verb-based predictions, considering two possibilities: children's syntactic predictions might rely on distributional knowledge about specific verbs (i.e., they might be lexicalized), or they might rely on distributional knowledge that is general to all verbs. In an intermodal preferential looking experiment, we establish that verb-based predictions are lexicalized: children encode the syntactic distributions of specific verbs and use those distributions to make predictions, but they do not assume that those distributions generalize to verbs as a whole.


Semantics and Pragmatics in a Modular Mind

Is semantics a modular part of the mind?

Philosophy

Non-ARHU Contributor(s): Michael McCourt *21


This dissertation asks how we should understand the distinction between semantic and pragmatic aspects of linguistic understanding within the framework of mentalism, on which the study of language is a branch of psychology. In particular, I assess a proposal on which the distinction between semantics and pragmatics is ultimately grounded in the modularity or encapsulation of semantic processes. While pragmatic processes involved in understanding the communicative intentions of a speaker are non-modular and highly inferential, semantic processes involved in understanding the meaning of an expression are modular and encapsulated from top-down influences of general cognition. The encapsulation hypothesis for semantics is attractive, since it would allow the semantics-pragmatics distinction to cut a natural joint in the communicating mind. However, as I argue, the case in favor of the modularity hypothesis for semantics is not particularly strong. Many of the arguments offered in its support are unsuccessful. I therefore carefully assess the relevant experimental record, in dialogue with parallel debates about modular processing in other domains, such as vision. I point to several observations that raise a challenge for the encapsulation hypothesis for semantics, and I recommend consideration of alternative notions of modularity. However, I also demonstrate some principled strategies that proponents of the encapsulation hypothesis might deploy in order to meet the empirical challenge that I raise. I conclude that the available data neither falsify nor support the modularity hypothesis for semantics, and accordingly I develop several strategies that might be pursued in future work. It has also been argued that the encapsulation of semantic processing would entail (or otherwise strongly recommend) a particular approach to word meaning. However, drawing on the literature on polysemy—a phenomenon whereby a single word can be used to express several related concepts, but not due to generality—I show that such arguments are largely unsuccessful. Again, I develop strategies that might be used, going forward, to adjudicate among the options regarding word meaning within a mentalistic linguistics.


Logic and the lexicon: Insights from modality

Dividing semantics from pragmatics in acquiring the modal vocabulary.

Linguistics

Contributor(s): Valentine Hacquard

This chapter focuses on a special instance of logical vocabulary, namely modal words, like “might” or “must,” which express possibility and necessity. Modal statements involve a complex interplay of morphology, syntax, semantics, and pragmatics, which make it particularly challenging to identify what lexical meanings the modal words encode. This chapter surveys how possibilities and necessities are expressed in natural language, with an eye toward cross-linguistic similarity and variation, and introduces the framework that formal semantics inherits from modal logic to analyze modal statements. It then turns to the challenges—for both the semanticist and for the child learner—of figuring out the right division of labor between semantics and pragmatics for modal statements, and the exact lexical contributions of the modal words themselves.


Children's use of syntax in word learning

How children use syntax as evidence for word meaning.

Linguistics

Contributor(s): Jeffrey Lidz

This chapter investigates the role that syntax plays in guiding the acquisition of word meaning. It reviews data that reveal how children can use the syntactic distribution of a word as evidence for its meaning and discusses the principles of grammar that license such inferences. We delineate the role of thematic linking generalizations in the acquisition of action verbs, arguing that children use specific links between subject and agent and between object and patient to guide initial verb learning. In the domain of attitude verbs, we show that children’s knowledge of abstract links between subclasses of attitude verbs and their syntactic distribution enables learners to identify the meanings of their initial attitude verbs, such as think and want. Finally, we show that syntactic bootstrapping effects are not limited to verb learning but extend across the lexicon.
