
Research

Research at our top-ranked department spans syntax, semantics, phonology, language acquisition, computational linguistics, psycholinguistics and neurolinguistics. 

Connections between our core competencies are strong, with theoretical, experimental and computational work typically pursued in tandem.

A network of collaboration at all levels sustains a research climate that is both vigorous and friendly. Here new ideas develop in conversation, stimulated by the steady activity of our labs and research groups, frequent student meetings with faculty, regular talks by local and invited scholars, and collaborations with the broader University of Maryland language science community, the largest and most integrated language science research community in North America.


Events in Semantics

Event Semantics says that clauses in natural languages are descriptions of events. Why believe this?

Linguistics, Philosophy

Contributor(s): Alexander Williams
Publisher: The Cambridge Handbook of the Philosophy of Language

Event Semantics (ES) says that clauses in natural languages are descriptions of events. Why believe this? The answer cannot be that we use clauses to talk about events, or that events are important in ontology or psychology. Other sorts of things have the same properties, but no special role in semantics. The answer must be that this view helps to explain the semantics of natural languages. But then, what is it to explain the semantics of natural languages? Here there are many approaches, differing on whether natural languages are social and objective or individual and mental; whether the semantics delivers truth values at contexts or just constraints on truth-evaluable thoughts; which inferences it should explain as formally provable, if any; and which if any grammatical patterns it should explain directly. The argument for ES will differ accordingly, as will the consequences, for ontology, psychology, or linguistics, of its endorsement. In this chapter I trace the outlines of this story, sketching four distinct arguments for the analysis that ES makes possible: with it we can treat a dependent phrase and its syntactic host as separate predicates of related or identical events. Analysis of this kind allows us to state certain grammatical generalizations, formalize patterns of entailment, provide an extensional semantics for adverbs, and most importantly to derive certain sentence meanings that are not easily derived otherwise. But in addition, it will systematically validate inferences that are unsound, if we think conventionally about events and semantics. The moral is, with ES we cannot maintain both an ordinary metaphysics and a truth-conditional semantics that is simple. Those who would accept ES, and draw conclusions about the world or how we view it, must therefore choose which concession to make. I discuss four notable choices.

Read More about Events in Semantics

Figuring out root and epistemic uses of modals: The role of input

How children use temporal orientation to infer which uses of modals are epistemic and which are not.

Linguistics

Contributor(s): Valentine Hacquard
Non-ARHU Contributor(s):

Annemarie van Dooren *20, Anouk Dieuleveut *21, Ailís Cournane (NYU)


This paper investigates how children figure out that modals like must can be used to express both epistemic and “root” (i.e. non-epistemic) flavors. The existing acquisition literature shows that children produce modals with epistemic meanings up to a year later than with root meanings. We conducted a corpus study to examine how modality is expressed in speech to and by young children, to investigate the ways in which the linguistic input children hear may help or hinder them in uncovering the flavor flexibility of modals. Our results show that the way parents use modals may obscure the fact that they can express epistemic flavors: modals are very rarely used epistemically. Yet, children eventually figure it out; our results suggest that some do so even before age 3. To investigate how children pick up on epistemic flavors, we explore distributional cues that distinguish roots and epistemics. The semantic literature argues they differ in “temporal orientation” (Condoravdi, 2002): while epistemics can have present or past orientation, root modals tend to be constrained to future orientation (Werner, 2006; Klecha, 2016; Rullmann & Matthewson, 2018). We show that in child-directed speech, this constraint is well reflected in the distribution of aspectual features of roots and epistemics, but that the signal might be weak given the strong usage bias towards roots. We discuss (a) what these results imply for how children might acquire adult-like modal representations, and (b) possible learning paths towards adult-like modal representations.
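The temporal-orientation cue described above can be illustrated with a toy tally. The counts below are invented for illustration, not the paper's corpus data: present/past orientation raises the probability of an epistemic reading well above its low baseline, yet the strong usage bias towards roots keeps the raw signal modest.

```python
# Toy illustration (hypothetical counts, not the paper's data):
# tally modal uses by the temporal orientation of their complement
# and estimate P(flavor | orientation) with a simple count ratio.
counts = {
    # (flavor, orientation): count
    ("root", "future"): 180,
    ("root", "present_or_past"): 20,
    ("epistemic", "future"): 2,
    ("epistemic", "present_or_past"): 8,
}

def p_flavor_given_orientation(flavor, orientation):
    total = sum(c for (f, o), c in counts.items() if o == orientation)
    return counts[(flavor, orientation)] / total

baseline = sum(c for (f, o), c in counts.items() if f == "epistemic") / sum(counts.values())

# Present/past orientation is informative (8/28 ≈ 0.29, vs. a baseline
# of ≈ 0.05), but roots still dominate even in that frame.
print(p_flavor_given_orientation("epistemic", "present_or_past"), baseline)
```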

Read More about Figuring out root and epistemic uses of modals: The role of input

Naturalistic speech supports distributional learning across contexts

Infants can learn which acoustic dimensions are contrastive by attending to phonetic context.

Linguistics

Contributor(s): Naomi Feldman
Non-ARHU Contributor(s):

Kasia Hitczenko *19


At birth, infants discriminate most of the sounds of the world’s languages, but by age 1, infants become language-specific listeners. This has generally been taken as evidence that infants have learned which acoustic dimensions are contrastive, or useful for distinguishing among the sounds of their language(s), and have begun focusing primarily on those dimensions when perceiving speech. However, speech is highly variable, with different sounds overlapping substantially in their acoustics, and after decades of research, we still do not know what aspects of the speech signal allow infants to differentiate contrastive from noncontrastive dimensions. Here we show that infants could learn which acoustic dimensions of their language are contrastive, despite the high acoustic variability. Our account is based on the cross-linguistic fact that even sounds that overlap in their acoustics differ in the contexts they occur in. We predict that this should leave a signal that infants can pick up on and show that acoustic distributions indeed vary more by context along contrastive dimensions compared with noncontrastive dimensions. By establishing this difference, we provide a potential answer to how infants learn about sound contrasts, a question whose answer in natural learning environments has remained elusive.
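The key prediction, that context-wise acoustic distributions separate more along contrastive than noncontrastive dimensions, can be illustrated with simulated data. This is a toy sketch with invented numbers, not the study's corpus or method:

```python
import random
import statistics

random.seed(0)

def sample(mean, sd, n=500):
    return [random.gauss(mean, sd) for _ in range(n)]

# Toy data: two phonetic contexts (e.g. different following vowels).
# Along the contrastive dimension, the mixture of sounds differs by
# context, so the context-wise distributions separate; along the
# noncontrastive dimension they do not.
contrastive = {"ctx_a": sample(10.0, 2.0), "ctx_b": sample(16.0, 2.0)}
noncontrastive = {"ctx_a": sample(10.0, 2.0), "ctx_b": sample(10.5, 2.0)}

def between_context_gap(dim):
    means = [statistics.mean(values) for values in dim.values()]
    return max(means) - min(means)

# The contrastive dimension varies far more across contexts.
print(between_context_gap(contrastive) > between_context_gap(noncontrastive))  # prints True
```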

Read More about Naturalistic speech supports distributional learning across contexts

Finding the force: How children discern possibility and necessity modals

How children discern possibility and necessity modals

Linguistics

Contributor(s): Valentine Hacquard
Non-ARHU Contributor(s):

Anouk Dieuleveut *21, Annemarie van Dooren *20, Ailís Cournane (NYU)


This paper investigates when and how children figure out the force of modals: that possibility modals (e.g., can/might) express possibility, and necessity modals (e.g., must/have to) express necessity. Modals raise a classic subset problem: given that necessity entails possibility, what prevents learners from hypothesizing possibility meanings for necessity modals? Three solutions to such subset problems can be found in the literature: the first is for learners to rely on downward-entailing (DE) environments (Gualmini and Schwarz in J. Semant. 26(2):185–215, 2009); the second is a bias for strong (here, necessity) meanings; the third is for learners to rely on pragmatic cues stemming from the conversational context (Dieuleveut et al. in Proceedings of the 2019 Amsterdam Colloquium, pp. 111–122, 2019a; Rasin and Aravind in Nat. Lang. Semant. 29:339–375, 2020). This paper assesses the viability of each of these solutions by examining the modals used in speech to and by 2-year-old children, through a combination of corpus studies and experiments testing the guessability of modal force based on their context of use. Our results suggest that, given the way modals are used in speech to children, the first solution is not viable and the second is unnecessary. Instead, we argue that the conversational context in which modals occur is highly informative as to their force and sufficient, in principle, to sidestep the subset problem. Our child results further suggest an early mastery of possibility, but not necessity, modals and show no evidence for a necessity bias.

Read More about Finding the force: How children discern possibility and necessity modals

Lexicalization in the developing parser

Children make syntactic predictions based on the syntactic distributions of specific verbs, but do not assume that the patterns can be generalized.

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s):

Aaron Steven White *15 (University of Rochester)


We use children's noun learning as a probe into the nature of their syntactic prediction mechanism and the statistical knowledge on which that mechanism is based. We focus on verb-based predictions, considering two possibilities: children's syntactic predictions might rely on distributional knowledge about specific verbs (i.e., they might be lexicalized), or they might rely on distributional knowledge that is general to all verbs. In an intermodal preferential looking experiment, we establish that verb-based predictions are lexicalized: children encode the syntactic distributions of specific verbs and use those distributions to make predictions, but they do not assume that these distributions generalize to verbs as a class.

Read More about Lexicalization in the developing parser

Semantics and Pragmatics in a Modular Mind

Is semantics a modular part of the mind?

Philosophy

Non-ARHU Contributor(s):

Michael McCourt *21


This dissertation asks how we should understand the distinction between semantic and pragmatic aspects of linguistic understanding within the framework of mentalism, on which the study of language is a branch of psychology. In particular, I assess a proposal on which the distinction between semantics and pragmatics is ultimately grounded in the modularity or encapsulation of semantic processes. While pragmatic processes involved in understanding the communicative intentions of a speaker are non-modular and highly inferential, semantic processes involved in understanding the meaning of an expression are modular and encapsulated from top-down influences of general cognition. The encapsulation hypothesis for semantics is attractive, since it would allow the semantics-pragmatics distinction to cut a natural joint in the communicating mind. However, as I argue, the case in favor of the modularity hypothesis for semantics is not particularly strong. Many of the arguments offered in its support are unsuccessful. I therefore carefully assess the relevant experimental record, in rapport with parallel debates about modular processing in other domains, such as vision. I point to several observations that raise a challenge for the encapsulation hypothesis for semantics; and I recommend consideration of alternative notions of modularity. However, I also demonstrate some principled strategies that proponents of the encapsulation hypothesis might deploy in order to meet the empirical challenge that I raise. I conclude that the available data neither falsify nor support the modularity hypothesis for semantics, and accordingly I develop several strategies that might be pursued in future work. It has also been argued that the encapsulation of semantic processing would entail (or otherwise strongly recommend) a particular approach to word meaning. 
However, in rapport with the literature on polysemy—a phenomenon whereby a single word can be used to express several related concepts, but not due to generality—I show that such arguments are largely unsuccessful. Again, I develop strategies that might be used, going forward, to adjudicate among the options regarding word meaning within a mentalistic linguistics.

Read More about Semantics and Pragmatics in a Modular Mind

Logic and the lexicon: Insights from modality

Dividing semantics from pragmatics in acquiring the modal vocabulary.

Linguistics

Contributor(s): Valentine Hacquard

This chapter focuses on a special instance of logical vocabulary, namely modal words, like “might” or “must,” which express possibility and necessity. Modal statements involve a complex interplay of morphology, syntax, semantics, and pragmatics, which make it particularly challenging to identify what lexical meanings the modal words encode. This chapter surveys how possibilities and necessities are expressed in natural language, with an eye toward cross-linguistic similarity and variation, and introduces the framework that formal semantics inherits from modal logic to analyze modal statements. It then turns to the challenges—for both the semanticist and for the child learner—of figuring out the right division of labor between semantics and pragmatics for modal statements, and the exact lexical contributions of the modal words themselves.

Read More about Logic and the lexicon: Insights from modality

Children's use of syntax in word learning

How children use syntax as evidence for word meaning.

Linguistics

Contributor(s): Jeffrey Lidz

This chapter investigates the role that syntax plays in guiding the acquisition of word meaning. It reviews data that reveal how children can use the syntactic distribution of a word as evidence for its meaning and discusses the principles of grammar that license such inferences. We delineate the role of thematic linking generalizations in the acquisition of action verbs, arguing that children use specific links between subject and agent and between object and patient to guide initial verb learning. In the domain of attitude verbs, we show that children’s knowledge of abstract links between subclasses of attitude verbs and their syntactic distribution enable learners to identify the meanings of their initial attitude verbs, such as think and want. Finally, we show that syntactic bootstrapping effects are not limited to verb learning but extend across the lexicon.

Read More about Children's use of syntax in word learning

Syntactic bootstrapping attitude verbs despite impoverished morphosyntax

Even when acquiring Chinese, children assign belief semantics to verbs whose objects morphosyntactically resemble declarative main clauses, and desire semantics to others.

Linguistics

Contributor(s): Valentine Hacquard, Jeffrey Lidz
Non-ARHU Contributor(s):

Nick Huang *19, Aaron Steven White *15, Chia-Hsuan Liao *20


Attitude verbs like think and want describe mental states (belief and desire) that lack reliable physical correlates that could help children learn their meanings. Nevertheless, children succeed in doing so. For this reason, attitude verbs have been a parade case for syntactic bootstrapping. We assess a recent syntactic bootstrapping hypothesis, in which children assign belief semantics to verbs whose complement clauses morphosyntactically resemble the declarative main clauses of their language, while assigning desire semantics to verbs whose complement clauses do not. This hypothesis, building on the cross-linguistic generalization that belief complements have the morphosyntactic hallmarks of declarative main clauses, has been elaborated for languages with relatively rich morphosyntax. This article looks at Mandarin Chinese, whose null arguments and impoverished morphology mean that the differences necessary for syntactic bootstrapping might be much harder to detect. Our corpus analysis, however, shows that Mandarin belief complements have the profile of declarative main clauses, while desire complements do not. We also show that a computational implementation of this hypothesis can learn the right semantic contrasts between Mandarin and English belief and desire verbs, using morphosyntactic features in child-ambient speech. These results provide novel cross-linguistic support for this syntactic bootstrapping hypothesis.
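As a toy sketch of the bootstrapping idea (not the authors' computational implementation), one could threshold, for each verb, the proportion of its complement clauses bearing declarative-main-clause hallmarks; verbs above the threshold get belief semantics, the rest desire semantics. The verbs and feature counts below are invented for illustration:

```python
# Toy sketch (hypothetical counts, not the authors' model): for each verb,
# (complements with declarative-main-clause hallmarks, total complements).
verb_counts = {
    "think": (90, 100),  # mostly main-clause-like complements
    "know": (85, 100),
    "want": (5, 100),    # mostly complements lacking those hallmarks
    "like": (10, 100),
}

def classify(counts, threshold=0.5):
    """Label verbs 'belief' when main-clause-like complements predominate."""
    return {verb: ("belief" if k / n > threshold else "desire")
            for verb, (k, n) in counts.items()}

print(classify(verb_counts))
```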

Read More about Syntactic bootstrapping attitude verbs despite impoverished morphosyntax

Language-Internal Reanalysis of Clitic Placement in Heritage Grammars Reduces the Cost of Computation: Evidence from Bulgarian

Heritage speakers of Bulgarian reanalyze the principles of clitic placement.

Linguistics

Contributor(s): Maria Polinsky
Non-ARHU Contributor(s):

Tanya Ivanova-Sullivan (New Mexico), Irina A. Sekerina (CUNY), Davood Tofighi (New Mexico)


The study offers novel evidence on the grammar and processing of clitic placement in heritage languages. Building on earlier findings of divergent clitic placement in heritage European Portuguese and Serbian, this study extends this line of inquiry to Bulgarian, a language where clitic placement is subject to strong prosodic constraints. We found that, in heritage Bulgarian, clitic placement is processed and rated differently than in the baseline, and we asked whether such clitic misplacement results from transfer from the dominant language or follows from language-internal reanalysis. We used a self-paced listening task and an aural acceptability rating task with 13 English-dominant, highly proficient heritage speakers and 22 monolingual speakers of Bulgarian. Heritage speakers of Bulgarian process and rate the grammatical proclitic and ungrammatical enclitic positions as equally acceptable, and we contend that this pattern is due to language-internal reanalysis. We suggest that the trigger for such reanalysis is the overgeneralization of the prosodic Strong Start Constraint from the left edge of the clause to any position in the sentence.

Read More about Language-Internal Reanalysis of Clitic Placement in Heritage Grammars Reduces the Cost of Computation: Evidence from Bulgarian