Mayfest 2025 - Constraints on Meaning
When and why are certain meanings missing?
This chapter focuses on a special instance of logical vocabulary, namely modal words, like “might” or “must,” which express possibility and necessity. Modal statements involve a complex interplay of morphology, syntax, semantics, and pragmatics, which makes it particularly challenging to identify what lexical meanings the modal words encode. This chapter surveys how possibilities and necessities are expressed in natural language, with an eye toward cross-linguistic similarity and variation, and introduces the framework that formal semantics inherits from modal logic to analyze modal statements. It then turns to the challenges—for both the semanticist and the child learner—of figuring out the right division of labor between semantics and pragmatics for modal statements, and the exact lexical contributions of the modal words themselves.
Read More about Logic and the lexicon: Insights from modality
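As background for the framework the chapter introduces, the standard analysis inherited from modal logic treats necessity modals as universal quantifiers and possibility modals as existential quantifiers over a set of accessible worlds. A minimal sketch in the usual notation, assuming a contextually supplied accessibility function f (the chapter's own formulation may differ in detail):

```latex
% Quantificational semantics for modals (sketch); f(w) is the set of
% worlds accessible from the evaluation world w, supplied by context.
\llbracket \text{must } \varphi \rrbracket^{w,f} = 1
  \iff \forall w' \in f(w) : \llbracket \varphi \rrbracket^{w'} = 1
\qquad
\llbracket \text{might } \varphi \rrbracket^{w,f} = 1
  \iff \exists w' \in f(w) : \llbracket \varphi \rrbracket^{w'} = 1
```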
This chapter investigates the role that syntax plays in guiding the acquisition of word meaning. It reviews data that reveal how children can use the syntactic distribution of a word as evidence for its meaning and discusses the principles of grammar that license such inferences. We delineate the role of thematic linking generalizations in the acquisition of action verbs, arguing that children use specific links between subject and agent and between object and patient to guide initial verb learning. In the domain of attitude verbs, we show that children’s knowledge of abstract links between subclasses of attitude verbs and their syntactic distribution enables learners to identify the meanings of their initial attitude verbs, such as think and want. Finally, we show that syntactic bootstrapping effects are not limited to verb learning but extend across the lexicon.
Attitude verbs like think and want describe mental states (belief and desire) that lack reliable physical correlates that could help children learn their meanings. Nevertheless, children succeed in doing so. For this reason, attitude verbs have been a parade case for syntactic bootstrapping. We assess a recent syntactic bootstrapping hypothesis, in which children assign belief semantics to verbs whose complement clauses morphosyntactically resemble the declarative main clauses of their language, while assigning desire semantics to verbs whose complement clauses do not. This hypothesis, building on the cross-linguistic generalization that belief complements have the morphosyntactic hallmarks of declarative main clauses, has been elaborated for languages with relatively rich morphosyntax. This article looks at Mandarin Chinese, whose null arguments and impoverished morphology mean that the differences necessary for syntactic bootstrapping might be much harder to detect. Our corpus analysis, however, shows that Mandarin belief complements have the profile of declarative main clauses, while desire complements do not. We also show that a computational implementation of this hypothesis can learn the right semantic contrasts between Mandarin and English belief and desire verbs, using morphosyntactic features in child-ambient speech. These results provide novel cross-linguistic support for this syntactic bootstrapping hypothesis.
Read More about Syntactic bootstrapping attitude verbs despite impoverished morphosyntax
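The abstract does not spell out the computational implementation itself; purely as an illustration of the idea (not the authors' actual model), the sketch below tallies whether a verb's complement clauses carry any of the morphosyntactic hallmarks of declarative main clauses and assigns belief vs. desire semantics accordingly. The feature names, verbs, and threshold are hypothetical.

```python
from collections import defaultdict

# Hypothetical morphosyntactic hallmarks of a declarative main clause;
# a real model would extract such features from parsed child-ambient speech.
MAIN_CLAUSE_HALLMARKS = {"aspect_marker", "sentence_final_particle", "polarity_marker"}

def classify_attitude_verbs(observations):
    """observations: iterable of (verb, complement_features) pairs, where
    complement_features is the set of morphosyntactic features observed on
    the verb's complement clause. Assigns belief semantics to verbs whose
    complements usually look like declarative main clauses, desire otherwise."""
    counts = defaultdict(lambda: [0, 0])  # verb -> [main-clause-like, total]
    for verb, features in observations:
        counts[verb][1] += 1
        if features & MAIN_CLAUSE_HALLMARKS:
            counts[verb][0] += 1
    return {verb: ("belief" if like / total > 0.5 else "desire")
            for verb, (like, total) in counts.items()}

# Toy usage: complements of 'juede' (think) carry main-clause hallmarks,
# complements of 'xiang' (want) do not.
data = [("juede", {"aspect_marker"}),
        ("juede", {"sentence_final_particle"}),
        ("xiang", set()),
        ("xiang", set())]
print(classify_attitude_verbs(data))  # {'juede': 'belief', 'xiang': 'desire'}
```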
The study offers novel evidence on the grammar and processing of clitic placement in heritage languages. Building on earlier findings of divergent clitic placement in heritage European Portuguese and Serbian, this study extends this line of inquiry to Bulgarian, a language where clitic placement is subject to strong prosodic constraints. We found that, in heritage Bulgarian, clitic placement is processed and rated differently than in the baseline, and we asked whether such clitic misplacement results from transfer from the dominant language or follows from language-internal reanalysis. We used a self-paced listening task and an aural acceptability rating task with 13 English-dominant, highly proficient heritage speakers and 22 monolingual speakers of Bulgarian. Heritage speakers of Bulgarian process and rate the grammatical proclitic and the ungrammatical enclitic positions as equally acceptable, and we contend that this pattern is due to language-internal reanalysis. We suggest that the trigger for such reanalysis is the overgeneralization of the prosodic Strong Start Constraint from the left edge of the clause to any position in the sentence.
I develop a general theory of focus and givenness that can account for truly contrastive focus and for polarity focus, including data that are sometimes set apart under the label “verum focus”. I show that polarity focus creates challenges for classic theories of focus (e.g. Rooth 1992, a.o.) that can be dealt with by requiring that all focus marking be truly contrastive, and that givenness deaccenting imposes its own distinct requirement on prominence shifts. To enforce true contrast, I employ innocent exclusion (Fox 2007), which I suggest may impose a general filter on what counts as a valid alternative. A key, novel feature of my account is that focal targets are split into two kinds, those that are contextually supported and those that are constructed ad hoc, and that the presence of a contextually supported target can block the ability to construct an ad hoc target. This enables a novel explanation of the data motivating true contrast, and brings polarity focus into the fold of a unified and truly contrastive theory of focus. I then compare the account to theories of verum focus that make use of non-focus-based VERUM operators, and argue that the focus account is more parsimonious and has better empirical coverage.
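For reference, innocent exclusion (Fox 2007), which the account uses to enforce true contrast, can be stated roughly as follows (a sketch of the standard definition; the paper's general filter on valid alternatives builds on it). An alternative is innocently excludable given a prejacent p just in case it is negated in every maximal way of consistently negating alternatives alongside p:

```latex
% Innocently excludable alternatives (after Fox 2007), for prejacent p
% and alternative set A:
\mathrm{IE}(p, A) \;=\; \bigcap \bigl\{\, A' \subseteq A \;:\;
  A' \text{ is a maximal subset of } A \text{ such that }
  \{\neg q : q \in A'\} \cup \{p\} \text{ is consistent} \,\bigr\}
```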
Learning in any domain depends on how the data for learning are represented. In the domain of language acquisition, children’s representations of the speech they hear determine what generalizations they can draw about their target grammar. But these input representations change over development as a function of children’s developing linguistic knowledge, and may be incomplete or inaccurate when children lack the knowledge to parse their input veridically. How does learning succeed in the face of potentially misleading data? We address this issue using the case study of “non-basic” clauses in verb learning. A young infant hearing What did Amy fix? might not recognize that what stands in for the direct object of fix, and might think that fix is occurring without a direct object. We follow a previous proposal that children might filter non-basic clauses out of the data for learning verb argument structure, but offer a new approach. Instead of assuming that children identify the data to filter in advance, we demonstrate computationally that it is possible for learners to infer a filter on their input without knowing which clauses are non-basic. We instantiate a learner that considers the possibility that it misparses some of the sentences it hears, and learns to filter out those parsing errors in order to correctly infer transitivity for the majority of 50 frequent verbs in child-directed speech. Our learner offers a novel solution to the problem of learning from immature input representations: learners may be able to avoid drawing faulty inferences from misleading data by identifying a filter on their input, without knowing in advance what needs to be filtered.
Read More about The Power of Ignoring: Filtering Input for Argument Structure Acquisition
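The abstract leaves the model's machinery to the paper; purely as a toy illustration of the logic (a grid-search sketch, not the authors' actual Bayesian model), the code below jointly fits a global parse-error rate and a per-verb transitivity class, so that objectless tokens of a mostly-transitive verb can be attributed to misparses rather than to the verb's argument structure. All counts and class probabilities are invented.

```python
from math import log

def loglik(n_with, n_without, p_obj):
    """Binomial log-likelihood of seeing a direct object with probability p_obj."""
    p = min(max(p_obj, 1e-9), 1 - 1e-9)  # guard against log(0)
    return n_with * log(p) + n_without * log(1 - p)

def infer_with_filter(verb_counts, error_grid=(0.0, 0.1, 0.2, 0.3, 0.4)):
    """verb_counts: {verb: (n_with_object, n_without_object)}.
    Jointly chooses a global parse-error rate e and a class per verb:
    transitive verbs lose their object only through misparses (P(object) = 1 - e),
    intransitive verbs gain one only through misparses (P(object) = e),
    alternating verbs allow both (here, P(object) = 0.5)."""
    best = None
    for e in error_grid:
        classes = {"transitive": 1 - e, "intransitive": e, "alternating": 0.5}
        total, labels = 0.0, {}
        for verb, (n_with, n_without) in verb_counts.items():
            scored = {c: loglik(n_with, n_without, p) for c, p in classes.items()}
            label = max(scored, key=scored.get)
            labels[verb] = label
            total += scored[label]
        if best is None or total > best[0]:
            best = (total, e, labels)
    return {"error_rate": best[1], "verbs": best[2]}

# Toy usage: 'fix' mostly occurs with an object, but wh-questions such as
# "What did Amy fix?" are misparsed as objectless; the inferred error rate
# lets the learner still classify 'fix' as transitive. Counts are invented.
print(infer_with_filter({"fix": (90, 10), "laugh": (8, 92), "eat": (55, 45)}))
```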
Speech processing is highly incremental. It is widely accepted that human listeners continuously use the linguistic context to anticipate upcoming concepts, words, and phonemes. However, previous evidence supports two seemingly contradictory models of how a predictive context is integrated with the bottom-up sensory input: Classic psycholinguistic paradigms suggest a two-stage process, in which acoustic input initially leads to local, context-independent representations, which are then quickly integrated with contextual constraints. This contrasts with the view that the brain constructs a single coherent, unified interpretation of the input, which fully integrates available information across representational hierarchies, and thus uses contextual constraints to modulate even the earliest sensory representations. To distinguish these hypotheses, we tested magnetoencephalography responses to continuous narrative speech for signatures of local and unified predictive models. Results provide evidence that listeners employ both types of models in parallel. Two local context models uniquely predict some part of early neural responses, one based on sublexical phoneme sequences, and one based on the phonemes in the current word alone; at the same time, even early responses to phonemes also reflect a unified model that incorporates sentence level constraints to predict upcoming phonemes. Neural source localization places the anatomical origins of the different predictive models in non-identical parts of the superior temporal lobes bilaterally, with the right hemisphere showing a relative preference for more local models. These results suggest that speech processing recruits both local and unified predictive models in parallel, reconciling previous disparate findings. Parallel models might make the perceptual system more robust, facilitate processing of unexpected inputs, and serve a function in language acquisition.
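In surprisal terms, the local and unified models differ only in what the next phoneme is conditioned on. A schematic rendering (our notation, not the paper's):

```latex
% Phoneme surprisal under three predictive contexts (schematic):
S_{\mathrm{sublex}}(p_t)  = -\log P\!\left(p_t \mid p_{t-k}, \ldots, p_{t-1}\right)
  % local: recent phoneme sequence only
S_{\mathrm{word}}(p_t)    = -\log P\!\left(p_t \mid \text{phonemes so far in the current word}\right)
S_{\mathrm{unified}}(p_t) = -\log P\!\left(p_t \mid \text{full sentence context, words and phonemes}\right)
```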
Attitude verbs, such as think, want, and know, describe internal mental states that leave few cues as to their meanings in the physical world. Consequently, their acquisition requires learners to draw from indirect evidence stemming from the linguistic and conversational contexts in which they occur. This provides a unique opportunity to probe the linguistic and cognitive abilities that children deploy in acquiring these words. Through a few case studies, we show how children make use of syntactic and pragmatic cues to figure out attitude verb meanings and how their successes, and even their mistakes, reveal remarkable conceptual, linguistic, and pragmatic sophistication.
Context guides comprehenders’ expectations during language processing, and information-theoretic surprisal is commonly used as an index of cognitive processing effort. However, prior work using surprisal has considered only within-sentence context, using n-grams, neural language models, or syntactic structure as conditioning context. In this paper, we extend the surprisal approach to use broader topical context, investigating the influence of local and topical context on processing via an analysis of fMRI time courses collected during naturalistic listening. Lexical surprisal calculated from n-gram and LSTM language models is used to capture effects of local context; to capture the effects of broader context, a new metric based on topic models, topical surprisal, is introduced. We identify distinct patterns of neural activation for lexical surprisal and topical surprisal. These differing neuro-anatomical correlates suggest that local and broad contextual cues during sentence processing recruit different brain regions, and that those regions of the language network functionally contribute to processing different dimensions of contextual information during comprehension. More generally, our approach adds to a growing literature using methods from computational linguistics to operationalize and test hypotheses about neuro-cognitive mechanisms in sentence processing.
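Concretely, lexical surprisal conditions on the recent word string, while topical surprisal conditions on the inferred topic mixture of the broader discourse. One natural formulation with an LDA-style topic model (an assumption on our part; the paper's exact metric may differ):

```latex
% Lexical surprisal from local context (n-gram or LSTM estimates of P):
S_{\mathrm{lex}}(w_i) = -\log P\!\left(w_i \mid w_{i-k}, \ldots, w_{i-1}\right)
% Topical surprisal: improbability of w_i under the topic mixture \theta_d
% inferred from the preceding discourse (z ranges over topics):
S_{\mathrm{top}}(w_i) = -\log \sum_{z} P\!\left(w_i \mid z\right) P\!\left(z \mid \theta_d\right)
```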
The human ability to produce and understand an indefinite number of sentences is driven by syntax, a cognitive system that can combine a finite number of primitive linguistic elements to build arbitrarily complex expressions. The expressive power of syntax comes in part from its ability to encode potentially unbounded dependencies over abstract structural configurations. How does such a system develop in human minds? We show that 18-month-old infants are capable of representing abstract nonlocal dependencies, suggesting that a core property of syntax emerges early in development. Our test case is English wh-questions, in which a fronted wh-phrase can act as the argument of a verb at a distance (e.g., What did the chef burn?). Whereas prior work has focused on infants’ interpretations of these questions, we introduce a test to probe their underlying syntactic representations, independent of meaning. We ask when infants know that an object wh-phrase and a local object of a verb cannot co-occur because they both express the same argument relation (e.g., *What did the chef burn the pizza). We find that 1) 18-month-olds demonstrate awareness of this complementary distribution pattern and thus represent the nonlocal grammatical dependency between the wh-phrase and the verb, but 2) younger infants do not. These results suggest that the second year of life is a period of active syntactic development, during which the computational capacities for representing nonlocal syntactic dependencies become evident.
Read More about Eighteen-month-old infants represent nonlocal syntactic dependencies