Mayfest 2025 - Constraints on Meaning
Mayfest is a workshop that brings together researchers from a variety of disciplines and perspectives to discuss fundamental issues in linguistics.
In 2025, our Mayfest will ask about "Constraints on Meaning."
It appears that certain meanings are lexicalized in some languages but not others, while other meanings are not lexicalized in any language. What are the universal gaps? We hope to consider this question with respect to 'logical' vocabulary, such as quantifiers, connectives, modals, attitude predicates, and focus operators. For observed universals, we can ask why they hold. Are they due to constraints in the grammar, biases in acquisition, learnability considerations, processing restrictions, pragmatic principles, or other sources? In at least some cases, restrictions may result from how the semantics interacts with other components of grammar, or with language-external cognitive systems. This year our Mayfest brings together ten researchers who are studying constraints on meaning from a wide range of perspectives, both analytical and experimental:
Pranav Anand / Professor, University of California Santa Cruz
Tanya Bondarenko / Assistant Professor, Harvard University [unable to attend]
Alexandre Cremers / Researcher, Vilnius University
Rachel Dudley / Assistant Professor, University of California San Diego
Paloma Jeretic / Assistant Professor, University of Pennsylvania
Tyler Knowlton / Postdoctoral Fellow, University of Delaware
Viola Schmitt / Professor, Humboldt-University Berlin
Benjamin Spector / Research Director, CNRS Institut Jean Nicod
Wataru Uegaki / Reader, University of Edinburgh
Aaron Steven White / Associate Professor, University of Rochester
Program
Friday
- 8:45-9:30 - Breakfast & registration
- 9:30-9:45 - Welcome
- 9:45-10:45 - Pranav Anand (Santa Cruz) / English ‘coming-to-know’ predicates: evidence and knowledge [joint work with Natasha Korotkova]
- 10:45-10:55 - Short break
- 10:55-11:55 - Viola Schmitt (Humboldt/MIT) / What can modal operators do? [Joint work with Ido Benbaji-Elhadad]
- 11:55-1:45 - Lunch / choose your own adventure
- 1:45-2:45 - Rachel Dudley (San Diego) / How children discover the difference between know and think
- 2:45-3:15 - Coffee break
- 3:15-4:15 - Wataru Uegaki (Edinburgh) / Lexicalization, compositionality, and communicative efficiency: The case of deontic priority
- 4:15-5:00 - Discussion
Saturday
- 8:30-9:30 - Breakfast & registration
- 9:30-10:30 - Tyler Knowlton (Delaware) / What psychosemantics tells us about constraints on meaning
- 10:30-11:00 - Coffee break
- 11:00-12:00 - Paloma Jeretic (Penn) / Universality in duality but not in universals’ anti-duality
- 12:00-1:00 - Aaron Steven White (Rochester) / Inducing lexical semantic generalizations
- 1:00-2:45 - Lunch
- 2:45-3:45 - Alexandre Cremers (Vilnius) / A linguistic explanation for Left-digit bias [Joint work with Julija Kalvelytė]
- 3:45-4:00 - Coffee break
- 4:00-5:00 - Benjamin Spector (CNRS Jean Nicod) / Three Strategies for Resolving Underspecification: Failure, Supervaluationism, Subvaluationism
- 5:00-5:45 - Discussion
Abstracts
Pranav Anand
This talk is a focused investigation of English ‘coming-to-know’ verbs, e.g., discover, find out, figure out, learn, notice and realize. Drawing on the literature on aspect and evidentiality, we argue that these verbs are culmination achievements that denote a change in doxastic state (from agnosticism to belief) and lexicalize the type of evidence that triggers that change. We propose that ‘coming-to-know’ verbs presuppose a complex eventuality of knowledge acquisition that is comprised of (i) an initial state of agnosticism, followed by (ii) evidence acquisition that triggers (iii) a process of deliberation that culminates in (iv) belief-change leading to (v) a new belief state. We explore how this structured multiplicity interacts with complementation, aspect, and the logic of intention.
[Joint work with Natasha Korotkova]
Viola Schmitt
The main aim of this talk is to raise and delimit two (connected) questions. In past work, I argued that worlds are set apart from the other primitive objects via which we construct meaning, in that we find no evidence for pluralities of worlds (Schmitt 2023; but see also Benbaji-Elhadad 2025, contra e.g. Schlenker 2006, Kriz 2018, Agha & Jeretic 2022). This raises two questions.
- (i) What prevents plurality formation of worlds?
A potential account that ties this gap to a lack of object-language representations of worlds might seem intuitively plausible, but it would be highly stipulative: it is not underpinned by a predictive theory that would set apart possible operations on evaluation parameters from possible operations on denotations (see e.g. Cresswell 1990). Two other approaches seem more plausible, appealing to (a) a constraint on (lexicalized) intensional operators to only make use of equivalence classes of possible worlds, essentially reflecting the impossibility of individuating worlds via linguistic devices, or (b) the impossibility of determining the exact domain for any kind of operation on worlds. Both, however, raise question (ii):
- (ii) Why should we be able to quantify over possible worlds — given that the type of quantification usually associated with modals requires us to both determine the domain of quantification and to individuate its elements?
This is not only a conceptual worry but also an empirical one, as for all other objects via which we construct meaning (both primitive and functional) the availability of plurality formation and quantification seem to be correlated. I discuss two options of reanalyzing modal expressions without having to appeal to classical quantification that are based on the hypotheses (a) and (b), respectively. However, neither is fully satisfactory.
[Joint work with Ido Benbaji-Elhadad]
Rachel Dudley
Linguistic input is not directly intended to support language acquisition, but instead comes from ordinary conversations in which speakers try to achieve practical goals. Despite this, children make rapid progress with word learning, managing to strip away layers of contextual information to discern subtle semantic distinctions. This talk examines how children discover the factivity contrast between know and think: know can only be felicitously used when the speaker takes the truth of the complement for granted, while think has no such restriction. I'll present a line of work suggesting that the input does not directly reveal this factivity contrast, and furthermore that children may infer the distinction from indirect but correlated cues in the syntactic distributions of the two verbs and in the range of discourse moves that speakers use them to achieve. This more indirect pathway to the contrast is supported by principled connections across the interfaces, providing insight into the constraints on meaning that may guide language acquisition.
Wataru Uegaki
A number of cross-linguistically common patterns in semantics have been given accounts in terms of the notion of communicative efficiency, i.e., optimal trade-offs between cognitive cost and communicative accuracy (Kemp & Regier 2012; Regier et al. 2015; Kemp et al. 2018; Imel & Steinert-Threlkeld 2021; Steinert-Threlkeld 2021; Denić et al. 2022; Uegaki 2024). In this talk, we extend the analysis to a new empirical generalization concerning the lexicalization of impossibility modality, which we refer to as Deontic Priority (DP) (Uegaki, Mucha et al. 2024). The generalization can be stated as follows:
Deontic Priority: if a modal lexical item can express any impossibility, then it can express deontic impossibility.
We hypothesise an account of DP in terms of communicative efficiency. In particular, we suggest that the effect arises from optimising the trade-off between the pressure to reduce the cognitive cost of the system (by conveying meanings through lexical forms as opposed to compositional forms) and the pressure to communicate the flavours accurately (by reducing flavour ambiguity), crucially in the presence of a utility bias for deontic flavours. We report on experiments aimed at supporting this hypothesis: a rating experiment to ground the utility bias, and a series of dyadic/interactive artificial-language learning experiments (Kanwal et al. 2017) to examine whether the DP effect arises under the combined pressure from cognitive cost and communicative accuracy.
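The Deontic Priority generalization can be sketched as a simple implicational check over a lexicon. The sketch below is purely illustrative: the item names and flavour labels are invented toy data, not actual cross-linguistic data.

```python
# Illustrative check of the Deontic Priority (DP) generalization over a
# toy modal lexicon. Each item maps to the set of impossibility flavours
# it can express; the lexicons and flavour labels are invented.

def satisfies_deontic_priority(lexicon):
    """DP: any item that expresses some impossibility also expresses
    deontic impossibility."""
    return all(
        "deontic" in flavours
        for flavours in lexicon.values()
        if flavours  # only items that express at least one impossibility
    )

# Hypothetical lexicons (not real language data).
dp_respecting = {
    "cannot":   {"deontic", "epistemic", "circumstantial"},
    "must-not": {"deontic"},
}
dp_violating = {
    "blick": {"epistemic"},  # impossibility without a deontic use: violates DP
}

print(satisfies_deontic_priority(dp_respecting))  # True
print(satisfies_deontic_priority(dp_violating))   # False
```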
Tyler Knowlton
Recent work in psychosemantics has led to fine-grained proposals about the meanings of 'logical' expressions like "most" and "every". The proposed meanings are often counterintuitive and contradict standard views in semantic theorizing. I'll argue that they're also informative about constraints on meaning; in particular, they suggest constraints on the store of primitives out of which 'logical' meanings can be built. I'll consider two examples. First, I'll review the experimental evidence arguing that sentences like "most frogs are green" are understood by speakers in terms of cardinality subtraction ("the number of green frogs is greater than the total number of frogs minus the number of green ones"), not in terms of predicate negation ("the green frogs outnumber the non-green frogs"). This finding supports the idea that the vocabulary for mentally stating meanings lacks a notion of predicate negation/set complementation altogether. This idea in turn could explain the fact that natural language seems to eschew predicate negation elsewhere, perhaps most notably exemplified by the missing corner in the square of opposition (i.e., languages lack quantifiers like "nevery" such that "nevery frog is green" means "some frogs are not green"). Second, I'll review the experimental evidence arguing that sentences like "every frog is green" are understood by speakers in terms of applying a predicate to a restricted domain ("the frogs are such that 'green' applies universally") and not in terms of relating two independent sets ("the frogs are a subset of the green things"). This finding suggests that genuine set-theoretic operations like 'subset' are also absent from the store of primitives out of which meanings are built. If right, this idea could explain the well-known constraint that determiners have 'conservative' meanings, which is supported by both typological data and learnability experiments.
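The two pairs of truth-conditionally equivalent specifications contrasted above can be made concrete in a short sketch. The frog/green sets below are invented toy data; the point is that the procedures differ in the primitives they invoke while delivering the same verdicts.

```python
# Truth-conditionally equivalent specifications of "most" and "every",
# differing in which primitives they invoke (toy sets; illustrative only).

def most_subtraction(frogs, green):
    """|green frogs| > |frogs| - |green frogs|: cardinality subtraction,
    no predicate negation / set complement needed."""
    green_frogs = len(frogs & green)
    return green_frogs > len(frogs) - green_frogs

def most_negation(frogs, green):
    """|green frogs| > |non-green frogs|: invokes the complement set."""
    return len(frogs & green) > len(frogs - green)

def every_restricted(frogs, green):
    """'green' applies universally over the restricted domain of frogs."""
    return all(f in green for f in frogs)

def every_subset(frogs, green):
    """The frogs are a subset of the green things: a genuine
    set-theoretic relation between two independent sets."""
    return frogs <= green

frogs = {"frog1", "frog2", "frog3"}
green = {"frog1", "frog2", "leaf1"}

print(most_subtraction(frogs, green), most_negation(frogs, green))  # True True
print(every_restricted(frogs, green), every_subset(frogs, green))   # False False
```

The experimental question is which of the equivalent procedures speakers actually deploy; the evidence described above favours the subtraction and restricted-domain variants.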
Paloma Jeretic
Chemla 2007 noticed that the French word for all is anti-dual, despite the lack of a word for both in the language. In Jeretič et al. 2024, we show that this can be explained by the presence of a core concept DUAL, which supports competition with an indirect alternative les deux ('the two'). A challenge to this analysis comes from a new observation from Ecuadorian Siona, whose word for all is not anti-dual, i.e., it can be used to refer to a domain of two individuals. I explain this puzzle away by appealing to the fact that there is no appropriate 'indirect alternative', allowing us to maintain the claim that duality is universally present in language.
Aaron Steven White
The development of observationally adequate lexical semantic generalizations is a crucial step in positing descriptively adequate lexical semantic constraints and, ultimately, in developing explanatorily adequate lexical semantic theories. In developing generalizations about open-class items in particular, even observational adequacy can be difficult to achieve, not only due to the sheer size of the class to which the generalizations apply but also due to the ways in which we measure the distributional and inferential properties from which lexical semantic generalizations are constructed.
I present a modular framework that aims to support the development of observationally adequate lexical semantic generalizations. As a proof-of-concept, I deploy this framework to induce lexical semantic generalizations about clause-embedding predicates from the MegaAttitude datasets. I discuss the lexical semantic generalizations this study uncovers, and I discuss how this framework might be used to evaluate the descriptive adequacy of lexical semantic constraints through explicit quantitative model comparison.
Alexandre Cremers
The left-digit bias (LDB) is a well-known effect whereby a price difference from $2.99 to $3.00 feels much bigger than one from $3.00 to $3.01. While various explanations have been proposed in the psychology, economics, and marketing literatures, this effect hasn't received as much attention from linguists. Using psycholinguistic methods, we show that (i) the effect occurs across a wide variety of scales beyond prices, (ii) it is sensitive to contextually salient numbers rather than to changes in the left digit, and (iii) it doesn't seem to be sensitive to valence. We put forward a new linguistic explanation of the LDB based on linguistic or language-of-thought biases, and provide preliminary cross-linguistic evidence for our assumptions from the distribution of numeral modifiers in a few Indo-European languages.
[Joint work with Julija Kalvelytė]
Benjamin Spector
I explore the view that there are three natural strategies for resolving underspecification in language, and that recognizing them helps explain the ubiquity of certain projection patterns—especially those discussed in (c) below.
- (a) Requiring resolution by context. This accounts for the uniqueness presuppositions of pronouns and singular definites, while also explaining exceptions when contextual resolution is available.
- (b) Relying on the listener to infer that at least one construal is true. This yields existential interpretations, as seen in weak readings of donkey anaphora and potentially specific indefinites. The underlying logic is subvaluationism.
- (c) Using a sentence only when there is no risk whatsoever of misleading the listener—i.e., when all relevant construals are true. This results in strong/universal readings, such as those found in strong readings of donkey anaphora, homogeneity effects, and (as I will argue) the projection behavior of scalar implicatures and exhaustivity. The underlying logic for this 'mode of resolution' is supervaluationism.
I will probably mostly focus on (c), particularly homogeneity and exhaustivity. I will show that when supervaluationism is embedded within a probabilistic model of pragmatics (RSA), it yields further explanatory advantages.
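The sub- and supervaluationist strategies in (b) and (c) can be sketched as verdicts over a set of construals. Representing construals as bare truth values is a simplifying assumption of the sketch, not part of the proposal itself.

```python
# Illustrative verdicts over the construals (precisifications) of an
# underspecified sentence; construals are simplified to truth values.

def subvaluation(construals):
    """Strategy (b): true iff at least one construal is true
    (weak/existential readings)."""
    return any(construals)

def supervaluation(construals):
    """Strategy (c): true iff every construal is true; false iff none is;
    otherwise a truth-value gap (homogeneity-style behaviour)."""
    if all(construals):
        return True
    if not any(construals):
        return False
    return None  # neither true nor false

mixed = [True, False, True]
print(subvaluation(mixed), supervaluation(mixed))  # True None
```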