Research at our top-ranked department spans syntax, semantics, phonology, language acquisition, computational linguistics, psycholinguistics and neurolinguistics.
Connections between our core competencies are strong, with theoretical, experimental and computational work typically pursued in tandem.
A network of collaboration at all levels sustains a research climate that is both vigorous and friendly. Here new ideas develop in conversation, stimulated by the steady activity of our labs and research groups, frequent student meetings with faculty, regular talks by local and invited scholars, and collaborations with the broader University of Maryland language science community, the largest and most integrated language science research community in North America.
Social inference may guide early lexical learning
Assessment of knowledgeability and group membership influences infant word learning.
We incorporate social reasoning about groups of informants into a model of word learning, and show that the model accounts for infant looking behavior in tasks of both word learning and recognition. Simulation 1 models an experiment in which 16-month-old infants saw familiar objects labeled either correctly or incorrectly by either adults or audio talkers. Simulation 2 reinterprets puzzling data from the Switch task, an audiovisual habituation procedure in which infants are tested on familiarized associations between novel objects and labels. Eight-month-olds outperform 14-month-olds on the Switch task when required to distinguish labels that are minimal pairs (e.g., “buk” and “puk”), but 14-month-olds' performance improves when the habituation stimuli feature multiple talkers. Our modeling results support the hypothesis that beliefs about knowledgeability and group membership guide infant looking behavior in both tasks. These results show that social and linguistic development interact in non-trivial ways, and that social categorization findings in developmental psychology could have substantial implications for understanding linguistic development in realistic settings where talkers vary according to observable features correlated with social groupings, including linguistic, ethnic, and gendered groups.
Linguistic meanings as cognitive instructions
"More" and "most" do not encode the same sorts of comparison.
Natural languages like English connect pronunciations with meanings. Linguistic pronunciations can be described in ways that relate them to our motor system (e.g., to the movement of our lips and tongue). But how do linguistic meanings relate to our nonlinguistic cognitive systems? As a case study, we defend an explicit proposal about the meaning of most by comparing it to the closely related more: whereas more expresses a comparison between two independent subsets, most expresses a subset–superset comparison. Six experiments with adults and children demonstrate that these subtle differences between their meanings influence how participants organize and interrogate their visual world. In otherwise identical situations, changing the word from most to more affects preferences for picture–sentence matching (experiments 1–2), scene creation (experiments 3–4), memory for visual features (experiment 5), and accuracy on speeded truth judgments (experiment 6). These effects support the idea that the meanings of more and most are mental representations that provide detailed instructions to conceptual systems.
Japanese children's knowledge of the locality of "zibun" and "kare"
Initial errors in the acquisition of the Japanese local- or long-distance anaphor "zibun."
Although the Japanese reflexive zibun can be bound both locally and across clause boundaries, the third-person pronoun kare cannot take a local antecedent. These are properties that children need to learn about their language, but we show that direct evidence for the binding possibilities of zibun is sparse, and evidence for those of kare is absent, in speech to children, leading us to ask what children know. We show that children, unlike adults, incorrectly reject the long-distance antecedent for zibun, and that while they can access this antecedent for the pronoun kare, they consistently reject the local antecedent for that pronoun. These results suggest that children's lack of matrix readings for zibun is due not to their understanding of discourse context but to properties of their linguistic knowledge.
Chain reduction via substitution: Evidence from Mayan
Extraction out of adjuncts in K'ichean languages shows that "overt traces" are possible.
We argue that deletion is not the only way that chain links created by A′-movement can be affected at PF. Chain links can also be substituted by a morpheme. This substitution delivers a linearizable output (in a manner parallel to deletion), creating overt “traces” on the surface. We demonstrate the virtues of our proposal through the empirical lens of adjunct extraction in two Mayan languages of the K’ichean branch: K’iche’ and Kaqchikel. In these languages, extraction of low adjuncts triggers the appearance of a verbal enclitic wi. The distribution of the enclitic under long-distance extraction shows that it must be analyzed as a surface reflex of substitution of a chain link. Our proposal provides evidence that movement proceeds successive-cyclically, and it has two additional theoretical consequences: (i) C0 must be a phase head (contra den Dikken 2009; 2017), and (ii) v0 cannot be a phase head (in line with Keine 2017).
Optional agreement in Santiago Tz'utujil (Mayan) is syntactic
Agreement is optional only for complements, and is conditioned by whether the argument is a DP or a reduced nominal.
Some Mayan languages display optional verbal agreement with 3pl arguments (Dayley 1985; Henderson 2009; England 2011). Focusing on novel data from Santiago Tz’utujil (ST), we demonstrate that this optionality is not reducible to phonological or morphological factors. Rather, the source of optionality is in the syntax. Specifically, the distinction between arguments generated in the specifier position and arguments generated in the complement position governs the pattern. Only base-complements control agreement optionally; base-specifiers control agreement obligatorily. We provide an analysis in which optional agreement results from the availability of two syntactic representations (DP vs. reduced nominal argument). Thus, while the syntactic operation Agree is deterministic, surface optionality arises when the operation targets two different-sized goals.
Proxy Control: A new species of control in grammar
In German and Italian, 'Maria asked Bill to leave early' may be used to mean that Maria sought permission for people she represents. Aaron and Sandhya provide an analysis.
The control dependency in grammar is conventionally distinguished into two classes: exhaustive (i→i) and non-exhaustive (i→i + (j)). Here, we show that, in languages like German and Italian, some speakers allow a new kind of “proxy control” which differs from both, such that, for a controller i and a controllee j, j = proxy(i). The proxy function picks out a set of individuals that is discourse-pragmatically related to i. For such speakers, the German/Italian proxy control equivalent of the sentence “Maria_i asked Bill_j (for permission) [PRO_proxy(i) to leave work early]” would thus mean that Maria asked Bill for permission for some salient set of individuals related to herself to leave early. We examine the theoretical and empirical properties of this new control relation in detail, showing that it is irreducible to other, more familiar referential dependencies. Using standard empirical diagnostics, we then illustrate that proxy control can be instantiated both as a species of obligatory control (OC) and non-obligatory control (NOC) in German and Italian and develop a syntactic and semantic model that derives each and details the factors conditioning the choice between the two. We also investigate the factors that condition different degrees of exhaustiveness (exhaustive vs. partial vs. proxy) in control, which in turn sheds light on why proxy control obtains in some languages but not others and, within a language, is possible for some speakers but not others.
Processing adjunct control: Evidence on the use of structural information and prediction in reference resolution
How does online comprehension of adjunct control ("before eating") compare to resolution of pronominal anaphora ("before he ate")?
The comprehension of anaphoric relations may be guided not only by discourse information but also by syntactic information. The online processing literature, however, has focused on audible pronouns and descriptions whose reference is resolved mainly via discourse. This paper examines a relation that both lacks overt exponence and relies almost exclusively on syntax for its resolution: adjunct control, the dependency between the null subject of a non-finite adjunct and its antecedent in sentences such as Mickey talked to Minnie before ___ eating. Using visual-world eyetracking, we compare the timecourse of interpreting this null subject with that of overt pronouns (Mickey talked to Minnie before he ate). We show that when control structures are highly frequent, listeners resolve reference just as quickly in either case. When control structures are less frequent, reference resolution based on structural information still occurs upon hearing the non-finite verb, but more slowly, especially when unaided by structural and referential predictions. This may reflect increased difficulty in recognizing that a referential dependency is necessary. These results indicate that, in at least some contexts, referential expressions whose resolution depends on very different sources of information can be resolved at comparable speeds, and that the speed of interpretation is largely independent of whether the dependency is cued by an overt referring expression.
Events in Semantics
Event Semantics says that clauses in natural languages are descriptions of events. Why believe this?
Event Semantics (ES) says that clauses in natural languages are descriptions of events. Why believe this? The answer cannot be that we use clauses to talk about events, or that events are important in ontology or psychology. Other sorts of things have the same properties, but no special role in semantics. The answer must be that this view helps to explain the semantics of natural languages. But then, what is it to explain the semantics of natural languages? Here there are many approaches, differing on whether natural languages are social and objective or individual and mental; whether the semantics delivers truth values at contexts or just constraints on truth-evaluable thoughts; which inferences it should explain as formally provable, if any; and which if any grammatical patterns it should explain directly. The argument for ES will differ accordingly, as will the consequences, for ontology, psychology, or linguistics, of its endorsement. In this chapter I trace the outlines of this story, sketching four distinct arguments for the analysis that ES makes possible: with it we can treat a dependent phrase and its syntactic host as separate predicates of related or identical events. Analysis of this kind allows us to state certain grammatical generalizations, formalize patterns of entailment, provide an extensional semantics for adverbs, and most importantly to derive certain sentence meanings that are not easily derived otherwise. But in addition, it will systematically validate inferences that are unsound, if we think conventionally about events and semantics. The moral is, with ES we cannot maintain both an ordinary metaphysics and a truth-conditional semantics that is simple. Those who would accept ES, and draw conclusions about the world or how we view it, must therefore choose which concession to make. I discuss four notable choices.
Transparency and language contact: The case of Haitian Creole, French, and Fongbe
Haitian Creole supports the hypothesis that language contact leads to more transparent relations between meaning and form.
When communicating, speakers map meaning onto form. It might thus seem natural for languages to show a one-to-one correspondence between meaning and form, but this is often not the case. This perfect mapping, i.e. transparency, is continuously violated in natural languages, giving rise to opaque zero-to-one, one-to-many, and many-to-one correspondences between meaning and form. Transparency is, however, a mutable feature, one that can be influenced by language contact: in contact situations, languages tend to lose some of their opaque features and become more transparent. This study investigates transparency in a very specific contact situation, namely that of a creole, Haitian Creole, and its substrate and superstrate languages, Fongbe and French, within the Functional Discourse Grammar framework. We predict Haitian Creole to be more transparent than French and Fongbe, and we investigate twenty opacity features divided into four categories: Redundancy (one-to-many), Fusion (many-to-one), Discontinuity (one meaning split across two or more forms), and Form-based Form (forms with no semantic counterpart: zero-to-one). The results bear out our prediction: Haitian Creole presents only five of the twenty opacity features, while French presents nineteen and Fongbe nine. Furthermore, the opacity features of Haitian Creole are also present in the other two languages.
There is a simplicity bias when generalising from ambiguous data
How do phonological learners choose among generalizations of differing complexity?
How exactly do learners generalize in the face of ambiguous data? While there has been a substantial amount of research studying the biases that learners employ, there has been very little work on what sorts of biases are employed in the face of data that is ambiguous between phonological generalizations with different degrees of complexity. In this article, we present the results from three artificial language learning experiments that suggest that, at least for phonotactic sequence patterns, learners are able to keep track of multiple generalizations related to the same segmental co-occurrences; however, the generalizations they learn are only the simplest ones consistent with the data.