Research at our top-ranked department spans syntax, semantics, phonology, language acquisition, computational linguistics, psycholinguistics and neurolinguistics.
Connections between our core competencies are strong, with theoretical, experimental and computational work typically pursued in tandem.
A network of collaboration at all levels sustains a research climate that is both vigorous and friendly. Here, new ideas develop in conversation, stimulated by the steady activity of our labs and research groups, frequent student meetings with faculty, regular talks by local and invited scholars, and collaborations with the broader University of Maryland language science community, the largest and most integrated language science research community in North America.
Processing adjunct control: Evidence on the use of structural information and prediction in reference resolution
How does online comprehension of adjunct control ("before eating") compare to resolution of pronominal anaphora ("before he ate")?
The comprehension of anaphoric relations may be guided not only by discourse but also by syntactic information. In the literature on online processing, however, the focus has been on audible pronouns and descriptions, whose reference is resolved mainly by the former. This paper examines one relation that both lacks overt exponence and relies almost exclusively on syntax for its resolution: adjunct control, or the dependency between the null subject of a non-finite adjunct and its antecedent in sentences such as Mickey talked to Minnie before ___ eating. Using visual-world eyetracking, we compare the timecourse of interpreting this null subject and overt pronouns (Mickey talked to Minnie before he ate). We show that when control structures are highly frequent, listeners are just as quick to resolve reference in either case. When control structures are less frequent, reference resolution based on structural information still occurs upon hearing the non-finite verb, but more slowly, especially when unaided by structural and referential predictions. This may be due to increased difficulty in recognizing that a referential dependency is necessary. These results indicate that in at least some contexts, referential expressions whose resolution depends on very different sources of information can be resolved approximately equally rapidly, and that the speed of interpretation is largely independent of whether or not the dependency is cued by an overt referring expression.
Events in Semantics
Event Semantics says that clauses in natural languages are descriptions of events. Why believe this?
Event Semantics (ES) says that clauses in natural languages are descriptions of events. Why believe this? The answer cannot be that we use clauses to talk about events, or that events are important in ontology or psychology. Other sorts of things have the same properties, but no special role in semantics. The answer must be that this view helps to explain the semantics of natural languages. But then, what is it to explain the semantics of natural languages? Here there are many approaches, differing on whether natural languages are social and objective or individual and mental; whether the semantics delivers truth values at contexts or just constraints on truth-evaluable thoughts; which inferences it should explain as formally provable, if any; and which if any grammatical patterns it should explain directly. The argument for ES will differ accordingly, as will the consequences, for ontology, psychology, or linguistics, of its endorsement. In this chapter I trace the outlines of this story, sketching four distinct arguments for the analysis that ES makes possible: with it we can treat a dependent phrase and its syntactic host as separate predicates of related or identical events. Analysis of this kind allows us to state certain grammatical generalizations, formalize patterns of entailment, provide an extensional semantics for adverbs, and, most importantly, derive certain sentence meanings that are not easily derived otherwise. But in addition, it will systematically validate inferences that are unsound, if we think conventionally about events and semantics. The moral is that with ES we cannot maintain both an ordinary metaphysics and a simple truth-conditional semantics. Those who would accept ES, and draw conclusions about the world or how we view it, must therefore choose which concession to make. I discuss four notable choices.
Transparency and language contact: The case of Haitian Creole, French, and Fongbe
Haitian Creole supports the hypothesis that language contact leads to more transparent relations between meaning and form.
When communicating, speakers map meaning onto form. It would thus seem natural for languages to show a one-to-one correspondence between meaning and form, but this is often not the case. This perfect mapping, i.e. transparency, is continuously violated in natural languages, giving rise to zero-to-one, one-to-many, and many-to-one opaque correspondences between meaning and form. Transparency is, however, a mutating feature, which can be influenced by language contact. In this scenario languages tend to evolve and lose some of their opaque features, becoming more transparent. This study investigates transparency in a very specific contact situation, namely that of a creole, Haitian Creole, and its sub- and superstrate languages, Fongbe and French, within the Functional Discourse Grammar framework. We predict Haitian Creole to be more transparent than French and Fongbe and investigate twenty opacity features, divided into four categories, namely Redundancy (one-to-many), Fusion (many-to-one), Discontinuity (one meaning split across two or more forms), and Form-based Form (forms with no semantic counterpart: zero-to-one). The results bear out our prediction: Haitian Creole presents only five opacity features out of twenty, while French presents nineteen and Fongbe nine. Furthermore, the opacity features of Haitian Creole are also present in the other two languages.
There is a simplicity bias when generalising from ambiguous data
How do phonological learners choose among generalizations of differing complexity?
How exactly do learners generalize in the face of ambiguous data? While there has been a substantial amount of research studying the biases that learners employ, very little work has examined which biases operate when the data are ambiguous between phonological generalizations of differing complexity. In this article, we present the results from three artificial language learning experiments which suggest that, at least for phonotactic sequence patterns, learners are able to keep track of multiple generalizations related to the same segmental co-occurrences; however, the generalizations they learn are only the simplest ones consistent with the data.
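The simplicity bias described here can be sketched computationally. The following toy illustration is not the authors' model: the candidate constraints, their complexity scores, and the training forms are all hypothetical. It shows a learner that retains every generalization consistent with ambiguous data but generalizes according to the simplest one.

```python
# Toy sketch (hypothetical constraints and data, not the authors' model):
# a learner tracks all generalizations consistent with the input,
# then generalizes by the simplest one(s).

# Each candidate: (description, complexity score, predicate over a form)
candidates = [
    ("no word-final stops", 1,
     lambda w: w[-1] not in "ptk"),
    ("no word-final voiceless stops after nasals", 2,
     lambda w: not (len(w) >= 2 and w[-2] in "mn" and w[-1] in "ptk")),
]

# Ambiguous training data: every form satisfies BOTH candidates.
training_data = ["tana", "mapo", "sima"]

# Keep every generalization consistent with the data...
consistent = [c for c in candidates
              if all(c[2](w) for w in training_data)]

# ...but generalize according to the lowest-complexity one(s).
min_complexity = min(c[1] for c in consistent)
learned = [c[0] for c in consistent if c[1] == min_complexity]
print(learned)
```

Both candidates survive the consistency filter, mirroring the finding that learners track multiple generalizations, but only the simplest drives generalization to novel forms.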
Null Objects in Korean: Experimental Evidence for the Argument Ellipsis Analysis
Experimental evidence supports an analysis of Null Object constructions in Korean as instances of object ellipsis.
Null object (NO) constructions in Korean and Japanese have received different accounts: as (a) argument ellipsis (Oku 1998, S. Kim 1999, Saito 2007, Sakamoto 2015), (b) VP-ellipsis after verb raising (Otani and Whitman 1991, Funakoshi 2016), or (c) instances of base-generated pro (Park 1997, Hoji 1998, 2003). We report results from two experiments supporting the argument ellipsis analysis for Korean. Experiment 1 builds on K.-M. Kim and Han’s (2016) finding of interspeaker variation in whether the pronoun ku can be bound by a quantifier. Results showed that a speaker’s acceptance of quantifier-bound ku positively correlates with acceptance of sloppy readings in NO sentences. We argue that an ellipsis account, in which the NO site contains internal structure hosting the pronoun, accounts for this correlation. Experiment 2, testing the recovery of adverbials in NO sentences, showed that only the object (not the adverb) can be recovered in the NO site, excluding the possibility of VP-ellipsis. Taken together, our findings suggest that NOs result from argument ellipsis in Korean.
The structure of Polish numerically-quantified expressions
What is the syntax of "five witches" in Polish, with genitive on "witches", accusative on "five", and third-singular-neuter agreement on a verb? Paulina Lyskawa gives a new answer that manages to preserve ordinary theories of case and agreement.
Headedness and the Lexicon: The Case of Verb-to-Noun Ratios
Is there a correlation between the relative size of a lexical class, such as verbs in relation to nouns, and whether members of that class precede or follow a dependent in phrases they head? This paper finds that there is.
Enough time to get results? An ERP investigation of prediction with complex events
How quickly can verb-argument relations be computed to impact predictions of a subsequent argument? This paper examines the question by comparing two kinds of compound verbs in Mandarin, and neural responses to the following direct object.
How quickly can verb-argument relations be computed to impact predictions of a subsequent argument? We take advantage of the substantial differences in verb-argument structure provided by Mandarin, whose compound verbs encode complex event relations, such as resultatives (kid bit-broke lip: 'the kid bit his lip such that it broke') and coordinates (store owner hit-scolded employee 'the store owner hit and scolded an employee'). We tested sentences in which the object noun could be predicted on the basis of the preceding compound verb, and used N400 responses to the noun to index successful prediction. By varying the delay between verb and noun, we show that prediction is delayed in the resultative context (broken-BY-biting) relative to the coordinate one (hitting-AND-scolding). These results present a first step towards temporally dissociating the fine-grained subcomputations required to parse and interpret verb-argument relations.
Syntactic category constrains lexical access in speaking
When we choose which word to speak, do nouns and verbs compete when they express similar concepts? New evidence says No: syntactic category plays a key role in limiting lexical access.
We report two experiments that suggest that syntactic category plays a key role in limiting competition in lexical access in speaking. We introduce a novel sentence-picture interference (SPI) paradigm, and we show that nouns (e.g., running as a noun) do not compete with verbs (e.g., walking as a verb) and verbs do not compete with nouns in sentence production, regardless of their conceptual similarity. Based on this finding, we argue that lexical competition in production is limited by syntactic category. We also suggest that even complex words containing category-changing derivational morphology can be stored and accessed together with their final syntactic category information. We discuss the potential underlying mechanism and how it may enable us to speak relatively fluently.
Modeling the learning of the Person Case Constraint
Adam Liter and Naomi Feldman show that the Person Case Constraint can be learned on the basis of significantly less data, if the constraint is represented in terms of feature bundles.
Many domains of linguistic research posit feature bundles as an explanation for various phenomena. Such hypotheses are often evaluated on their simplicity (or parsimony). We take a complementary approach. Specifically, we evaluate different hypotheses about the representation of person features in syntax on the basis of their implications for learning the Person Case Constraint (PCC). The PCC refers to a phenomenon where certain combinations of clitics (pronominal bound morphemes) are disallowed with ditransitive verbs. We compare a simple theory of the PCC, where person features are represented as atomic units, to a feature-based theory of the PCC, where person features are represented as feature bundles. We use Bayesian modeling to compare these theories, using data based on realistic proportions of clitic combinations from child-directed speech. We find that both theories can learn the target grammar given enough data, but that the feature-based theory requires significantly less data, suggesting that developmental trajectories could provide insight into syntactic representations in this domain.
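The logic of the comparison can be sketched with a toy Bayesian learner. This is not Liter and Feldman's model: the clitic-combination encoding, the two hypothesis spaces, and the size-principle likelihood below are illustrative assumptions. The point it demonstrates is structural: a feature-based hypothesis space is far smaller than an atomic one, so the same amount of data concentrates the posterior on the target grammar faster.

```python
# Toy sketch (illustrative assumptions, not the authors' model):
# comparing an atomic vs. a feature-based hypothesis space for
# learning which (indirect, direct) clitic person combinations are licit.
import itertools
import random

persons = [1, 2, 3]
all_combos = [(io, do) for io in persons for do in persons]
# Hypothetical target: the direct object must be 3rd person.
target = {c for c in all_combos if c[1] == 3}

# Atomic theory: any non-empty subset of combinations is a possible grammar
# (511 grammars).
atomic_space = [set(s) for r in range(1, len(all_combos) + 1)
                for s in itertools.combinations(all_combos, r)]

# Feature-based theory: only grammars statable as feature constraints
# (a hypothetical 4-grammar space).
feature_space = [
    set(all_combos),                                    # no constraint
    {c for c in all_combos if c[1] == 3},               # DO is 3rd person
    {c for c in all_combos if c[0] == 3},               # IO is 3rd person
    {c for c in all_combos if c[0] == 3 and c[1] == 3}, # both 3rd person
]

def posterior_of_target(space, data):
    """Uniform prior; size-principle likelihood: each datum is drawn
    uniformly from the grammar's licit set (0 if the datum is illicit)."""
    scores = [(g, (1 / len(g)) ** len(data) if all(d in g for d in data)
               else 0.0) for g in space]
    z = sum(s for _, s in scores)
    return sum(s for g, s in scores if g == target) / z

random.seed(0)
data = [random.choice(sorted(target)) for _ in range(5)]  # 5 licit observations
p_feature = posterior_of_target(feature_space, data)
p_atomic = posterior_of_target(atomic_space, data)
print(p_feature, p_atomic)
```

Because every grammar in the feature-based space also exists in the atomic space, but the atomic space adds hundreds of competitors that soak up posterior mass, the feature-based learner assigns the target grammar a higher posterior from the same evidence; this is the sense in which feature bundles reduce the data required for learning.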