
Research

Research at our top-ranked department spans syntax, semantics, phonology, language acquisition, computational linguistics, psycholinguistics and neurolinguistics. 

Connections between our core competencies are strong, with theoretical, experimental and computational work typically pursued in tandem.

A network of collaboration at all levels sustains a research climate that is both vigorous and friendly. Here new ideas develop in conversation, stimulated by the steady activity of our labs and research groups, frequent student meetings with faculty, regular talks by local and invited scholars and collaborations with the broader University of Maryland language science community, the largest and most integrated language science research community in North America.


English-speaking preschoolers can use phrasal prosody for syntactic parsing

Phrasal prosody can provide evidence of syntactic boundaries. UMD visitor Alex de Carvalho tested preschoolers' ability to use such evidence.

Linguistics

Non-ARHU Contributor(s): Alex de Carvalho, Lyn Tieu, Anne Christophe
This study tested American preschoolers’ ability to use phrasal prosody to constrain their syntactic analysis of locally ambiguous sentences containing noun/verb homophones (e.g., [The baby flies] [hide in the shadows] vs. [The baby] [flies his kite], where brackets indicate prosodic boundaries). The words following the homophone were masked, such that prosodic cues were the only disambiguating information. In an oral completion task, 4- to 5-year-olds successfully exploited the sentence’s prosodic structure to assign the appropriate syntactic category to the target word, mirroring previous results in French (but challenging previous English-language results) and providing cross-linguistic evidence for the role of phrasal prosody in children’s syntactic analysis.


Modeling statistical insensitivity: Sources of suboptimal behavior

Children acquiring languages with noun classes (grammatical gender) do not use the available statistical information in an optimal, Bayesian way. Why not?

Linguistics

Non-ARHU Contributor(s): Annie Gagliardi
Children acquiring languages with noun classes (grammatical gender) have ample statistical information available that characterizes the distribution of nouns into these classes, but their use of this information to classify novel nouns differs from the predictions made by an optimal Bayesian classifier. We use rational analysis to investigate the hypothesis that children are classifying nouns optimally with respect to a distribution that does not match the surface distribution of statistical features in their input. We propose three ways in which children's apparent statistical insensitivity might arise, and find that all three provide ways to account for the difference between children's behavior and the optimal classifier. A fourth model combines two of these proposals and finds that children's insensitivity is best modeled as a bias to ignore certain features during classification, rather than an inability to encode those features during learning. These results provide insight into children's developing knowledge of noun classes and highlight the complex ways in which statistical information from the input interacts with children's learning processes.
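The paper's central contrast, between an optimal Bayesian classifier and a learner biased to ignore certain features, can be sketched with a toy naive Bayes model. This is a minimal illustration only: the feature names (`ending`, `animate`), the example data, and the two noun classes are invented for the sketch, not the study's actual materials or models.

```python
from collections import defaultdict

def train(examples):
    """Count class frequencies and per-class feature-value frequencies
    from (features, noun_class) pairs."""
    class_counts = defaultdict(int)
    feat_counts = defaultdict(lambda: defaultdict(int))
    for feats, cls in examples:
        class_counts[cls] += 1
        for name, value in feats.items():
            feat_counts[(cls, name)][value] += 1
    return class_counts, feat_counts

def classify(feats, class_counts, feat_counts, ignore=()):
    """Naive Bayes classification of a novel noun. Features named in
    `ignore` are skipped, modeling a learner biased to disregard them."""
    total = sum(class_counts.values())
    best_cls, best_p = None, -1.0
    for cls, n in class_counts.items():
        p = n / total  # prior P(class)
        for name, value in feats.items():
            if name in ignore:
                continue  # the biased learner never consults this cue
            counts = feat_counts[(cls, name)]
            # add-one smoothed P(value | class)
            p *= (counts.get(value, 0) + 1) / (n + len(counts) + 1)
        if p > best_p:
            best_cls, best_p = cls, p
    return best_cls
```

Given data in which a noun's ending and its animacy pull toward different classes, the full model and the feature-ignoring model classify the same novel noun differently; this is the shape of the comparison the paper draws between children's behavior and the optimal classifier.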


Deconstructing Ergativity: Two Types of Ergative Languages and Their Features

Maria Polinsky identifies two kinds of ergative languages: those where ergative subjects are prepositional phrases, and those where they are determiner phrases. She illustrates using her fieldwork on Tongan and Tsez.

Linguistics

Contributor(s): Maria Polinsky
Publisher: Oxford University Press
Nominative-accusative and ergative are two common alignment types found across languages. In the former type, the subject of an intransitive verb and the subject of a transitive verb are expressed the same way, and differently from the object of a transitive. In ergative languages, the subject of an intransitive and the object of a transitive appear in the same form, the absolutive, and the transitive subject has a special, ergative, form. Ergative languages often follow very different patterns, thus evading a uniform description and analysis. A simple explanation is that ergative languages, much like their nominative-accusative counterparts, do not form a uniform class. In this book, Maria Polinsky argues that ergative languages instantiate two main types: one where the ergative subject is a prepositional phrase (PP-ergatives), and one where it is a determiner phrase (DP-ergatives). Each type is internally consistent and characterized by a set of well-defined properties. The book begins with an analysis of syntactic ergativity, which, as Polinsky argues, is a manifestation of the PP-ergative type. Polinsky discusses diagnostic properties that define PPs in general, and then goes on to show that a subset of ergative expressions fit the profile of PPs. Several alternative analyses have been proposed to account for syntactic ergativity; the book outlines these analyses and offers further considerations in support of the PP-ergativity approach. The book then discusses the second type, DP-ergative languages, and traces the diachronic connection between the two types. The book includes two chapters illustrating paradigm PP-ergative and DP-ergative languages: Tongan and Tsez. The data used in these descriptions come from Polinsky's original fieldwork, presenting new empirical facts from both languages.

Successive-cyclic case assignment: Korean nominative-nominative case-stacking

Theodore Levin uses nominative case-stacking in Korean to argue for the Dependent Case model of case assignment.

Linguistics

Non-ARHU Contributor(s): Theodore Levin
In recent literature, a debate has arisen between two theories of the calculation and realization of morphological case. The more commonly held Agree model states that all case features are assigned to nominals by nearby functional heads. Given a designated case-assigning functional head F, and a nominal α that is c-commanded by F, the case-marking associated with F will be assigned to α (Chomsky 2000, 2001). An alternative view, the Dependent Case model, holds that case is assigned to nominals given their structural relationship to one another. The case a nominal bears is dependent on the presence of other nominals within a defined domain (e.g. Yip et al. 1987; Marantz 1991; Bittner and Hale 1996). In this paper, I bring the phenomenon of Korean nominative-nominative case-stacking to bear on the current debate over Agree versus Dependent Case models. I argue that nominative-nominative stacking is incompatible with an Agree model of case-assignment. However, an emended Dependent Case model is well-suited to capture nominative-nominative case-stacking.


Prediction as memory retrieval: Timing and mechanisms

We can use the meaning of an NP to predict an upcoming verb sooner than we can use its grammatical relation. Why? Perhaps our memory for events does not represent them in terms of the participant relations they have to specific kinds of objects.

Linguistics

Non-ARHU Contributor(s): Wing Yee Chow, Shota Momma, Cybelle Smith
In our target article, “A ‘bag-of-arguments’ mechanism for initial verb predictions” (Chow, Smith, Lau, & Phillips, 2015), we investigated the predictions that comprehenders initially make about an upcoming verb as they read and provided evidence that they are sensitive to the arguments’ lexical meaning but not their structural roles. Here we synthesise findings from our work with other studies that show that verb predictions are sensitive to the arguments’ roles if more time is available for prediction. We contend that prediction involves computations that may require differing amounts of time. Further, we argue that prediction can be usefully framed as a memory retrieval problem, linking prediction to independently well-understood memory mechanisms in language processing. We suggest that the delayed impact of argument roles on verb predictions may reflect a mismatch between the format of linguistic cues and target event memories. We clarify points of agreement and disagreement with the commentaries, and explain why memory access mechanisms can account for the time course of prediction.

NPI licensing and beyond: Children's knowledge of the semantics of "any"

Visitor Lyn Tieu and mentor Jeff Lidz investigate preschoolers' understanding of negative polarity items like "any".

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s): Lyn Tieu
This paper presents a study of preschool-aged children’s knowledge of the semantics of the negative polarity item (NPI) any. NPIs like any differ in distribution from non-polarity-sensitive indefinites like a: any is restricted to downward-entailing linguistic environments (Fauconnier 1975, 1979; Ladusaw 1979). But any also differs from plain indefinites in its semantic contribution; any can quantify over wider domains of quantification than plain indefinites. In fact, on certain accounts of NPI licensing, it is precisely the semantics of any that derives its restricted distribution (Kadmon & Landman 1993; Krifka 1995; Chierchia 2006, 2013). While previous acquisition studies have investigated children’s knowledge of the distributional constraints on any (O’Leary & Crain 1994; Thornton 1995; Xiang, Conroy, Lidz & Zukowski 2006; Tieu 2010), no previous study has targeted children’s knowledge of the semantics of the NPI. To address this gap in the existing literature, we present an experiment conducted with English-speaking adults and 4–5-year-old children, in which we compare their interpretation of sentences containing any with their interpretation of sentences containing the plain indefinite a and the bare plural. When presented with multiple domain alternatives, one of which was made more salient than the others, both adults and children restricted the domain of quantification for the plain indefinites to the salient subdomain. In the case of any, however, the adults and most of the children that we tested interpreted any as quantifying over the largest domain in the context. We discuss our findings in light of theories of NPI licensing that posit a connection between the distribution of NPIs and their underlying semantics, and conclude by raising further questions about the learnability of NPIs.


Discontinuous Development in the Acquisition of Filler-Gap Dependencies: Evidence from 15- and 20-Month-Olds

15-month-olds are able to understand relative clauses and wh-questions, but not by way of correctly representing their grammar.

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s): Annie Gagliardi, Tara M. Mease
This article investigates infant comprehension of filler-gap dependencies. Three experiments probe 15- and 20-month-olds’ comprehension of two filler-gap dependencies: wh-questions and relative clauses. Experiment 1 shows that both age groups appear to comprehend wh-questions. Experiment 2 shows that only the younger infants appear to comprehend relative clauses, while Experiment 3 shows that when parsing demands are reduced, older children can comprehend them as well. We argue that this discontinuous pattern follows from an offset in the development of grammatical knowledge and the deployment mechanisms for using that knowledge in real time. Fifteen-month-olds, we argue, lack the grammatical representation of filler-gap dependencies but are able to achieve correct performance in the task by using argument structure information. Twenty-month-olds do represent filler-gap dependencies but are inefficient in deploying those representations in real time.


‘Syntactic perturbation’ during production activates the right IFG, but not Broca’s area or the ATL

A study from postdoc William Matchin suggesting, contra much previous work, that Broca’s area and the anterior temporal lobe may not play a central role in syntactic processing.

Linguistics

Non-ARHU Contributor(s): William Matchin, Gregory Hickok
Research on the neural organization of syntax – the core structure-building component of language – has focused on Broca’s area and the anterior temporal lobe (ATL) as the chief candidates for syntactic processing. However, these proposals have received considerable challenges. In order to better understand the neural basis of syntactic processing, we performed a functional magnetic resonance imaging experiment using a constrained sentence production task. We examined the BOLD response to sentence production for active and passive sentences, unstructured word lists, and syntactic perturbation. Perturbation involved cued restructuring of the planned syntax of a sentence mid utterance. Perturbation was designed to capture the effects of syntactic violations previously studied in sentence comprehension. Our experiment showed that Broca’s area and the ATL did not exhibit response profiles consistent with syntactic operations – we found no increase of activation in these areas for sentences > lists or for perturbation. Syntactic perturbation activated a cortical-subcortical network including robust activation of the right inferior frontal gyrus (RIFG). This network is similar to one previously shown to be involved in motor response inhibition. We hypothesize that RIFG activation in our study and in previous studies of sentence comprehension is due to an inhibition mechanism that may facilitate efficient syntactic restructuring.


Endogenous sources of variation in language acquisition

Jeff Lidz and collaborators investigate inter-speaker variation in the grammar of quantifier scope in Korean.

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s): Chung-hye Han, Julien Musolino
A fundamental question in the study of human language acquisition centers around apportioning explanatory force between the experience of the learner and the core knowledge that allows learners to represent that experience. We provide a previously unidentified kind of data identifying children’s contribution to language acquisition. We identify one aspect of grammar that varies unpredictably across a population of speakers of what is ostensibly a single language. We further demonstrate that the grammatical knowledge of parents and their children is independent. The combination of unpredictable variation and parent–child independence suggests that the relevant structural feature is supplied by each learner independent of experience with the language. This structural feature is abstract because it controls variation in more than one construction. The particular case we examine is the position of the verb in the clause structure of Korean. Because Korean is a head-final language, evidence for the syntactic position of the verb is both rare and indirect. We show that (i) Korean speakers exhibit substantial variability regarding this aspect of the grammar, (ii) this variability is attested between speakers but not within a speaker, (iii) this variability controls interpretation in two surface constructions, and (iv) it is independent in parents and children. According to our findings, when the exposure language is compatible with multiple grammars, learners acquire a single systematic grammar. Our observation that children and their parents vary independently suggests that the choice of grammar is driven in part by a process operating internal to individual learners.


Syntactic and lexical inference in the acquisition of novel superlatives

Even four-year-olds are biased to think that determiners express relations between quantities, but lack the same bias for adjectives. How do they arrive at this bias?

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s): Alexis Wellwood, Annie Gagliardi
Acquiring the correct meanings of words expressing quantities (seven, most) and qualities (red, spotty) presents a challenge to learners. Understanding how children succeed at this requires understanding not only what kinds of data are available to them, but also the biases and expectations they bring to the learning task. The results of our word-learning task with 4-year-olds indicate that a “syntactic bootstrapping” hypothesis correctly predicts a bias toward quantity-based interpretations when a novel word appears in the syntactic position of a determiner, but leaves open the explanation of a bias towards quality-based interpretations when the same word is presented in the syntactic position of an adjective. We develop four computational models that differentially encode how lexical, conceptual, and perceptual factors could generate the latter bias. Simulation results suggest it results from a combination of lexical bias and perceptual encoding.
