Research

Research at our top-ranked department spans syntax, semantics, phonology, language acquisition, computational linguistics, psycholinguistics and neurolinguistics. 

Connections between our core competencies are strong, with theoretical, experimental and computational work typically pursued in tandem.

A network of collaboration at all levels sustains a research climate that is both vigorous and friendly. Here new ideas develop in conversation, stimulated by the steady activity of our labs and research groups, frequent student meetings with faculty, regular talks by local and invited scholars, and collaborations with the broader University of Maryland language science community, the largest and most integrated language science research community in North America.

Syntactic category constrains lexical access in speaking

When we choose which word to speak, do nouns and verbs compete when they express similar concepts? New evidence says no: syntactic category plays a key role in limiting lexical access.

Linguistics

Contributor(s): Colin Phillips
Non-ARHU Contributor(s): Shota Momma (*16), Julia Buffinton, Bob Slevc

We report two experiments that suggest that syntactic category plays a key role in limiting competition in lexical access in speaking. We introduce a novel sentence-picture interference (SPI) paradigm, and we show that nouns (e.g., running as a noun) do not compete with verbs (e.g., walking as a verb) and verbs do not compete with nouns in sentence production, regardless of their conceptual similarity. Based on this finding, we argue that lexical competition in production is limited by syntactic category. We also suggest that even complex words containing category-changing derivational morphology can be stored and accessed together with their final syntactic category information. We discuss the potential underlying mechanism and how it may enable us to speak relatively fluently.


Modeling the learning of the Person Case Constraint

Adam Liter and Naomi Feldman show that the Person Case Constraint can be learned on the basis of significantly less data, if the constraint is represented in terms of feature bundles.

Linguistics

Contributor(s): Adam Liter, Naomi Feldman

Many domains of linguistic research posit feature bundles as an explanation for various phenomena. Such hypotheses are often evaluated on their simplicity (or parsimony). We take a complementary approach. Specifically, we evaluate different hypotheses about the representation of person features in syntax on the basis of their implications for learning the Person Case Constraint (PCC). The PCC refers to a phenomenon where certain combinations of clitics (pronominal bound morphemes) are disallowed with ditransitive verbs. We compare a simple theory of the PCC, where person features are represented as atomic units, to a feature-based theory of the PCC, where person features are represented as feature bundles. We use Bayesian modeling to compare these theories, using data based on realistic proportions of clitic combinations from child-directed speech. We find that both theories can learn the target grammar given enough data, but that the feature-based theory requires significantly less data, suggesting that developmental trajectories could provide insight into syntactic representations in this domain.

Hope for syntactic bootstrapping

Some mental state verbs take a finite clause as their object, while others take an infinitive, and the two groups differ reliably in meaning. Remarkably, children can use this correlation to narrow down the meaning of an unfamiliar verb.

Linguistics

Contributor(s): Valentine Hacquard, Jeffrey Lidz
Non-ARHU Contributor(s): Kaitlyn Harrigan (*15)
Publisher: Language

We explore children’s use of syntactic distribution in the acquisition of attitude verbs, such as think, want, and hope. Because attitude verbs refer to concepts that are opaque to observation but have syntactic distributions predictive of semantic properties, we hypothesize that syntax may serve as an important cue to learning their meanings. Using a novel methodology, we replicate previous literature showing an asymmetry between acquisition of think and want, and we additionally demonstrate that interpretation of a less frequent attitude verb, hope, patterns with the type of syntactic complement. This supports the view that children treat syntactic frame as informative about an attitude verb’s meaning.


Morphology in Austronesian languages

Postdoc Ted Levin and Professor Maria Polinsky provide an overview of morphology in Austronesian languages.

Linguistics

Contributor(s): Maria Polinsky
Non-ARHU Contributor(s): Theodore Levin
This is an overview of the major morphological properties of Austronesian languages. We present and analyze data that may bear on the commonly discussed lexical-category neutrality of Austronesian and suggest that Austronesian languages do differentiate between core lexical categories. We address the difference between roots and stems, showing that Austronesian roots are more abstract than roots traditionally discussed in morphology. Austronesian derivation and inflection rely on suffixation and prefixation; some infixation is also attested. Austronesian languages make extensive use of reduplication. In the verbal system, the main morphological exponents mark voice distinctions as well as causatives and applicatives. In the nominal domain, the main morphological exponents include case markers, classifiers, and possession markers. Overall, verbal morphology is richer in Austronesian languages than nominal morphology. We also present a short overview of empirically and theoretically challenging issues in Austronesian morphology: the status of infixes and circumfixes, the difference between affixes and clitics, and the morphosyntactic characterization of voice morphology.


Epistemic "might": A non-epistemic analysis

What are called epistemic uses of "might" in fact express a relation, not to information or knowledge, as is routinely assumed, but to relevant circumstances.

Linguistics

Non-ARHU Contributor(s): Quinn Harr (*19)

A speaker of (1) implies that she is uncertain whether (2), making this use of might “epistemic.” On the received view, the implication is semantic, but in this dissertation I argue that this implication is no more semantic than is the implication that a speaker of (2) believes John to be contagious.

(1) John might be contagious.

(2) John is contagious.

This follows from a new observation: unlike claims with explicitly epistemic locutions, those made with “epistemic” uses of might can be explained only with reference to non-epistemic facts. I conclude that they express a relation, not to relevant information, but instead to relevant circumstances, and that uncertainty is implied only because of how informed speakers contribute to conversations. This conclusion dissolves old puzzles about disagreements and reported beliefs involving propositions expressed with might, puzzles that have been hard for the received view to accommodate. The cost of these advantages is the need to explain why the circumstantial modality expressed by might is not inherently oriented towards the future, as has been claimed for other circumstantial modalities. But this claim turns out to be false. The correct characterization of the temporal differences reveals that the modality expressed by might relates to propositions whereas other modalities relate to events. Neither sort is epistemic.


Filler-gap dependency comprehension at 15 months: The role of vocabulary

New evidence from preferential looking suggests that 15-month-olds can correctly understand wh-questions and relative clauses under certain experimental conditions, but perhaps only by noticing that a verb is missing an expected dependent.

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s): Laurel Perkins (*19)

15-month-olds behave as if they comprehend filler-gap dependencies such as wh-questions and relative clauses. On one hypothesis, this success does not reflect adult-like representations but rather a “gap-driven” interpretation heuristic based on verb knowledge. Infants who know that feed is transitive may notice that a predicted direct object is missing in Which monkey did the frog feed __? and then search the display for the animal that got fed. This gap-driven account predicts that 15-month-olds will perform accurately only if they know enough verbs to deploy this interpretation heuristic; therefore, performance should depend on vocabulary. We test this prediction in a preferential looking task and find corroborating evidence: Only 15-month-olds with higher vocabulary behave as if they comprehend wh-questions and relative clauses. This result reproduces the previous finding that 15-month-olds can identify the right answer for wh-questions and relative clauses under certain experimental contexts, and is moreover consistent with the gap-driven heuristic account for this behavior.


The agreement theta generalization

How does agreement between a head and a dependent relate to argument selection? Omer Preminger and Maria Polinsky observe a new restriction.

Linguistics

Contributor(s): Omer Preminger, Maria Polinsky

In this paper, we propose a new generalization concerning the structural relationship between a head that agrees with a DP in φ-features and the predicate that assigns the (first) thematic role to that DP: the Agreement Theta Generalization (ATG). According to the ATG, configurations where the thematic-role assigner is located in a higher clause than the agreeing head are categorically excluded. We present empirical evidence for the ATG, discuss its analytical import, and show that this generalization bears directly on the proper modeling of syntactic agreement, as well as the prospects for reducing other syntactic (and syntacto-semantic) dependencies to the same underlying mechanism.


First conjunct agreement in Polish: Evidence for a mono-clausal analysis

In Polish, a verb sometimes agrees with only the first member of a conjoined subject. Gesoel Mendes and visiting student Marta Ruda use verb-echo answers to show that this is not the result of a biclausal structure with ellipsis.

Linguistics

Author/Lead: Gesoel Mendes
Non-ARHU Contributor(s): Marta Ruda


Understanding heritage languages

Maria Polinsky joins UC Irvine’s Gregory Scontras to “synthesize pertinent empirical observations and theoretical claims about vulnerable and robust areas of heritage language competence into early steps toward a model of heritage-language grammar.”

Linguistics

Contributor(s): Maria Polinsky
Non-ARHU Contributor(s): Gregory Scontras
With a growing interest in heritage languages from researchers of bilingualism and linguistic theory, the field of heritage-language studies has begun to build on its empirical foundations, moving toward a deeper understanding of the nature of language competence under unbalanced bilingualism. In furtherance of this trend, the current work synthesizes pertinent empirical observations and theoretical claims about vulnerable and robust areas of heritage language competence into early steps toward a model of heritage-language grammar. We highlight two key triggers for deviation from the relevant baseline: the quantity and quality of the input from which the heritage grammar is acquired, and the economy of online resources when operating in a less dominant language. In response to these triggers, we identify three outcomes of deviation in the heritage grammar: an avoidance of ambiguity, a resistance to irregularity, and a shrinking of structure. While we are still a ways away from a level of understanding that allows us to predict those aspects of heritage grammar that will be robust and those that will deviate from the relevant baselines, our hope is that the current work will spur the continued development of a predictive model of heritage language competence.


Field stations for linguistic research: A blueprint of a sustainable model

Professor Polinsky describes the advantages of field stations for linguistic fieldwork, and the implementation of the UMD station in Guatemala.

Linguistics

Contributor(s): Maria Polinsky
There are often practical barriers to doing fieldwork in a novel, remote location. I propose a model for linguistic research designed to overcome such barriers: a linguistic field station. It is a centralized facility that coordinates scientific research by providing (i) research infrastructure, (ii) access to specific social, biological, or ecological systems that are not immediately available otherwise, (iii) training for students at the graduate and undergraduate levels, and (iv) access to local communities with the goal of obtaining data from them as well as training local specialists. Field stations are particularly important for research on and documentation of Indigenous languages, including contexts where colonial languages are supplanting Indigenous ones. Although the field station model is not new in research outside of language sciences, it has not yet been utilized widely in language research. I describe how the proposed model has been implemented in Guatemala and compare the field station there with other linguistic field stations.
