
Research

Research at our top-ranked department spans syntax, semantics, phonology, language acquisition, computational linguistics, psycholinguistics and neurolinguistics. 

Connections between our core competencies are strong, with theoretical, experimental and computational work typically pursued in tandem.

A network of collaboration at all levels sustains a research climate that is both vigorous and friendly. Here new ideas develop in conversation, stimulated by the steady activity of our labs and research groups, frequent student meetings with faculty, regular talks by local and invited scholars, and collaborations with the broader University of Maryland language science community, the largest and most integrated language science research community in North America.


The role of incremental parsing in syntactically conditioned word learning

"The girl is tapping with the tig." If you don't know what "tig" means, you'll look to what the girl is using to tap. And so will even very young children.

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s): Aaron Steven White, Rebecca Baier
In a series of three experiments, we use children’s noun learning as a probe into their syntactic knowledge as well as their ability to deploy this knowledge, investigating how the predictions children make about upcoming syntactic structure change as their knowledge changes. In the first two experiments, we show that children display a developmental change in their ability to use a noun’s syntactic environment as a cue to its meaning. We argue that this pattern arises from children’s reliance on their knowledge of verbs’ subcategorization frame frequencies to guide parsing, coupled with an inability to revise incremental parsing decisions. We show that this analysis is consistent with the syntactic distributions in child-directed speech. In the third experiment, we show that the change arises from predictions based on verbs’ subcategorization frame frequencies.

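To make the proposed mechanism concrete, here is a minimal sketch, not the authors' model: an incremental parser that commits to a verb's most frequent subcategorization frame and cannot revise. The verb, frame labels, and counts below are hypothetical stand-ins for child-directed speech statistics.

    # Illustrative sketch (not the paper's model): a frame-frequency-driven
    # incremental parser that commits at the verb and cannot revise.
    # Counts are hypothetical stand-ins for child-directed speech statistics.
    FRAME_COUNTS = {
        "tap": {"NP V": 10, "NP V NP": 55, "NP V PP": 35},
    }

    def predicted_frame(verb):
        """Commit to the verb's most frequent subcategorization frame."""
        counts = FRAME_COUNTS[verb]
        return max(counts, key=counts.get)

    def interpret(verb, actual_frame):
        guess = predicted_frame(verb)
        if guess == actual_frame:
            return "novel noun mapped using the syntactic cue"
        # No revision: the early commitment stands, so the cue goes unused.
        return "garden-pathed: syntactic cue unavailable"

    # "The girl is tapping with the tig" instantiates an NP V PP frame,
    # but a transitive-biased early commitment blocks use of that cue.
    print(interpret("tap", "NP V PP"))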

Looking forwards and backwards: The real-time processing of Strong and Weak Crossover

Dave, Jeff and Colin show that we can make rapid use of Principle C and c-command information to constrain retrieval of antecedents in online interpretation of pronouns.

Linguistics

Non-ARHU Contributor(s): Dave Kush
We investigated the processing of pronouns in Strong and Weak Crossover constructions as a means of probing the extent to which the incremental parser can use syntactic information to guide antecedent retrieval. In Experiment 1 we show that the parser accesses a displaced wh-phrase as an antecedent for a pronoun when no grammatical constraints prohibit binding, but ignores the same wh-phrase when it stands in a Strong Crossover relation to the pronoun. These results are consistent with two possibilities. First, the parser could apply Principle C at antecedent retrieval to exclude the wh-phrase on the basis of the c-command relation between its gap and the pronoun. Alternatively, retrieval might ignore any phrases that do not occupy an Argument position. Experiment 2 distinguished between these two possibilities by testing antecedent retrieval under Weak Crossover. In Weak Crossover, binding of the pronoun is ruled out by the argument condition, but not by Principle C. The results of Experiment 2 indicate that antecedent retrieval accesses matching wh-phrases in Weak Crossover configurations. On the basis of these findings we conclude that the parser can make rapid use of Principle C and c-command information to constrain retrieval. We discuss how our results support a view of antecedent retrieval that integrates inferences made over unseen syntactic structure into constraints on backward-looking processes like memory retrieval.


Split ergativity is not about ergativity

Split ergativity is an epiphenomenon, argue Jessica Coon and Omer Preminger.

Linguistics

Contributor(s): Omer Preminger
Non-ARHU Contributor(s): Jessica Coon
Publisher: Oxford University Press
This chapter argues that split ergativity is epiphenomenal, and that the factors which trigger its appearance are not limited to ergative systems in the first place. In both aspectual and person splits, the split is the result of a bifurcation of the clause into two distinct case/agreement domains, which renders the clause structurally intransitive. Since intransitive subjects do not appear with ergative marking, this straightforwardly accounts for the absence of ergative morphology. Crucially, such bifurcation is not specific to ergative languages; it is simply obfuscated in nominative-accusative environments because there, by definition, transitive and intransitive subjects pattern alike. The account also derives the universal directionality of splits, by linking the structure that is added to independent facts: the use of locative constructions in nonperfective aspects (Bybee et al. 1994, Laka 2006, Coon 2013), and the requirement that 1st/2nd person arguments be structurally licensed (Bejar & Rezac 2003, Baker 2008, 2011, Preminger 2011, 2014).

Antipassive

A handbook chapter on antipassive constructions: intransitive clauses in which an oblique dependent corresponds to the direct object of a transitive clause with the same verb.

Linguistics

Contributor(s): Maria Polinsky
Publisher: Oxford University Press

Ergativity and Austronesian-type voice systems

Austronesian voice systems are not the result of syntactic ergativity.

Linguistics

Non-ARHU Contributor(s): Michael Yoshitaka Erlewine, Theodore Levin, Coppe van Urk
Publisher: Oxford University Press

Experimental approaches to ergative languages

A summary of major results in experimental work on ergative syntax, focusing on competition with accusative syntax, and the effects of ergativity on the processing of long-distance dependencies.

Linguistics

Contributor(s): Maria Polinsky
Non-ARHU Contributor(s): Nicholas Longenbaugh
Publisher: Oxford University Press

How can feature-sharing be asymmetric? Valuation as UNION over geometric feature structures

Valuation of features is neither overwriting nor sharing, but instead "union" of geometric structures.

Linguistics

Contributor(s): Omer Preminger
Publisher: MITWPL
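As a rough illustration of the slogan, and nothing more: model feature geometries as sets of nodes and valuation as set union. The person-geometry nodes below are a textbook inventory, not the paper's own notation.

    # Illustrative sketch only: geometries as node sets, valuation as union.
    PROBE = {"person"}                             # unvalued probe: bare root node
    GOAL = {"person", "participant", "speaker"}    # fully specified 1st-person goal

    def value(probe, goal):
        """Valuation as UNION over geometries: a symmetric operation."""
        return probe | goal

    result = value(PROBE, GOAL)
    # The operation is symmetric, yet its effect looks asymmetric here:
    # the goal's geometry is unchanged while the probe's is enriched.
    print(result == GOAL)    # True: the goal gains nothing
    print(PROBE < result)    # True: the probe's geometry grew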

The EPP is independent of Case: On illicit unaccusative incorporation

Languages with noun incorporation may nonetheless forbid it with the single argument of an unaccusative. For this reason, says Ted Levin, the work of the EPP cannot instead be done by a requirement for Case.

Linguistics

Non-ARHU Contributor(s): Theodore Levin
Publisher: MITWPL

Cross-linguistic scope ambiguity: When two systems meet

Scope ambiguities permitted by most speakers of English are not permitted by those who are also heritage speakers of Mandarin, suggest Maria Polinsky and her collaborators.

Linguistics

Contributor(s): Maria Polinsky
Non-ARHU Contributor(s): Gregory Scontras, C.-Y. Edwin Tsai, Kenneth Mai
Accurately recognizing and resolving ambiguity is a hallmark of linguistic ability. English is a language with scope ambiguities in doubly-quantified sentences like A shark ate every pirate; this sentence can describe either a scenario with a single shark eating all of the pirates, or a scenario with many sharks, a potentially different one eating each pirate. In Mandarin Chinese, the corresponding sentence is unambiguous, as it can only describe the single-shark scenario. We present experimental evidence to this effect, comparing native speakers of English with native speakers of Mandarin in their interpretations of doubly-quantified sentences. Having demonstrated the difference between these two languages in the availability of inverse scope interpretations, we then probe the robustness of the grammar of scope by extending our experiments to English-dominant adult heritage speakers of Mandarin. Like native speakers of Mandarin, heritage Mandarin speakers lack inverse scope in Mandarin. Crucially, these speakers also lack inverse scope in English, their dominant language in adulthood. We interpret these results as evidence for the pressure to simplify the grammar of scope, decreasing ambiguity when possible. In other words, when two systems meet, as in the case of heritage speakers, the simpler system prevails.

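The ambiguity at stake is a difference in quantifier scope. As a sketch in standard first-order notation (a textbook rendering, not drawn from the paper):

    % Surface scope (exists > forall): one shark ate all the pirates
    \exists x\, [\mathit{shark}(x) \land \forall y\, [\mathit{pirate}(y) \to \mathit{ate}(x, y)]]

    % Inverse scope (forall > exists): each pirate was eaten by some shark
    \forall y\, [\mathit{pirate}(y) \to \exists x\, [\mathit{shark}(x) \land \mathit{ate}(x, y)]]

Mandarin, and the English of heritage Mandarin speakers, makes available only the surface-scope reading.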

Learning an input filter for argument structure acquisition

How do children learn a verb’s argument structure when their input contains non-basic clauses that obscure verb transitivity? Laurel Perkins shows that it might be enough for them to make a good guess about how likely they are to be wrong.

Linguistics

Non-ARHU Contributor(s): Laurel Perkins
How do children learn a verb’s argument structure when their input contains non-basic clauses that obscure verb transitivity? Here we present a new model that infers verb transitivity by learning to filter out non-basic clauses that were likely parsed in error. In simulations with child-directed speech, we show that this model accurately categorizes the majority of 50 frequent transitive, intransitive and alternating verbs, and jointly learns appropriate parameters for filtering parsing errors. Our model is thus able to filter out problematic data for verb learning without knowing in advance which data need to be filtered.
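As a back-of-the-envelope illustration of the filtering idea, and not Perkins' actual model: score a verb's frame counts under two hypotheses, letting an error-rate term absorb apparent direct objects that come from misparsed non-basic clauses. All rates below are made-up parameters; the real model learns its filtering parameters jointly with the verb categorization.

    # Illustrative sketch only: Bayesian comparison of "transitive" vs.
    # "intransitive plus parsing error" for a verb's direct-object counts.
    from math import comb

    def transitive_likelihood(n_obj, n_total, p_obj=0.9):
        # If truly transitive, most clauses surface with an object.
        return comb(n_total, n_obj) * p_obj**n_obj * (1 - p_obj)**(n_total - n_obj)

    def intransitive_likelihood(n_obj, n_total, error_rate):
        # If intransitive, apparent objects arise only from parsing errors.
        return comb(n_total, n_obj) * error_rate**n_obj * (1 - error_rate)**(n_total - n_obj)

    def posterior_transitive(n_obj, n_total, error_rate=0.15, prior=0.5):
        """P(transitive | counts), with the error-rate term doing the filtering."""
        t = transitive_likelihood(n_obj, n_total) * prior
        i = intransitive_likelihood(n_obj, n_total, error_rate) * (1 - prior)
        return t / (t + i)

    # A verb heard with a direct object in 12 of 100 clauses comes out
    # intransitive: a ~15% error rate explains those tokens far better
    # than genuine transitivity would.
    print(round(posterior_transitive(12, 100), 3))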