Research

Research at our top-ranked department spans syntax, semantics, phonology, language acquisition, computational linguistics, psycholinguistics and neurolinguistics. 

Connections between our core competencies are strong, with theoretical, experimental and computational work typically pursued in tandem.

A network of collaboration at all levels sustains a research climate that is both vigorous and friendly. Here, new ideas develop in conversation, stimulated by the steady activity of our labs and research groups, frequent student meetings with faculty, regular talks by local and invited scholars, and collaborations with the broader University of Maryland language science community, the largest and most integrated language science research community in North America.

Rebels without a clause: Processing reflexives in fronted wh-predicates

In two eye-tracking experiments, Akira Omaki and Brian Dillon find that readers initially interpret a cataphoric reflexive anaphorically, and tend to associate the reflexive with a recently preceding antecedent.

Linguistics

Non-ARHU Contributor(s): Akira Omaki (*10), Anthony Yacovone, Zoe Ovans (HESP), Brian Dillon (*11)

English reflexives like herself tend to associate with a structurally prominent local antecedent in online processing. However, past work has primarily investigated reflexives in canonical direct object positions. The present study investigates cataphoric reflexives in fronted wh-predicates (e.g., The mechanic that James hired predicted how annoyed with himself the insurance agent would be), where the reflexive is encountered in advance of its grammatical antecedent. We ask two questions. First, will readers engage an anaphoric (backwards-looking) or a cataphoric (forwards-looking) search for an antecedent? Second, how similar is this process to the retrieval process for direct object reflexives? In two eye-tracking experiments, we found that readers initially interpret a cataphoric reflexive anaphorically and tend to associate the reflexive with the more recently encountered antecedent. We propose that structural guidance for reflexive resolution occurs only when the necessary configurational syntactic information is available at the point where the reflexive is encountered.

Distinctions between primary and secondary scalar implicatures

New evidence that only some scalar inferences have a Gricean explanation, while others are conventional.

Linguistics

Contributor(s): Anouk Dieuleveut
Non-ARHU Contributor(s): Anouk Dieuleveut, Benjamin Spector, Emmanuel Chemla
An utterance of Some of the students are home usually triggers the inference that it is not the case that the speaker believes that all students are home (a Primary Scalar Implicature). It may also license a stronger inference: that the speaker believes that not all students are home (a Secondary Scalar Implicature). Using an experimental paradigm that allows us to distinguish among these three readings (the literal reading, the primary SI, and the secondary SI), we show that the secondary SI can be accessed even in contexts where the speaker is not presented as well-informed. This result goes against classical neo-Gricean pragmatic approaches to Scalar Implicature, but is compatible with both the ‘grammatical’ approach to Scalar Implicatures and more recent game-theoretic pragmatic models in which speakers and listeners engage in sophisticated higher-order reasoning about each other. Second, we use this paradigm to compare standard scalar items such as some with expressions whose interpretation has been argued, more controversially, to involve SIs: almost, numerals, and plural morphology. For some and almost, we find that speakers do access three distinct readings; for numerals and plural morphology, only the literal reading and the secondary implicature could be detected, with no primary implicature, suggesting that the pragmatic and semantic mechanisms at play differ across the two types of items.
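
As a rough logical sketch of the three readings (the notation is ours, not the authors'; B_S abbreviates "the speaker believes that"):

\begin{align*}
\text{Literal reading:} \quad & B_S\,(\exists x\,[\text{student}(x) \land \text{home}(x)])\\
\text{Primary SI:} \quad & \lnot B_S\,(\forall x\,[\text{student}(x) \rightarrow \text{home}(x)])\\
\text{Secondary SI:} \quad & B_S\,(\lnot\forall x\,[\text{student}(x) \rightarrow \text{home}(x)])
\end{align*}

The last two differ only in the relative scope of negation and the belief operator, which is precisely the distinction the experimental paradigm is designed to detect.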

Antecedent access mechanisms in pronoun processing: Evidence from the N400

Lexical decisions to a word after a pronoun are facilitated when it is semantically related to the pronoun’s antecedent. These priming effects may depend not on automatic spreading activation, but on the extent to which the relevant word is predicted.

Linguistics

Contributor(s): Ellen Lau
Non-ARHU Contributor(s): Sol Lago (*14), Anna Namyst, Lena Jäger

Previous cross-modal priming studies showed that lexical decisions to words after a pronoun were facilitated when these words were semantically related to the pronoun’s antecedent. These studies suggested that semantic priming effectively measured antecedent retrieval during coreference. We examined whether these effects extended to implicit reading comprehension using the N400 response. The results of three experiments did not yield strong evidence of semantic facilitation due to coreference. Further, the comparison with two additional experiments showed that N400 facilitation effects were reduced in sentences (vs. word pair paradigms) and were modulated by the case morphology of the prime word. We propose that priming effects in cross-modal experiments may have resulted from task-related strategies. More generally, the impact of sentence context and morphological information on priming effects suggests that they may depend on the extent to which the upcoming input is predicted, rather than automatic spreading activation between semantically related words.

Learning, memory and syntactic bootstrapping: A meditation

Do children learning words rely on memories of where they have heard a word before? Jeff Lidz argues that memory for syntactic context plays a larger role than memory for referential context.

Linguistics

Contributor(s): Jeffrey Lidz
Lila Gleitman’s body of work on word learning raises an apparent paradox. Whereas work on syntactic bootstrapping depends on learners retaining information about the set of distributional contexts that a word occurs in, work on identifying a word’s referent suggests that learners do not retain information about the set of extralinguistic contexts that a word occurs in. I argue that this asymmetry derives from the architecture of the language faculty. Learners expect words with similar meanings to have similar distributions, and so learning depends on a memory for syntactic environments. The referential context in which a word is used is less constrained and hence contributes less to the memories that drive word learning.

Same words, different structures: An fMRI investigation of argument relations and the angular gyrus

fMRI research has implicated the angular gyrus of the left hemisphere in the computation of event concepts. Might its role be more specifically the computation of argument structure, a specifically linguistic relation?

Linguistics

Non-ARHU Contributor(s): William Matchin
In fMRI, increased activation for combinatorial syntactic and semantic processing is typically observed in a set of left hemisphere brain areas: the angular gyrus (AG), the anterior temporal lobe (ATL), the posterior superior temporal sulcus (pSTS), and the inferior frontal gyrus (IFG). Recent work has suggested that semantic combination is supported by the ATL and the AG, with a division of labor in which the AG is involved in event concepts and the ATL is involved in encoding conceptual features of entities and/or more general forms of semantic combination. The current fMRI study was designed to refine hypotheses about the processes the angular gyrus supports. In particular, we ask whether the AG supports the computation of argument structure (a linguistic notion that depends on a verb taking other phrases as arguments) or the computation of event concepts more broadly. To distinguish these possibilities we used a novel, lexically matched contrast: noun phrases (NPs) (the frightened boy) and verb phrases (VPs) (frightened the boy), where the VPs contained argument structure, denoting an event and assigning a thematic role to an argument, and the NPs did not, denoting only a semantically enriched entity. Results showed that while many regions exhibited increased activity for NPs and VPs relative to unstructured word lists (AG, ATL, pSTS, anterior IFG), replicating evidence of their involvement in combinatorial processing, neither the AG nor the ATL showed differences in activation between the VP and NP conditions. These results suggest that increased AG activity does not reflect the computation of argument structure per se, but they are compatible with a view in which the AG represents event information denoted by words such as frightened independently of their grammatical context. By contrast, the pSTS and posterior IFG did show increased activation for VPs relative to NPs. We suggest that these effects may reflect differences in the syntactic processing and working memory engaged by the different structural relations.

Prosody and Function Words Cue the Acquisition of Word Meanings in 18-Month-Old Infants

18-month-old infants use prosody and function words to recover the syntactic structure of a sentence, which in turn constrains the possible meanings of novel words the sentence contains.

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s): Angela Xiaoxue He (*15), Alex de Carvalho, Anne Christophe

Language acquisition presents a formidable task for infants, for whom word learning is a crucial yet challenging step. Syntax (the rules for combining words into sentences) has been robustly shown to be a cue to word meaning. But how can infants access syntactic information when they are still acquiring the meanings of words? We investigated the contribution of two cues that may help infants break into the syntax and give a boost to their lexical acquisition: phrasal prosody (speech melody) and function words, both of which are accessible early in life and correlate with syntactic structure in the world’s languages. We show that 18-month-old infants use prosody and function words to recover sentences’ syntactic structure, which in turn constrains the possible meanings of novel words: Participants (N = 48 in each of two experiments) interpreted a novel word as referring to either an object or an action, given its position within the prosodic-syntactic structure of sentences.

What the PCC tells us about “abstract” agreement, head movement, and locality

Agreement in Person, Number, or Noun Class features is always overtly realized in some part of the paradigm; it is never fully "abstract".

Linguistics

Contributor(s): Omer Preminger

Based on the cross- and intra-linguistic distribution of Person Case Constraint (PCC) effects, this paper shows that there can be no agreement in ϕ-features (PERSON, NUMBER, GENDER/NOUN-CLASS) which systematically lacks a morpho-phonological footprint. That is, there is no such thing as “abstract” ϕ-agreement, null across the entire paradigm. Applying the same diagnostic to instances of clitic doubling, we see that these do involve syntactic agreement. This cannot be because clitic doubling is agreement; it behaves like movement (and unlike agreement) in a variety of respects. Nor can this be because clitic doubling, qua movement, is contingent on prior agreement—since the claim that all movement depends on prior agreement is demonstrably false. Clitic doubling requires prior agreement because it is an instance of non-local head movement, and movement of X0 to Y0 always requires a prior syntactic relationship between Y0 and XP. In local head movement (the kind that is already permitted under the Head Movement Constraint), this requirement is trivially satisfied by (c-)selection. But in non-local cases, agreement must fill this role.

Ellipsis in Transformational Grammar

Ellipsis is deletion.

Linguistics

Contributor(s): Howard Lasnik
Non-ARHU Contributor(s): Kenshi Funakoshi (*14)

Publisher: Oxford University Press

This chapter examines three themes concerning ellipsis that have been extensively discussed in transformational generative grammar: structure, recoverability, and licensing. It reviews arguments in favor of the analysis according to which the ellipsis site is syntactically fully represented, and compares the two variants of that analysis, the deletion analysis and the LF-copying analysis, concluding that the deletion analysis is superior. A discussion of recoverability follows, which concludes that for elided material to be recoverable a semantic identity condition must be satisfied, but that this is not sufficient: syntactic or formal identity must also be taken into account. Finally, the chapter considers licensing, reviewing proposals in the literature about which properties of licensing heads, and which local relation between the ellipsis site and the licensing head, are relevant to ellipsis licensing.

Control complements in Mandarin Chinese: Implications for restructuring and the Chinese finiteness debate

Mandarin data suggest that restructuring with a control verb is possible even when the verb's complement is a full-sized clause.

Linguistics

Non-ARHU Contributor(s): Nick Huang (*19)

Many proposals on restructuring suggest that restructuring phenomena are only observed when a control predicate takes as a complement a functional projection smaller than a clause. In this paper, I present novel Mandarin data against recent proposals that restructuring control predicates cannot take clausal complements, and against the related generalization that clausal complements always block restructuring phenomena. An alternative account of the Mandarin data is presented. The data also bear on the question of whether a finiteness distinction exists in Chinese. In particular, they provide clearer evidence that control predicates can take clausal complements that differ syntactically from those of non-control attitude predicates. This difference parallels the cross-linguistic correlation between control predicates and non-finite clausal complements and lends new support to the claim that Chinese makes a finiteness distinction.

The importance of input representations

Learning from data is not incompatible with approaches that attribute rich initial linguistic knowledge to the learner. On the contrary, such approaches must still account for how knowledge guides learners in using their data to infer a grammar.

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s): Laurel Perkins (*19)

Language learners use the data in their environment in order to infer the grammatical system that produced that data. Yang (2018) makes the important point that this process requires integrating learners’ experiences with their current linguistic knowledge. A complete theory of language acquisition must explain how learners leverage their developing knowledge in order to draw further inferences on the basis of new data. As Yang and others have argued, the fact that input plays a role in learning is orthogonal to the question of whether language acquisition is primarily knowledge-driven or data-driven (J. A. Fodor, 1966; Lidz & Gagliardi, 2015; Lightfoot, 1991; Wexler & Culicover, 1980). Learning from data is not incompatible with approaches that attribute rich initial linguistic knowledge to the learner. On the contrary, such approaches must still account for how knowledge guides learners in using their data to infer a grammar.