
Research

Research at our top-ranked department spans syntax, semantics, phonology, language acquisition, computational linguistics, psycholinguistics and neurolinguistics. 

Connections between our core competencies are strong, with theoretical, experimental and computational work typically pursued in tandem.

A network of collaboration at all levels sustains a research climate that is both vigorous and friendly. Here new ideas develop in conversation, stimulated by the steady activity of our labs and research groups, frequent student meetings with faculty, regular talks by local and invited scholars and collaborations with the broader University of Maryland language science community, the largest and most integrated language science research community in North America.


Automatic semantic facilitation in anterior temporal cortex revealed through multimodal neuroimaging

Bottom-up effects of context on semantic memory, plumbed by a combination of electroencephalography (EEG), magnetoencephalography (MEG), and functional magnetic resonance imaging (fMRI) measurements in the same individuals.

Linguistics

Contributor(s): Ellen Lau
Non-ARHU Contributor(s): Alexandre Gramfort, Matti Hamalainen, Gina Kuperberg
A core property of human semantic processing is the rapid, facilitatory influence of prior input on extracting the meaning of what comes next, even under conditions of minimal awareness. Previous work has shown a number of neurophysiological indices of this facilitation, but the mapping between time course and localization—critical for separating automatic semantic facilitation from other mechanisms—has thus far been unclear. In the current study, we used a multimodal imaging approach to isolate early, bottom-up effects of context on semantic memory, acquiring a combination of electroencephalography (EEG), magnetoencephalography (MEG), and functional magnetic resonance imaging (fMRI) measurements in the same individuals with a masked semantic priming paradigm. Across techniques, the results provide a strikingly convergent picture of early automatic semantic facilitation. Event-related potentials demonstrated early sensitivity to semantic association between 300 and 500 ms; MEG localized the differential neural response within this time window to the left anterior temporal cortex, and fMRI localized the effect more precisely to the left anterior superior temporal gyrus, a region previously implicated in semantic associative processing. However, fMRI diverged from early EEG/MEG measures in revealing semantic enhancement effects within frontal and parietal regions, perhaps reflecting downstream attempts to consciously access the semantic features of the masked prime. Together, these results provide strong evidence that automatic associative semantic facilitation is realized as reduced activity within the left anterior superior temporal cortex between 300 and 500 ms after a word is presented, and emphasize the importance of multimodal neuroimaging approaches in distinguishing the contributions of multiple regions to semantic processing.

Is she patting Katie? Constraints on pronominal reference in 30-month-olds

Preferential looking studies show that, already at 30 months, children's understanding of pronouns in "Katie patted herself" and "She patted Katie" is adult-like.

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s): Cynthia Lukyanenko
In this study we investigate young children’s knowledge of syntactic constraints on Noun Phrase reference by testing 30-month-olds’ interpretation of two types of transitive sentences. In a preferential looking task, we find that children prefer different interpretations for transitive sentences whose object NP is a name (e.g., She’s patting Katie) as compared with those whose object NP is a reflexive pronoun (e.g., She’s patting herself). They map the former onto an other-directed event (one girl patting another) and the latter onto a self-directed event (one girl patting her own head). These preferences are carried by high-vocabulary children in the sample and suggest that 30-month-olds have begun to distinguish among different types of transitive sentences. Children’s adult-like interpretations are consistent with adherence to Principles A and C of Binding Theory and suggest that further research using the preferential looking procedure to investigate young children’s knowledge of syntactic constraints may be fruitful.


The psycholinguistics of ellipsis

"I read this and so should you" - a review of psycholinguistic work on the grammatical representation of ellipsis.

Linguistics

Contributor(s): Colin Phillips
Non-ARHU Contributor(s): Dan Parker
This article reviews studies that have used experimental methods from psycholinguistics to address questions about the representation of sentences involving ellipsis. Accounts of the structure of ellipsis can be classified based on three choice points in a decision tree. First: does the identity constraint between antecedents and ellipsis sites apply to syntactic or semantic representations? Second: does the ellipsis site contain a phonologically null copy of the structure of the antecedent, or does it contain a pronoun or pointer that lacks internal structure? Third: if there is unpronounced structure at the ellipsis site, does that structure participate in all syntactic processes, or does it behave as if it is genuinely absent at some levels of syntactic representation? Experimental studies on ellipsis have begun to address the first two of these questions, but they are unlikely to provide insights on the third question, since the theoretical contrasts do not clearly map onto timing predictions. Some of the findings that are emerging in studies on ellipsis resemble findings from earlier studies on other syntactic dependencies involving wh-movement or anaphora. Care should be taken to avoid drawing conclusions from experiments about ellipsis that are known to be unwarranted in experiments about these other dependencies.

Epistemics and Attitudes

Epistemic modals are natural in the complements of some attitude verbs but not others. Valentine Hacquard and Pranav Anand describe the pattern.

Linguistics

Contributor(s): Valentine Hacquard
Non-ARHU Contributor(s): Pranav Anand
This paper investigates the distribution of epistemic modals in attitude contexts in three Romance languages, as well as their potential interaction with mood selection. We show that epistemics can appear in complements of attitudes of acceptance (Stalnaker 1984), but not desideratives or directives; in addition, emotive doxastics (hope, fear) and dubitatives (doubt) permit epistemic possibility modals, but not their necessity counterparts. We argue that the embedding differences across attitudes indicate that epistemics are sensitive to the type of attitude an attitude predicate reports. We show that this sensitivity can be derived by adopting two types of proposals from the literature on epistemic modality and on attitude verbs: First, we assume that epistemics do not target knowledge uniformly, but rather quantify over an information state determined by the content of the embedding attitude (Hacquard 2006, 2010, Yalcin 2007). In turn, we adopt a fundamental split in the semantics of attitude verbs between ‘representational’ and ‘non-representational’ attitudes (Bolinger 1968): representational attitudes quantify over an information state (e.g., a set of beliefs for believe), which, we argue, epistemic modals can be anaphoric to. Non-representational attitudes do not quantify over an information state; instead, they combine with their complement via a comparison with contextually-provided alternatives using a logic of preference (cf. Bolinger 1968, Stalnaker 1984, Farkas 1985, Heim 1992, Villalta 2000, 2008). Finally, we argue that emotive doxastics and dubitatives have a hybrid semantics, which combines a representational component (responsible for licensing epistemic possibility modals), and a preference component (responsible for disallowing epistemic necessity modals).


Part and parcel of eliding partitives

"Ten people walked in an many sat down." Michaël Gagnon argues that "many" here is underlyingly "many of them" and not "many people" as is more commonly assumed.

Linguistics

Non-ARHU Contributor(s): Michaël Gagnon
This paper argues that bare determiners, as in the sentence "Many sat down", should be analyzed as involving the elision of a partitive phrase, as opposed to a noun phrase, as is commonly assumed (Lobeck 1991, 1995; Bernstein 1993; Panagiotidis 2003; Alexiadou & Gengel 2011; Corver and van Koppen 2009, 2011). This analysis is supported by: (i) the anaphoric interpretation of the bare determiners in context; (ii) the syntax of bare determiners; and (iii) deep event anaphora. Further, the adoption of partitive ellipsis comes with the suggestion that partitive DPs do not involve null intermediary noun phrases (cf. Jackendoff 1977, Sauerland and Yatsushiro 2004, and Ionin, Matushansky & Ruys 2006), but rather that determiners can take partitive phrases as internal arguments (Matthewson 2001). The existence of such a phenomenon also militates in favor of a meaning isomorphy approach to the licensing of ellipsis (Merchant 2001), rather than structural isomorphy (Fiengo & May 1994).

Anaphors and the Missing Link

New arguments for a traditional semantics of anaphora, and against one based on ellipsis: not only for pronouns, but also for partitive ellipsis and the contrastive anaphor "one," with special attention to event anaphora.

Linguistics

Non-ARHU Contributor(s): Michaël Gagnon
Three types of nominal anaphors are investigated: (i) pronouns, (ii) partitive ellipsis and (iii) the contrastive anaphor 'one'. I argue that in each case, the representational basis for anaphora is the same, a semantic variable ranging over singular or plural entities, rather than syntactic as previous approaches have suggested. In the case of pronouns, I argue against syntactic D-type approaches (Elbourne 2005) and semantic D-type approaches (Cooper 1979). Instead, I present arguments in favor of the set variable representation assumed under Nouwen (2003)’s approach. Following this, I consider a number of cases usually taken to involve the elision of a noun phrase, and argue that instead they involve the deletion of a partitive phrase containing an anaphoric plural pronoun. Third, I turn to the contrastive anaphor ‘one’ and its null counterpart in French. Here again, I argue that the basis for anaphora is a semantic set variable, where this anaphor differs from pronouns in being of category N rather than D, and in having a pragmatic requirement for contrast. This analysis differs from previous ones which hold that this expression is a syntactic substitute of category N′, or the spell-out of the head of a number phrase followed by ellipsis of a noun phrase. Finally, I discuss the phenomenon of event anaphora. Given the phenomenon’s interaction with the anaphors discussed prior in this dissertation, I argue that it is better seen as a case of deferred reference to an event on the basis of anaphoric reference to a discourse segment, following Webber (1991). This contrasts with what I call metaphysical approaches, which hold that the anaphor directly resumes an event introduced to the context by a previous clause (Asher 1993; Moltmann 1997).

Statistical Knowledge and Learning in Phonology

A theory of how phonetics relates to phonology, evaluated by a Bayesian treatment of learning, with the result that phonology itself does not trade in "allophonic" processes.

Linguistics

Non-ARHU Contributor(s): Ewan Dunbar
This dissertation deals with the theory of the phonetic component of grammar in a formal probabilistic inference framework: (1) it has been recognized since the beginning of generative phonology that some language-specific phonetic implementation is actually context-dependent, and thus it can be said that there are gradient “phonetic processes” in grammar in addition to categorical “phonological processes.” However, no explicit theory has been developed to characterize these processes. Meanwhile, (2) it is understood that language acquisition and perception are both really informed guesswork: the result of both types of inference can be reasonably thought to be a less-than-perfect commitment, with multiple candidate grammars or parses considered and each associated with some degree of credence. Previous research has used probability theory to formalize these inferences in implemented computational models, especially in phonetics and phonology. In this role, computational models serve to demonstrate the existence of working learning/perception/parsing systems assuming a faithful implementation of one particular theory of human language, and are not intended to adjudicate whether that theory is correct. The current dissertation (1) develops a theory of the phonetic component of grammar and how it relates to the greater phonological system and (2) uses a formal Bayesian treatment of learning to evaluate this theory of the phonological architecture and for making predictions about how the resulting grammars will be organized. The coarse description of the consequence for linguistic theory is that the processes we think of as “allophonic” are actually language-specific, gradient phonetic processes, assigned to the phonetic component of grammar; strict allophones have no representation in the output of the categorical phonological grammar.
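The Bayesian evaluation described above can be illustrated with a toy sketch. This is my own illustration, not the dissertation's model: given one-dimensional phonetic measurements, Bayes' rule compares a categorical analysis (a two-category mixture, as in a phonological process) against a gradient analysis (one broad context-dependent distribution, as in a phonetic process). All means, variances, priors, and data values here are invented for the example.

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Density of a normal distribution with mean mu and s.d. sigma."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def likelihood_categorical(data, mu1=1.0, mu2=3.0, sigma=0.3):
    # Categorical hypothesis: a 50/50 mixture of two tight categories.
    return math.prod(0.5 * gaussian_pdf(x, mu1, sigma) +
                     0.5 * gaussian_pdf(x, mu2, sigma) for x in data)

def likelihood_gradient(data, mu=2.0, sigma=1.0):
    # Gradient hypothesis: one broad distribution over the same dimension.
    return math.prod(gaussian_pdf(x, mu, sigma) for x in data)

def posterior_categorical(data, prior=0.5):
    """Posterior probability of the categorical analysis, by Bayes' rule."""
    la, lb = likelihood_categorical(data), likelihood_gradient(data)
    return (prior * la) / (prior * la + (1 - prior) * lb)

# Bimodal measurements favor the categorical analysis;
# unimodal measurements favor the gradient one.
print(posterior_categorical([0.9, 1.1, 2.9, 3.1]))
print(posterior_categorical([1.8, 2.0, 2.2, 2.1]))
```

The learner's "less-than-perfect commitment" is exactly this posterior: a degree of credence in each candidate grammar rather than a hard choice.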

Similarity in L2 Phonology

Speakers of a second language may differ from native speakers in which sounds they treat as "similar." But how can we measure this perception of similarity, and determine what sorts of representations produce it?

Linguistics

Non-ARHU Contributor(s): Shannon Barrios
Adult second language (L2) learners often experience difficulty producing and perceiving non-native phonological contrasts. Even highly proficient bilinguals, who have been exposed to an L2 for long periods of time, struggle with difficult contrasts, such as /r/-/l/ for Japanese learners of English. To account for the relative ease or difficulty with which L2 learners perceive and acquire non-native contrasts, theories of (L2) speech perception often appeal to notions of similarity. But how is similarity best determined? In this dissertation I explored the predictions of two theoretical approaches to similarity comparison in the second language, and asked: [1] How should L2 sound similarity be measured? [2] What is the nature of the representations that guide sound similarity? [3] To what extent can the influence of the native language be overcome? In Chapter 2, I tested a ‘legos’ (featural) approach to sound similarity. Given a distinctive feature analysis of Spanish and English vowels, I investigated the hypothesis that feature availability in the L1 grammar constrains which target language segments will be accurately perceived and acquired by L2 learners (Brown [1998], Brown [2000]). Our results suggest that second language acquisition of phonology is not limited by the phonological features used by the native language grammar, nor is the presence/use of a particular phonological feature in the native language grammar sufficient to trigger redeployment. I take these findings to imply that feature availability is neither a necessary, nor a sufficient condition to predict learning outcomes. In Chapter 3, I extended a computational model proposed by Feldman et al. [2009] to nonnative speech perception, in order to investigate whether a sophisticated ‘rulers’ (spatial) approach to sound similarity can better explain existing interlingual identification and discrimination data from Spanish monolinguals and advanced L1 Spanish late-learners of English, respectively. The model assumes that acoustic distributions of sounds control listeners’ ability to discriminate a given contrast. I found that, while the model succeeded in emulating certain aspects of human behavior, the model at present is incomplete and would have to be extended in various ways to capture several aspects of nonnative and L2 speech perception. In Chapter 4 I explored whether the phonological relatedness among sounds in the listeners’ native language impacts the perceived similarity of those sounds in the target language. Listeners were expected to be more sensitive to the contrast between sound pairs which are allophones of different phonemes than to sound pairs which are allophones of the same phoneme in their native language. Moreover, I hypothesized that L2 learners would experience difficulty perceiving and acquiring target language contrasts between sound pairs which are allophones of the same phoneme in their native language. Our results suggest that phonological relatedness may influence perceived similarity on some tasks, but does not seem to cause long-lasting perceptual difficulty in advanced L2 learners. On the basis of those findings, I argue that existing models have not been adequately explicit about the nature of the representations and processes involved in similarity-based comparisons of L1 and L2 sounds. More generally, I describe what I see as a desirable target for an explanatorily adequate theory of cross-language influence in L2 phonology.
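The assumption that acoustic distributions control discrimination can be sketched in standard signal-detection terms. This is a hedged illustration, not the Feldman et al. model or the dissertation's extension of it: each sound category is modeled as a Gaussian on one acoustic dimension, sensitivity to the contrast is summarized as d-prime (mean separation in units of the shared standard deviation), and d-prime maps to expected accuracy in a two-alternative forced-choice task. The category means and variances below are invented.

```python
import math

def d_prime(mu1, mu2, sigma):
    """Separation of two Gaussian sound categories, in s.d. units."""
    return abs(mu1 - mu2) / sigma

def p_correct_2afc(dp):
    """Expected 2AFC discrimination accuracy: Phi(d'/sqrt(2)) = 0.5*(1 + erf(d'/2))."""
    return 0.5 * (1 + math.erf(dp / 2))

# Well-separated categories (an easy native contrast) support near-ceiling
# discrimination; heavily overlapping ones (a hard L2 contrast like /r/-/l/
# for Japanese learners) support near-chance discrimination.
native = p_correct_2afc(d_prime(0.0, 3.0, sigma=1.0))   # d' = 3.0
learner = p_correct_2afc(d_prime(0.0, 0.5, sigma=1.0))  # d' = 0.5
print(native, learner)
```

On this picture, perceived similarity falls directly out of distributional overlap: the more two categories' acoustic distributions overlap, the lower d-prime and the harder the contrast.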

Pragmatic enrichment in language processing and development

Even three-year-old children can make complex pragmatic inferences, and understand indirect requests or assertions. Difficulties come from inexperience in conversation, lack of world knowledge, and trouble with scalar quantifiers.

Linguistics

Non-ARHU Contributor(s): Shevaun Lewis
The goal of language comprehension for humans is not just to decode the semantic content of sentences, but rather to grasp what speakers intend to communicate. To infer speaker meaning, listeners must at minimum assess whether and how the literal meaning of an utterance addresses a question under discussion in the conversation. In cases of implicature, where the speaker intends to communicate more than just the literal meaning, listeners must access additional relevant information in order to understand the intended contribution of the utterance. I argue that the primary challenge for inferring speaker meaning is in identifying and accessing this relevant contextual information. In this dissertation, I integrate evidence from several different types of implicature to argue that both adults and children are able to execute complex pragmatic inferences relatively efficiently, but encounter some difficulty finding what is relevant in context. I argue that the variability observed in processing costs associated with adults’ computation of scalar implicatures can be better understood by examining how the critical contextual information is presented in the discourse context. I show that children’s oft-cited hyper-literal interpretation style is limited to scalar quantifiers. Even 3-year-olds are adept at understanding indirect requests and “parenthetical” readings of belief reports. Their ability to infer speaker meanings is limited only by their relative inexperience in conversation and lack of world knowledge.

The Syntax of Non-Syntactic Dependencies

"What and when did you eat?" "What did you cook and eat?" "You cooked and ate the chicken." These three constructions have eluded analysis.

Linguistics

Non-ARHU Contributor(s): Bradley Larson
In this dissertation I explore the nature of interpretive dependencies in human language. In particular I investigate the limits of syntactically mediated interpretive dependencies as well as non-syntactic ones. Broadly speaking I investigate the limits of grammatical dependencies and note that current theory cannot possibly handle certain dependencies. That certain dependencies evade grammatical explanation requires a rethinking of the representations of those dependencies. The results of this investigation concern the primacy and the purview of the syntax component of the grammar. In short, the purview of syntactic relations is limited to c-command and if a c-command relation holds between two related elements, a syntactic relation must hold between them, either directly or indirectly. When c-command does not hold between the related elements, a syntactic dependency is not possible and the dependency must hold at a subsequent level of representation. To show this, I explore interpretive dependencies that I argue only superficially resemble standard, syntactically-mediated relations (such as Wh-gap dependencies). I show that these dependencies are not amenable to analysis as syntactically-mediated relations. These include Coordinated-Wh Questions like those explored in Gracanin-Yuksek 2007, Right Node Raising constructions like those explored in Postal 1974, and Across-the-board constructions like those explored in Williams 1978. Each of these involves an interpretive dependency that I claim cannot be derived syntactically. The above constructions evade explanation via traditional syntactic tools as well as semantic and pragmatic means of analysis. If the above constructions involve dependencies that cannot be construed as syntactically-, semantically-, or pragmatically-mediated, it must be the case that these otherwise normal dependencies are captured via other means, whatever that may be.