Research at our top-ranked department spans syntax, semantics, phonology, language acquisition, computational linguistics, psycholinguistics and neurolinguistics.
Connections between our core competencies are strong, with theoretical, experimental and computational work typically pursued in tandem.
A network of collaboration at all levels sustains a research climate that is both vigorous and friendly. Here new ideas develop in conversation, stimulated by the steady activity of our labs and research groups, frequent student meetings with faculty, regular talks by local and invited scholars, and collaborations with the broader University of Maryland language science community, the largest and most integrated language science research community in North America.
"He drank more wine than I did and also danced more than I did." Alexis Wellwood gives a unified analysis for both adnominal and adverbal "more," with Valentine Hacquard and faculty visitor Roumyana Pancheva.
This squib investigates parallels between nominal and verbal comparatives. Building on key insights of Hackl (2000) and Bale & Barner (2009), we show that "more" behaves uniformly when it combines with nominal and verbal predicates: (i) it cannot combine with singular count NPs or perfective telic VPs; (ii) grammatical properties of the predicates determine the scale of comparison: plural-marked NPs and habitual VPs are compared on a scale of cardinality, whereas mass NPs and perfective (atelic) VPs are (often) compared along non-cardinal, though monotonic, scales. Taken together, our findings confirm and strengthen parallels that have independently been drawn between the nominal and verbal domains. In addition, our discussion and data, drawn from English, Spanish, and Bulgarian, suggest that the semantic contribution of "more" can be given a uniform analysis.
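To make the intended uniformity concrete, here is one schematic way to render a single degree-based entry for "more", in the spirit of Hackl (2000) and Bale & Barner (2009); the notation and the measure function μ are illustrative assumptions, not necessarily the authors' exact denotation:

\[
\textit{more}(P)(Q) \iff \max\{d : \exists x\,[Q(x) \wedge \mu(x) \geq d]\} \;>\; \max\{d : \exists x\,[P(x) \wedge \mu(x) \geq d]\}
\]

Here P is the standard of comparison and Q the matrix predicate, and the dimension of the monotonic measure function μ is fixed by the predicate's grammatical properties: cardinality for plural-marked NPs and habitual VPs, a non-cardinal dimension such as volume or duration for mass NPs and atelic VPs.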
Distributivity and modality: where "each" may go, "every" can't follow
A new syntactic account of scopal restrictions on universal quantifiers in sentences with an epistemic modal.
Von Fintel and Iatridou (2003) observed a striking pattern of scopal non-interaction between phrases headed by strong quantifiers like every and epistemically interpreted modal auxiliaries. Tancredi (2007) and Huitink (2008) observed that von Fintel and Iatridou's proposed constraint, the Epistemic Containment Principle (ECP), does not apply uniformly: it does not apply to phrases headed by each. We consider the ECP effect in light of the differential behavior of each and every in the environment of wh-, negative, and generic operators as described by Beghelli and Stowell (1997). Assuming that epistemic and root modals merge at two different syntactic heights (e.g. Cinque 1999) and that modals may act as unselective binders (Heim 1982), we extend Beghelli and Stowell's topological approach to quantifier scope interactions in order to formulate a novel syntactic account of the ECP.
Poverty of the Stimulus Revisited
Countering recent critiques, Paul Pietroski and collaborators defend the idea that some invariances in human languages reflect an innate human endowment, as opposed to common experience.
Contributor(s): Paul Pietroski
Non-ARHU Contributor(s): Robert Berwick, Beracah Yankama, Noam Chomsky
A central goal of modern generative grammar has been to discover invariant properties of human languages that reflect 'the innate schematism of mind that is applied to the data of experience' and that 'might reasonably be attributed to the organism itself as its contribution to the task of the acquisition of knowledge'. Candidates for such invariances include the structure dependence of grammatical rules, and in particular, certain constraints on question formation. Various 'poverty of stimulus' (POS) arguments suggest that these invariances reflect an innate human endowment, as opposed to common experience: Such experience warrants selection of the grammars acquired only if humans assume, a priori, that selectable grammars respect substantive constraints. Recently, several researchers have tried to rebut these POS arguments. In response, we illustrate why POS arguments remain an important source of support for appeal to a priori structure-dependent constraints on the grammars that humans naturally acquire.
Selective learning in the acquisition of Kannada ditransitives
Even young children have a highly abstract representation of ditransitive syntax.
In this paper we bring evidence from language acquisition to bear on the debate over the relative abstractness of children’s grammatical knowledge. We first identify one aspect of syntactic representation that exhibits a range of syntactic, morphological and semantic consequences both within and across languages, namely the hierarchical structure of ditransitive verb phrases. While the semantic consequences of this structure are parallel in English, Kannada, and Spanish, the word order and morphological reflexes of this structure diverge. Next we demonstrate that children learning Kannada have command of the relation between morphological form and semantic interpretation in ditransitives with respect to quantifier-variable binding. Finally, we offer a proposal on how a selective learning mechanism might succeed in identifying the appropriate structures in this domain despite the variability in surface expression.
Structured Access in Sentence Comprehension
Structural cues are favored over lexical features in memory access for the resolution of agreement and reflexive anaphora: Brian Dillon makes the point with several experiments on English and Chinese, and a parsing strategy implemented in ACT-R.
Non-ARHU Contributor(s): Brian W. Dillon
This thesis is concerned with the nature of memory access during the construction of long-distance dependencies in online sentence comprehension. In recent years, an intense focus on the computational challenges posed by long-distance dependencies has proven to be illuminating with respect to the characteristics of the architecture of the human sentence processor, suggesting a tight link between general memory access procedures and sentence processing routines (Lewis & Vasishth 2005; Lewis, Vasishth, & Van Dyke 2006; Wagers, Lau & Phillips 2009). The present thesis builds upon this line of research, and its primary aim is to motivate and defend the hypothesis that the parser accesses linguistic memory in an essentially structured fashion for certain long-distance dependencies. In order to make this case, I focus on the processing of reflexive and agreement dependencies, and ask whether or not non-structural information such as morphological features is used to gate memory access during syntactic comprehension. Evidence from eight experiments in a range of methodologies in English and Chinese is brought to bear on this question, providing arguments from interference effects and time-course effects that primarily syntactic information is used to access linguistic memory in the construction of certain long-distance dependencies. The experimental evidence for structured access is compatible with a variety of architectural assumptions about the parser, and I present one implementation of this idea in a parser based on the ACT-R memory architecture. In the context of such a content-addressable model of memory, the claim of structured access is equivalent to the claim that only syntactic cues are used to query memory. I argue that structured access reflects an optimal parsing strategy in the context of a noisy, interference-prone cognitive architecture: abstract structural cues are favored over lexical feature cues for certain structural dependencies in order to minimize memory interference in online processing.
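To make the notion of structured access concrete, here is a minimal sketch of cue-based (content-addressable) retrieval in the style of ACT-R parsing models; the feature names, weights, and example sentence are illustrative assumptions, not Dillon's actual implementation:

```python
# Minimal sketch of cue-based memory retrieval: each candidate chunk is scored
# by how well it matches the retrieval cues in the probe. "Structured access"
# corresponds to a probe built from syntactic cues only.

def retrieval_scores(chunks, cues, match_bonus=1.0, mismatch_penalty=1.5):
    """Score each memory chunk against the retrieval cues."""
    return {name: sum(match_bonus if feats.get(cue) == value else -mismatch_penalty
                      for cue, value in cues.items())
            for name, feats in chunks.items()}

# Candidate antecedents for the reflexive in a sentence like
# "The surgeon who treated the patients hurt themselves."
chunks = {
    "surgeon":  {"subject": True,  "local": True,  "plural": False},
    "patients": {"subject": False, "local": False, "plural": True},
}

structural_probe = {"subject": True, "local": True}             # structured access
mixed_probe = {"subject": True, "local": True, "plural": True}  # adds a lexical cue

print(retrieval_scores(chunks, structural_probe))  # only the true antecedent matches
print(retrieval_scores(chunks, mixed_probe))       # the distractor now partially matches
```

With only structural cues in the probe, the feature-matching distractor cannot compete; adding a morphological cue like number lets it partially match, producing exactly the kind of interference that the structured-access hypothesis predicts should be absent for reflexive dependencies.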
Some arguments and non-arguments for reductionist accounts of syntactic phenomena
Can psycholinguistics tell us whether a syntactic pattern is explained by grammar or by processing? Colin Phillips explores the question in relation to island constraints, agreement attraction, constraints on anaphora, and comparatives.
Many syntactic phenomena have received competing accounts, either in terms of formal grammatical mechanisms, or in terms of independently motivated properties of language processing mechanisms (“reductionist” accounts). A variety of different types of argument have been put forward in efforts to distinguish these competing accounts. This article critically examines a number of arguments that have been offered as evidence in favour of formal or reductionist analyses, and concludes that some types of argument are more decisive than others. It argues that evidence from graded acceptability effects and from isomorphism between acceptability judgements and on-line comprehension profiles are less decisive. In contrast, clearer conclusions can be drawn from cases of overgeneration, where there is a discrepancy between acceptability judgements and the representations that are briefly constructed on-line, and from tests involving individual differences in cognitive capacity. Based on these arguments, the article concludes that a formal grammatical account is better supported in some domains, and that a reductionist account fares better in other domains. Phenomena discussed include island constraints, agreement attraction, constraints on anaphora, and comparatives.
Syntactic and Semantic Predictors of Tense in Hindi: An ERP Investigation
Brian Dillon and Colin Phillips find different ERP signals for a grammatical error, depending on whether its detection was based on semantic versus morphosyntactic information.
Contributor(s): Colin Phillips
Non-ARHU Contributor(s): Brian Dillon, Andrew Nevins, Alison C. Austin
Although there is broad agreement that many ERP components reflect error signals generated during an unexpected linguistic event, there are at least two distinct aspects of the process that the ERP signals may reflect. The first is the content of an error, which is the local discrepancy between an observed form and any expectations about upcoming forms, without any reference to why those expectations were held. The second aspect is the cause of an error, which is a context-aware analysis of why the error arose. The current study examines the processes involved in prediction of past tense marking on verbal morphology in Hindi. This is a case where an error with the same local characteristics can arise from very different cues, one syntactic in origin (ergative case marking), and the other semantic in origin (a past tense adverbial). Results suggest that the parser does indeed track the cause in addition to the content of errors. Despite the fact that the critical manipulation of verb tense marking was identical across cue types, the nature of the cue led to distinct patterns of ERPs in response to anomalous verbal morphology. When verb tense was predicted based upon semantic cues, an incorrect future tense form elicited an early negativity in the 200-400 ms interval with a posterior distribution. In contrast, when verb tense was predicted based upon morphosyntactic cues, an incorrect future tense form elicited a right-lateralized anterior negativity (RAN) during the 300-500 ms interval, as well as a P600 response with a broad distribution.
Our understanding of human learning is increasingly informed by findings from multiple fields—psychology, neuroscience, computer science, linguistics, and education. A convergence of insights is forging a “new science of learning” within cognitive science, which promises to play a key role in developing intelligent machines (1, 2). A long-standing fundamental issue in theories of human learning is whether there are specialized learning mechanisms for certain tasks or spheres of activity (domains). For example, is learning how to open a door (turning the handle before pulling) the same kind of “learning” as putting up and taking down scaffolding (where disassembly must be done in the reverse order of assembly)? Surprisingly, this issue plays out within the domain of human language.
Self-monitoring and feedback in disordered speech production
What is the neural basis for the integration of auditory and somatosensory feedback in speech production? Joshua Riley-Graham pursues the question through neuroimaging studies of Foreign Accent Syndrome and Persistent Developmental Stuttering.
Non-ARHU Contributor(s): Joshua Riley-Graham
The precise contribution and mechanism of sensory feedback (particularly auditory feedback) in successful speech production is unclear. Some models of speech production, such as DIVA, assert that speech production is based on attempting to produce auditory and/or somatosensory targets (e.g., Guenther et al. 2006), and thus assign a central role to sensory feedback for successful speech motor control. These models make explicit predictions about the neural basis of speech production and the integration of auditory and somatosensory feedback, and predict basal ganglia involvement in speech motor control. In order to test the implications of models depending on the integration of sensory feedback for speech, we present neuroimaging studies of two disorders of speech production in the absence of apraxia or dysarthria: one acquired (Foreign Accent Syndrome; FAS) and one developmental (Persistent Developmental Stuttering; PDS). Our results broadly confirm the predictions of the extended DIVA model (Bohland et al. 2010), and emphasize the importance of the basal ganglia, especially the basal ganglia-thalamic-cortical (BGTC) loops. I argue that FAS should be thought of as a disorder of excessive speech sensory feedback, and that fluency in PDS depends on successful integration of speech sensory feedback with feedforward control commands.
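As a rough illustration of the feedforward/feedback division these models assume, here is a toy control loop in which a learned feedforward command is corrected by auditory error; the one-dimensional "plant", gain values, and function names are illustrative assumptions, not the DIVA model's actual equations:

```python
# Toy feedforward-plus-feedback controller for a single acoustic dimension
# (say, a formant target). A constant perturbation stands in for a mismatch
# between the learned feedforward map and the actual vocal tract.

def vocal_tract(command, perturbation=0.3):
    """Toy mapping from motor command to auditory output."""
    return command + perturbation

def produce(target, steps=20, feedback_gain=0.4):
    command = target  # feedforward command from the learned sound-to-motor map
    output = vocal_tract(command)
    for _ in range(steps):
        error = target - output            # auditory feedback error
        command += feedback_gain * error   # feedback-based correction
        output = vocal_tract(command)
    return output

print(produce(target=1.0))  # converges near the target despite the perturbation
```

On this kind of picture, over-weighting the feedback term, or failing to integrate it with the feedforward command, degrades fluency, which is the intuition behind treating FAS as a disorder of excessive sensory feedback.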
Windows into Sensory Integration and Rates in Language Processing: Insights from Signed and Spoken Languages
So-One Hwang compares the time course of visual and auditory language perception in signed versus spoken languages, finding evidence for time pressures that apply in both modalities.
Non-ARHU Contributor(s): So-One Hwang
This dissertation explores the hypothesis that language processing proceeds in "windows" that correspond to representational units, where sensory signals are integrated according to time-scales that correspond to the rate of the input. Underlying the seemingly effortless process of language comprehension is the perceiver's knowledge about the rate at which linguistic form and meaning unfold in time, and the ability to adapt to variations in the input. To investigate universal mechanisms, a comparison of signed and spoken languages is necessary.
The vast body of work in this area has focused on speech perception, where the goal is to determine how linguistic information is recovered from acoustic signals. Testing some of these theories in the visual processing of American Sign Language (ASL) provides a unique opportunity to better understand how sign languages are processed and which aspects of speech perception models are in fact about language perception across modalities.
The first part of the dissertation presents three psychophysical experiments investigating temporal integration windows in sign language perception by testing the intelligibility of locally time-reversed sentences. The findings demonstrate the contribution of modality to the time-scales of these windows, with signing integrated over longer durations (~250-300 ms) than speech (~50-60 ms), while also pointing to modality-independent mechanisms, where integration occurs over durations that correspond to the size of linguistic units. The second part of the dissertation focuses on production rates in sentences taken from natural conversations in English, Korean, and ASL. Data on word, sign, morpheme, and syllable rates suggest that while the rate of words and signs varies from language to language, the relationship between the rates of syllables and morphemes is relatively consistent among these typologically diverse languages. The results on rates in ASL also complement the findings of the perception experiments by confirming that the time-scales at which phonological units fluctuate in production match the temporal integration windows in perception.
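For readers unfamiliar with the manipulation, locally time-reversed speech is made by cutting the signal into fixed-duration windows and playing each window backwards; intelligibility degrades once the window exceeds the temporal integration scale. Here is a minimal sketch, with window sizes echoing those reported above (sample rate and signal are stand-in values):

```python
import numpy as np

def locally_time_reverse(signal, sample_rate, window_ms):
    """Reverse the samples inside each consecutive window of window_ms."""
    win = max(1, int(sample_rate * window_ms / 1000))
    out = signal.copy()
    for start in range(0, len(signal), win):
        out[start:start + win] = signal[start:start + win][::-1]
    return out

rate = 16_000                         # 16 kHz audio
audio = np.random.randn(rate)         # stand-in for one second of recorded speech
mild = locally_time_reverse(audio, rate, window_ms=50)     # near the speech window
severe = locally_time_reverse(audio, rate, window_ms=300)  # near the sign window
```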
These results are consistent with the hypothesis that there are modality-independent time pressures on language processing, and the discussion synthesizes converging findings from other domains of research and proposes ideas for future investigations.