Ellen Lau
Associate Professor, Linguistics
Member, Maryland Language Science Center
Co-Director, KIT-Maryland MEG Lab
ellenlau@umd.edu
3416 E Marie Mount Hall
Research Expertise
Neurolinguistics
Psycholinguistics
Publications
Moving away from lexicalism in psycho- and neuro-linguistics
In standard models of language production or comprehension, the elements which are retrieved from memory and combined into a syntactic structure are “lemmas” or “lexical items.” Such models implicitly take a “lexicalist” approach, which assumes that lexical items store meaning, syntax, and form together, that syntactic and lexical processes are distinct, and that syntactic structure does not extend below the word level. Across the last several decades, linguistic research examining a typologically diverse set of languages has provided strong evidence against this approach. These findings suggest that syntactic processes apply both above and below the “word” level, and that both meaning and form are partially determined by the syntactic context. This has significant implications for psychological and neurological models of language processing as well as for the way that we understand different types of aphasia and other language disorders. As a consequence of the lexicalist assumptions of these models, many kinds of sentences that speakers produce and comprehend—in a variety of languages, including English—are challenging for them to account for. Here we focus on language production as a case study. In order to move away from lexicalism in psycho- and neuro-linguistics, it is not enough to simply update the syntactic representations of words or phrases; the processing algorithms involved in language production are constrained by the lexicalist representations that they operate on, and thus also need to be reimagined. We provide an overview of the arguments against lexicalism, discuss how lexicalist assumptions are represented in models of language production, and examine the types of phenomena that they struggle to account for as a consequence. We also outline what a non-lexicalist alternative might look like, as a model that does not rely on a lemma representation, but instead represents that knowledge as separate mappings between (a) meaning and syntax and (b) syntax and form, with a single integrated stage for the retrieval and assembly of syntactic structure. By moving away from lexicalist assumptions, this kind of model provides better cross-linguistic coverage and aligns better with contemporary syntactic theory.
The Binding Problem 2.0: Beyond Perceptual Features
On the problem of binding to object indices, beyond perceptual features.
The “binding problem” has been a central question in vision science for some 30 years: When encoding multiple objects or maintaining them in working memory, how are we able to represent the correspondence between a specific feature and its corresponding object correctly? In this letter we argue that the boundaries of this research program in fact extend far beyond vision, and we call for coordinated pursuit across the broader cognitive science community of this central question for cognition, which we dub “Binding Problem 2.0”.
A subject relative clause preference in a split-ergative language: ERP evidence from Georgian
Is processing subject-relative clauses easier even in an ergative language?
A fascinating descriptive property of human language processing whose explanation is still debated is that subject-gap relative clauses are easier to process than object-gap relative clauses, across a broad range of languages with different properties. However, recent work suggests that this generalization does not hold in Basque, an ergative language, and has motivated an alternative generalization in which the preference is for gaps in morphologically unmarked positions—subjects in nominative-accusative languages, and objects and intransitive subjects in ergative-absolutive languages. Here we examined whether this generalization extends to another ergative-absolutive language, Georgian. ERP and self-paced reading results show a large anterior negativity and slower reading times when a relative clause is disambiguated to an object relative vs a subject relative. These data thus suggest that in at least some ergative-absolutive languages, the classic descriptive generalization—that object relative clauses are more costly than subject relative clauses—still holds.
Parallel processing in speech perception with local and global representations of linguistic context
MEG evidence for parallel representation of local and global context in speech processing.
Speech processing is highly incremental. It is widely accepted that human listeners continuously use the linguistic context to anticipate upcoming concepts, words, and phonemes. However, previous evidence supports two seemingly contradictory models of how a predictive context is integrated with the bottom-up sensory input: Classic psycholinguistic paradigms suggest a two-stage process, in which acoustic input initially leads to local, context-independent representations, which are then quickly integrated with contextual constraints. This contrasts with the view that the brain constructs a single coherent, unified interpretation of the input, which fully integrates available information across representational hierarchies, and thus uses contextual constraints to modulate even the earliest sensory representations. To distinguish these hypotheses, we tested magnetoencephalography responses to continuous narrative speech for signatures of local and unified predictive models. Results provide evidence that listeners employ both types of models in parallel. Two local context models uniquely predict some part of early neural responses, one based on sublexical phoneme sequences, and one based on the phonemes in the current word alone; at the same time, even early responses to phonemes also reflect a unified model that incorporates sentence level constraints to predict upcoming phonemes. Neural source localization places the anatomical origins of the different predictive models in non-identical parts of the superior temporal lobes bilaterally, with the right hemisphere showing a relative preference for more local models. These results suggest that speech processing recruits both local and unified predictive models in parallel, reconciling previous disparate findings. Parallel models might make the perceptual system more robust, facilitate processing of unexpected inputs, and serve a function in language acquisition.
Processing adjunct control: Evidence on the use of structural information and prediction in reference resolution
How does online comprehension of adjunct control ("before eating") compare to resolution of pronominal anaphora ("before he ate")?
The comprehension of anaphoric relations may be guided not only by discourse, but also by syntactic information. In the literature on online processing, however, the focus has been on audible pronouns and descriptions whose reference is resolved mainly on the basis of the former. This paper examines one relation that both lacks overt exponence and relies almost exclusively on syntax for its resolution: adjunct control, or the dependency between the null subject of a non-finite adjunct and its antecedent in sentences such as Mickey talked to Minnie before ___ eating. Using visual-world eyetracking, we compare the timecourse of interpreting this null subject and overt pronouns (Mickey talked to Minnie before he ate). We show that when control structures are highly frequent, listeners are just as quick to resolve reference in either case. When control structures are less frequent, reference resolution based on structural information still occurs upon hearing the non-finite verb, but more slowly, especially when unaided by structural and referential predictions. This may be due to increased difficulty in recognizing that a referential dependency is necessary. These results indicate that in at least some contexts, referential expressions whose resolution depends on very different sources of information can be resolved approximately equally rapidly, and that the speed of interpretation is largely independent of whether or not the dependency is cued by an overt referring expression.
Enough time to get results? An ERP investigation of prediction with complex events
How quickly can verb-argument relations be computed to impact predictions of a subsequent argument? This paper examines the question by comparing two kinds of compound verbs in Mandarin and measuring neural responses to the following direct object.
How quickly can verb-argument relations be computed to impact predictions of a subsequent argument? We take advantage of the substantial differences in verb-argument structure provided by Mandarin, whose compound verbs encode complex event relations, such as resultatives (kid bit-broke lip: 'the kid bit his lip such that it broke') and coordinates (store owner hit-scolded employee: 'the store owner hit and scolded an employee'). We tested sentences in which the object noun could be predicted on the basis of the preceding compound verb, and used N400 responses to the noun to index successful prediction. By varying the delay between verb and noun, we show that prediction is delayed in the resultative context (broken-BY-biting) relative to the coordinate one (hitting-AND-scolding). These results present a first step towards temporally dissociating the fine-grained subcomputations required to parse and interpret verb-argument relations.
Error-Driven Retrieval in Agreement Attraction Rarely Leads to Misinterpretation
"The bed by the lamps were undoubtedly quite bright." Does making this mistake in agreement, "were" instead of "was," make you less likely to notice the oddity of describing a bed as bright? This study shows that normally the answer is No.
Previous work on agreement computation in sentence comprehension motivates a model in which the parser predicts the verb’s number and engages in retrieval of the agreement controller only when it detects a mismatch between the prediction and the bottom-up input. It is the error-driven second stage of this process that is prone to similarity-based interference and can result in the illusory licensing of a subject–verb number agreement violation in the presence of a structurally irrelevant noun matching the number marking on the verb (‘The bed by the lamps were…’), giving rise to an effect known as ‘agreement attraction’. Here we ask to what extent the error-driven retrieval process underlying the illusory licensing alters the structural and thematic representation of the sentence. We use a novel dual-task paradigm that combines self-paced reading with a speeded forced choice task to investigate whether agreement attraction leads comprehenders to erroneously interpret the attractor as the thematic subject, which would indicate structural reanalysis. Participants read sentence fragments (‘The bed by the lamp/lamps was/were undoubtedly quite’) and completed the sentences by choosing between two adjectives (‘comfortable’/’bright’) which were either compatible with the subject’s head noun or with the attractor. We found the expected agreement attraction profile in the self-paced reading data, but the interpretive error occurred on only a small subset of attraction trials, suggesting that in agreement attraction the agreement-checking process rarely alters the sentence’s thematic relations. We propose that illusory licensing of an agreement violation often reflects a low-level rechecking process that is only concerned with number and does not have an impact on the structural representation of the sentence. Interestingly, this suggests that error-driven repair processes can result in a globally inconsistent final sentence representation with a persistent mismatch between the subject and the verb.
Antecedent access mechanisms in pronoun processing: Evidence from the N400
Lexical decisions to a word after a pronoun are facilitated when it is semantically related to the pronoun’s antecedent. These priming effects may depend not on automatic spreading activation, but on the extent to which the relevant word is predicted.
Previous cross-modal priming studies showed that lexical decisions to words after a pronoun were facilitated when these words were semantically related to the pronoun’s antecedent. These studies suggested that semantic priming effectively measured antecedent retrieval during coreference. We examined whether these effects extended to implicit reading comprehension using the N400 response. The results of three experiments did not yield strong evidence of semantic facilitation due to coreference. Further, the comparison with two additional experiments showed that N400 facilitation effects were reduced in sentences (vs. word pair paradigms) and were modulated by the case morphology of the prime word. We propose that priming effects in cross-modal experiments may have resulted from task-related strategies. More generally, the impact of sentence context and morphological information on priming effects suggests that they may depend on the extent to which the upcoming input is predicted, rather than automatic spreading activation between semantically related words.
The temporal dynamics of structure and content in sentence comprehension: Evidence from fMRI-constrained MEG
fMRI implicates the TPJ, PTL, ATL and IFG regions of the left hemisphere in the processing of linguistic structure. But what are the temporal dynamics of their involvement? This MEG study provides some initial answers.
Advanced second language learners' perception of lexical tone contrasts
Mandarin tones are difficult for advanced L2 learners. But the difficulty comes primarily from the need to process tones lexically, and not from an inability to perceive tones phonetically.
The role of the IFG and pSTS in syntactic prediction: evidence from a parametric study of hierarchical structure in fMRI
Postdoc William Matchin, with Ellen Lau and Baggett Fellow Chris Hammerly, finds a role for the anterior temporal lobe in semantic combination, and a role specifically in the comprehension of thematic relations for the angular gyrus/temporoparietal junction.
A Direct Comparison of N400 Effects of Predictability and Incongruity in Adjective-Noun Combination
The N400 is modulated both by association and by predictability: but independently? Only slightly, Ellen and her collaborators show, suggesting that its sensitivity to both does not come simply from difficulty integrating a word with its prior context.
The role of temporal predictability in semantic expectation: An MEG investigation
Is prediction of an upcoming item improved when its timing is predictable? Maybe yes for vision and audition, but evidently no for language, argue Ellen Lau and Elizabeth Nguyen.
Spatiotemporal signatures of lexical-semantic prediction
Ellen Lau finds evidence that facilitatory effects of lexical–semantic prediction on the electrophysiological response 350–450 ms postonset reflect modulation of activity in left anterior temporal cortex.
Additive effects of repetition and predictability on lexical semantic processing during comprehension
Word repetition and predictability have qualitatively similar and additive effects on the N400 amplitude in ERP.
Automatic semantic facilitation in anterior temporal cortex revealed through multimodal neuroimaging
Bottom-up effects of context on semantic memory, plumbed by a combination of electroencephalography (EEG), magnetoencephalography (MEG), and functional magnetic resonance imaging (fMRI) measurements in the same individuals.
Dissociating N400 effects of prediction from association in single word contexts
The N400 component in ERP is modulated both by the predictability of the stimulus, and by its congruence with the semantic context. Ellen Lau and collaborators show that the effect of the former is much greater.
A lexical basis for N400 context effects: Evidence from MEG
Within-subject MEG studies on the topography of N400 effects suggest that such effects reflect facilitated access to lexical information, and not difficulty integrating a word with its semantic context.
The electrophysiological response to words during the ‘N400’ time window (approximately 300–500 ms post-onset) is affected by the context in which the word is presented, but whether this effect reflects the impact of context on access of the stored lexical information itself or, alternatively, post-access integration processes is still an open question with substantive theoretical consequences. One challenge for integration accounts is that contexts that seem to require different levels of integration for incoming words (i.e., sentence frames vs. prime words) have similar effects on the N400 component measured in ERP. In this study we compare the effects of these different context types directly, in a within-subject design using MEG, which provides a better opportunity for identifying topographical differences between electrophysiological components, due to the minimal spatial distortion of the MEG signal. We find a qualitatively similar contextual effect for both sentence frame and prime-word contexts, although the effect is smaller in magnitude for shorter word prime contexts. Additionally, we observe no difference in response amplitude between sentence endings that are explicitly incongruent and target words that are simply part of an unrelated pair. These results suggest that the N400 effect does not reflect semantic integration difficulty. Rather, the data are consistent with an account in which N400 reduction reflects facilitated access of lexical information.
The Predictive Nature of Language Comprehension
Data from fMRI, MEG and EEG show that predictive processing plays a central role in language comprehension, for instance by facilitating lexical access, as indexed by N400 effects in ERP.
This dissertation explores the hypothesis that predictive processing—the access and construction of internal representations in advance of the external input that supports them—plays a central role in language comprehension. Linguistic input is frequently noisy, variable, and rapid, but it is also subject to numerous constraints. Predictive processing could be a particularly useful approach in language comprehension, as predictions based on the constraints imposed by the prior context could allow computation to be speeded and noisy input to be disambiguated. Decades of previous research have demonstrated that the broader sentence context has an effect on how new input is processed, but less progress has been made in determining the mechanisms underlying such contextual effects. This dissertation is aimed at advancing this second goal, by using both behavioral and neurophysiological methods to motivate predictive or top-down interpretations of contextual effects and to test particular hypotheses about the nature of the predictive mechanisms in question. The first part of the dissertation focuses on the lexical-semantic predictions made possible by word and sentence contexts. MEG and fMRI experiments, in conjunction with a meta-analysis of the previous neuroimaging literature, support the claim that an ERP effect classically observed in response to contextual manipulations—the N400 effect—reflects facilitation in processing due to lexical-semantic predictions, and that these predictions are realized at least in part through top-down changes in activity in left posterior middle temporal cortex, the cortical region thought to represent lexical-semantic information in long-term memory. The second part of the dissertation focuses on syntactic predictions. ERP and reaction time data suggest that the syntactic requirements of the prior context impact processing of the current input very early, and that predicting the syntactic position in which the requirements can be fulfilled may allow the processor to avoid a retrieval mechanism that is prone to similarity-based interference errors. In sum, the results described here are consistent with the hypothesis that a significant amount of language comprehension takes place in advance of the external input, and suggest future avenues of investigation towards understanding the mechanisms that make this possible.
Lingering effects of disfluent material on the comprehension of garden path sentences
Do we experience garden path effects when a disfluent speaker replaces one verb with another (as in "chosen, uh, I mean selected") and only one of the two yields the garden-path ambiguity?
In two experiments, we tested for lingering effects of verb replacement disfluencies on the processing of garden path sentences that exhibit the main verb/reduced relative (MV/RR) ambiguity. Participants heard sentences with revisions like "The little girl chosen, uh, selected for the role celebrated with her parents and friends." We found that the syntactic ambiguity associated with the reparandum verb involved in the disfluency (here "chosen") had an influence on later parsing: Garden path sentences that included such revisions were more likely to be judged grammatical if the reparandum verb was structurally unambiguous. Conversely, ambiguous non-garden path sentences were more likely to be judged ungrammatical if the structurally unambiguous disfluency verb was inconsistent with the final reading. Results support a model of disfluency processing in which the syntactic frame associated with the replacement verb "overlays" the previous verb's structure rather than actively deleting the already-built tree.