Ellen Lau

Associate Professor, Linguistics

Co-Director, KIT-Maryland MEG Lab

Faculty, Program in Neuroscience and Cognitive Science

3416 E Marie Mount Hall

Research Expertise

Neurolinguistics
Psycholinguistics

Publications

Moving away from lexicalism in psycho- and neuro-linguistics

Linguistics

Contributor(s): Ellen Lau, Alex Krauska

In standard models of language production or comprehension, the elements which are retrieved from memory and combined into a syntactic structure are “lemmas” or “lexical items.” Such models implicitly take a “lexicalist” approach, which assumes that lexical items store meaning, syntax, and form together, that syntactic and lexical processes are distinct, and that syntactic structure does not extend below the word level. Across the last several decades, linguistic research examining a typologically diverse set of languages has provided strong evidence against this approach. These findings suggest that syntactic processes apply both above and below the “word” level, and that both meaning and form are partially determined by the syntactic context. This has significant implications for psychological and neurological models of language processing as well as for the way that we understand different types of aphasia and other language disorders. As a consequence of the lexicalist assumptions of these models, many kinds of sentences that speakers produce and comprehend—in a variety of languages, including English—are challenging for them to account for. Here we focus on language production as a case study. In order to move away from lexicalism in psycho- and neuro-linguistics, it is not enough to simply update the syntactic representations of words or phrases; the processing algorithms involved in language production are constrained by the lexicalist representations that they operate on, and thus also need to be reimagined. We provide an overview of the arguments against lexicalism, discuss how lexicalist assumptions are represented in models of language production, and examine the types of phenomena that they struggle to account for as a consequence. We also outline what a non-lexicalist alternative might look like, as a model that does not rely on a lemma representation, but instead represents that knowledge as separate mappings between (a) meaning and syntax and (b) syntax and form, with a single integrated stage for the retrieval and assembly of syntactic structure. By moving away from lexicalist assumptions, this kind of model provides better cross-linguistic coverage and aligns better with contemporary syntactic theory.

The Binding Problem 2.0: Beyond Perceptual Features

On the problem of binding to object indices, beyond perceptual features.

Linguistics

Contributor(s): Ellen Lau, Xinchi Yu

The “binding problem” has been a central question in vision science for some 30 years: When encoding multiple objects or maintaining them in working memory, how are we able to represent the correspondence between a specific feature and its corresponding object correctly? In this letter we argue that the boundaries of this research program in fact extend far beyond vision, and we call for coordinated pursuit across the broader cognitive science community of this central question for cognition, which we dub “Binding Problem 2.0”.

A subject relative clause preference in a split-ergative language: ERP evidence from Georgian

Is processing subject-relative clauses easier even in an ergative language?

Linguistics

Contributor(s): Ellen Lau, Maria Polinsky
Non-ARHU Contributor(s): Nancy Clarke, Michaela Socolof, Rusudan Asatiani

A fascinating descriptive property of human language processing whose explanation is still debated is that subject-gap relative clauses are easier to process than object-gap relative clauses, across a broad range of languages with different properties. However, recent work suggests that this generalization does not hold in Basque, an ergative language, and has motivated an alternative generalization in which the preference is for gaps in morphologically unmarked positions—subjects in nominative-accusative languages, and objects and intransitive subjects in ergative-absolutive languages. Here we examined whether this generalization extends to another ergative-absolutive language, Georgian. ERP and self-paced reading results show a large anterior negativity and slower reading times when a relative clause is disambiguated to an object relative vs a subject relative. These data thus suggest that in at least some ergative-absolutive languages, the classic descriptive generalization—that object relative clauses are more costly than subject relative clauses—still holds.

Parallel processing in speech perception with local and global representations of linguistic context

MEG evidence for parallel representation of local and global context in speech processing.

Linguistics

Contributor(s): Ellen Lau, Philip Resnik, Shohini Bhattasali
Non-ARHU Contributor(s): Christian Brodbeck, Aura Cruz Heredia, Jonathan Simon

Speech processing is highly incremental. It is widely accepted that human listeners continuously use the linguistic context to anticipate upcoming concepts, words, and phonemes. However, previous evidence supports two seemingly contradictory models of how a predictive context is integrated with the bottom-up sensory input: Classic psycholinguistic paradigms suggest a two-stage process, in which acoustic input initially leads to local, context-independent representations, which are then quickly integrated with contextual constraints. This contrasts with the view that the brain constructs a single coherent, unified interpretation of the input, which fully integrates available information across representational hierarchies, and thus uses contextual constraints to modulate even the earliest sensory representations. To distinguish these hypotheses, we tested magnetoencephalography responses to continuous narrative speech for signatures of local and unified predictive models. Results provide evidence that listeners employ both types of models in parallel. Two local context models uniquely predict some part of early neural responses, one based on sublexical phoneme sequences, and one based on the phonemes in the current word alone; at the same time, even early responses to phonemes also reflect a unified model that incorporates sentence level constraints to predict upcoming phonemes. Neural source localization places the anatomical origins of the different predictive models in non-identical parts of the superior temporal lobes bilaterally, with the right hemisphere showing a relative preference for more local models. These results suggest that speech processing recruits both local and unified predictive models in parallel, reconciling previous disparate findings. Parallel models might make the perceptual system more robust, facilitate processing of unexpected inputs, and serve a function in language acquisition.

Processing adjunct control: Evidence on the use of structural information and prediction in reference resolution

How does online comprehension of adjunct control ("before eating") compare to resolution of pronominal anaphora ("before he ate")?

Linguistics | Philosophy

Contributor(s): Alexander Williams, Ellen Lau
Non-ARHU Contributor(s): Jeffrey J. Green (*18), Michael McCourt (*21)

The comprehension of anaphoric relations may be guided not only by discourse, but also by syntactic information. In the literature on online processing, however, the focus has been on audible pronouns and descriptions whose reference is resolved mainly by the former. This paper examines one relation that both lacks overt exponence and relies almost exclusively on syntax for its resolution: adjunct control, or the dependency between the null subject of a non-finite adjunct and its antecedent in sentences such as Mickey talked to Minnie before ___ eating. Using visual-world eyetracking, we compare the timecourse of interpreting this null subject and overt pronouns (Mickey talked to Minnie before he ate). We show that when control structures are highly frequent, listeners are just as quick to resolve reference in either case. When control structures are less frequent, reference resolution based on structural information still occurs upon hearing the non-finite verb, but more slowly, especially when unaided by structural and referential predictions. This may be due to increased difficulty in recognizing that a referential dependency is necessary. These results indicate that in at least some contexts, referential expressions whose resolution depends on very different sources of information can be resolved approximately equally rapidly, and that the speed of interpretation is largely independent of whether or not the dependency is cued by an overt referring expression.

Enough time to get results? An ERP investigation of prediction with complex events

How quickly can verb-argument relations be computed to impact predictions of a subsequent argument? This paper examines the question by comparing two kinds of compound verbs in Mandarin and measuring neural responses to the direct object that follows.

Linguistics

Contributor(s): Ellen Lau
Non-ARHU Contributor(s): Chia-Hsuan Liao (*20)

How quickly can verb-argument relations be computed to impact predictions of a subsequent argument? We take advantage of the substantial differences in verb-argument structure provided by Mandarin, whose compound verbs encode complex event relations, such as resultatives (kid bit-broke lip: 'the kid bit his lip such that it broke') and coordinates (store owner hit-scolded employee 'the store owner hit and scolded an employee'). We tested sentences in which the object noun could be predicted on the basis of the preceding compound verb, and used N400 responses to the noun to index successful prediction. By varying the delay between verb and noun, we show that prediction is delayed in the resultative context (broken-BY-biting) relative to the coordinate one (hitting-AND-scolding). These results present a first step towards temporally dissociating the fine-grained subcomputations required to parse and interpret verb-argument relations.

Error-Driven Retrieval in Agreement Attraction Rarely Leads to Misinterpretation

"The bed by the lamps were undoubtedly quite bright." Does making this mistake in agreement, "were" instead of "was," make you less likely to notice the oddity of describing a bed as bright? This study shows that normally the answer is No.

Linguistics

Contributor(s): Ellen Lau
Non-ARHU Contributor(s): Zoe Schlueter (*17), Dan Parker (*14)

Previous work on agreement computation in sentence comprehension motivates a model in which the parser predicts the verb’s number and engages in retrieval of the agreement controller only when it detects a mismatch between the prediction and the bottom-up input. It is the error-driven second stage of this process that is prone to similarity-based interference and can result in the illusory licensing of a subject–verb number agreement violation in the presence of a structurally irrelevant noun matching the number marking on the verb (‘The bed by the lamps were…’), giving rise to an effect known as ‘agreement attraction’. Here we ask to what extent the error-driven retrieval process underlying the illusory licensing alters the structural and thematic representation of the sentence. We use a novel dual-task paradigm that combines self-paced reading with a speeded forced choice task to investigate whether agreement attraction leads comprehenders to erroneously interpret the attractor as the thematic subject, which would indicate structural reanalysis. Participants read sentence fragments (‘The bed by the lamp/lamps was/were undoubtedly quite’) and completed the sentences by choosing between two adjectives (‘comfortable’/’bright’) which were either compatible with the subject’s head noun or with the attractor. We found the expected agreement attraction profile in the self-paced reading data, but the interpretive error occurred on only a small subset of attraction trials, suggesting that agreement attraction only rarely leads comprehenders to reinterpret the attractor as the thematic subject. We propose that illusory licensing of an agreement violation often reflects a low-level rechecking process that is only concerned with number and does not have an impact on the structural representation of the sentence. Interestingly, this suggests that error-driven repair processes can result in a globally inconsistent final sentence representation with a persistent mismatch between the subject and the verb.

Antecedent access mechanisms in pronoun processing: Evidence from the N400

Lexical decisions to a word after a pronoun are facilitated when it is semantically related to the pronoun’s antecedent. These priming effects may depend not on automatic spreading activation, but on the extent to which the relevant word is predicted.

Linguistics

Contributor(s): Ellen Lau
Non-ARHU Contributor(s): Sol Lago (*14), Anna Namyst, Lena Jäger

Previous cross-modal priming studies showed that lexical decisions to words after a pronoun were facilitated when these words were semantically related to the pronoun’s antecedent. These studies suggested that semantic priming effectively measured antecedent retrieval during coreference. We examined whether these effects extended to implicit reading comprehension using the N400 response. The results of three experiments did not yield strong evidence of semantic facilitation due to coreference. Further, the comparison with two additional experiments showed that N400 facilitation effects were reduced in sentences (vs. word pair paradigms) and were modulated by the case morphology of the prime word. We propose that priming effects in cross-modal experiments may have resulted from task-related strategies. More generally, the impact of sentence context and morphological information on priming effects suggests that they may depend on the extent to which the upcoming input is predicted, rather than automatic spreading activation between semantically related words.

The temporal dynamics of structure and content in sentence comprehension: Evidence from fMRI-constrained MEG

fMRI implicates the TPJ, PTL, ATL and IFG regions of the left hemisphere in the processing of linguistic structure. But what are the temporal dynamics of their involvement? This MEG study provides some initial answers.

Linguistics

Contributor(s): Ellen Lau
Non-ARHU Contributor(s): William Matchin, Chris Hammerly, Christian Brodbeck
Humans have a striking capacity to combine words into sentences that express new meanings. Previous research has identified key brain regions involved in this capacity, but little is known about the time course of activity in these regions, as hemodynamic methods such as fMRI provide little insight into temporal dynamics of neural activation. We performed an MEG experiment to elucidate the temporal dynamics of structure and content processing within four brain regions implicated by fMRI data from the same experiment: the temporo-parietal junction (TPJ), the posterior temporal lobe (PTL), the anterior temporal lobe (ATL), and the anterior inferior frontal gyrus (IFG). The TPJ showed increased activity for both structure and content near the end of the sentence, consistent with a role in incremental interpretation of event semantics. The PTL, a region not often associated with core aspects of syntax, showed a strong early effect of structure, consistent with predictive parsing models, and both structural and semantic context effects on function words. These results provide converging evidence that the PTL plays an important role in lexicalized syntactic processing. The ATL and IFG, regions traditionally associated with syntax, showed minimal effects of sentence structure. The ATL, PTL and IFG all showed effects of semantic content: increased activation for real words relative to nonwords. Our fMRI-guided MEG investigation therefore helps identify syntactic and semantic aspects of sentence comprehension in the brain in both spatial and temporal dimensions.

Advanced second language learners' perception of lexical tone contrasts

Mandarin tones are difficult for advanced L2 learners. But the difficulty comes primarily from the need to process tones lexically, and not from an inability to perceive tones phonetically.

Linguistics

Contributor(s): Ellen Lau
Non-ARHU Contributor(s): Eric Pelzl, Taomei Guo, Robert DeKeyser
It is commonly believed that second language (L2) acquisition of lexical tones presents a major challenge for learners from nontonal language backgrounds. This belief is somewhat at odds with research that consistently shows beginning learners making quick gains through focused tone training, as well as research showing advanced learners achieving near-native performance in tone identification tasks. However, other long-term difficulties related to L2 tone perception may persist, given the additional demands of word recognition and the effects of context. In the current study, we used behavioral and event-related potential (ERP) experiments to test whether perception of Mandarin tones is difficult for advanced L2 learners in isolated syllables, disyllabic words in isolation, and disyllabic words in sentences. Stimuli were more naturalistic and challenging than in previous research. While L2 learners excelled at tone identification in isolated syllables, they performed with very low accuracy in rejecting disyllabic tonal nonwords in isolation and in sentences. We also report ERP data from critical mismatching words in sentences; while L2 listeners showed no significant differences in responses in any condition, trends were not inconsistent with the overall pattern in behavioral results of less sensitivity to tone mismatches than to semantic or segmental mismatches. We interpret these results as evidence that Mandarin tones are in fact difficult for advanced L2 learners. However, the difficulty is not due primarily to an inability to perceive tones phonetically, but instead is driven by the need to process tones lexically, especially in multisyllable words.

The role of the IFG and pSTS in syntactic prediction: evidence from a parametric study of hierarchical structure in fMRI

Postdoc William Matchin, with Ellen Lau and Baggett Fellow Chris Hammerly, finds a role for the anterior temporal lobe in semantic combination, and a role specifically in the comprehension of thematic relations for the angular gyrus/temporo-parietal junction.

Linguistics

Contributor(s): Ellen Lau
Non-ARHU Contributor(s): William Matchin, Chris Hammerly
Sentences encode hierarchical structural relations among words. Several neuroimaging experiments aiming to localize combinatory operations responsible for creating this structure during sentence comprehension have contrasted short, simple phrases and sentences to unstructured controls. Some of these experiments have revealed activation in the left inferior frontal gyrus (IFG) and posterior superior temporal sulcus (pSTS), associating these regions with basic syntactic combination. However, the wide variability of these effects across studies raises questions about this interpretation. In an fMRI experiment, we provide support for an alternative hypothesis: these regions underlie top-down syntactic predictions that facilitate sentence processing but are not necessary for building syntactic structure. We presented stimuli with three levels of structure: unstructured lists, two-word phrases, and simple, short sentences; and two levels of content: natural stimuli with real words and stimuli with open-class items replaced with pseudowords (jabberwocky). While both the phrase and sentence conditions engaged syntactic combination, our experiment only encouraged syntactic prediction in the sentence condition. We found increased activity for both natural and jabberwocky sentences in the left IFG (pars triangularis and pars orbitalis) and pSTS relative to unstructured word lists and two-word phrases, but we did not find any such effects for two-word phrases relative to unstructured word lists in these areas. Our results are most consistent with the hypothesis that increased activity in IFG and pSTS for basic contrasts of structure reflects syntactic prediction. The pars opercularis of the IFG showed a response profile consistent with verbal working memory. We found incremental effects of structure in the anterior temporal lobe (ATL), and increased activation only for sentences in the angular gyrus (AG)/temporo-parietal junction (TPJ); both regions showed these effects for stimuli with all real words. These findings support a role for the ATL in semantic combination and the AG/TPJ in thematic processing.

A Direct Comparison of N400 Effects of Predictability and Incongruity in Adjective-Noun Combination

The N400 is modulated both by semantic congruity and by predictability. But how much does congruity matter when predictability is held constant? Only slightly, Ellen and her collaborators show, suggesting that the N400's sensitivity to both does not come just from trouble integrating a word with its prior context.

Linguistics

Contributor(s): Ellen Lau
Non-ARHU Contributor(s): Anna Namyst, Allison Fogel, Tania Delgado
Previous work has shown that the N400 ERP component is elicited by all words, whether presented in isolation or in structured contexts, and that its amplitude is modulated by semantic association and contextual predictability. What is less clear is the extent to which the N400 response is modulated by semantic incongruity when predictability is held constant. In the current study we examine N400 modulation associated with independent manipulations of predictability and congruity in an adjective-noun paradigm that allows us to precisely control predictability through corpus counts. Our results demonstrate small N400 effects of semantic congruity (yellow bag vs. innocent bag), and much more robust N400 effects of predictability (runny nose vs. dainty nose) under the same conditions. These data argue against unitary N400 theories according to which N400 effects of both predictability and incongruity reflect a common process such as degree of integration difficulty, as large N400 effects of predictability were observed in the absence of large N400 effects of incongruity. However, the data are consistent with some versions of unitary ‘facilitated access’ N400 theories, as well as multiple-generator accounts according to which the N400 can be independently modulated by facilitated conceptual/lexical access (as with predictability) and integration difficulty (as with incongruity, perhaps to a greater extent in full sentential contexts).

The role of temporal predictability in semantic expectation: An MEG investigation

Is prediction of an upcoming item improved when its timing is predictable? Maybe yes for vision and audition, but evidently no for language, argue Ellen Lau and Elizabeth Nguyen.

Linguistics

Contributor(s): Ellen Lau
Non-ARHU Contributor(s): Elizabeth Nguyen
Prior research suggests that prediction of semantic and syntactic information prior to the bottom-up input is an important component of language comprehension. Recent work in basic visual and auditory perception suggests that the ability to predict features of an upcoming stimulus is even more valuable when the exact timing of the stimulus presentation can also be predicted. However, it is unclear whether lexical-semantic predictions are similarly locked to a particular time, as previous studies of semantic predictability have used a predictable presentation rate. In the current study we vary the temporal predictability of target word presentation in the visual modality and examine the consequences for effects of semantic predictability on the event-related N400 response component, as measured with magnetoencephalography (MEG). Although we observe robust effects of semantic predictability on the N400 response, we find no evidence that these effects are larger in the presence of temporal predictability. These results suggest that, at least in the visual modality, lexical-semantic predictions may be maintained over a broad time-window, which could allow predictive facilitation to survive the presence of optional modifiers in natural language settings. The results also indicate that the mechanisms supporting predictive facilitation may vary in important ways across tasks and cognitive domains.

Spatiotemporal signatures of lexical-semantic prediction

Ellen Lau finds evidence that facilitatory effects of lexical–semantic prediction on the electrophysiological response 350–450 ms postonset reflect modulation of activity in left anterior temporal cortex.

Linguistics

Contributor(s): Ellen Lau
Non-ARHU Contributor(s): Kirsten Weber, Alexandre Gramfort, Matti Hamalainen, Gina Kuperberg
Although there is broad agreement that top-down expectations can facilitate lexical–semantic processing, the mechanisms driving these effects are still unclear. In particular, while previous electroencephalography (EEG) research has demonstrated a reduction in the N400 response to words in a supportive context, it is often challenging to dissociate facilitation due to bottom-up spreading activation from facilitation due to top-down expectations. The goal of the current study was to specifically determine the cortical areas associated with facilitation due to top-down prediction, using magnetoencephalography (MEG) recordings supplemented by EEG and functional magnetic resonance imaging (fMRI) in a semantic priming paradigm. In order to modulate expectation processes while holding context constant, we manipulated the proportion of related pairs across 2 blocks (10 and 50% related). Event-related potential results demonstrated a larger N400 reduction when a related word was predicted, and MEG source localization of activity in this time-window (350–450 ms) localized the differential responses to left anterior temporal cortex. fMRI data from the same participants support the MEG localization, showing contextual facilitation in left anterior superior temporal gyrus for the high expectation block only. Together, these results provide strong evidence that facilitatory effects of lexical–semantic prediction on the electrophysiological response 350–450 ms postonset reflect modulation of activity in left anterior temporal cortex.

Additive effects of repetition and predictability on lexical semantic processing during comprehension

Word repetition and predictability have qualitatively similar and additive effects on the N400 amplitude in ERP.

Linguistics

Contributor(s): Ellen Lau
Non-ARHU Contributor(s): Wing Yee Chow, Sol Lago, Shannon Barrios, Dan Parker, Giovanna Morini
Previous research has shown that neural responses to words during sentence comprehension are sensitive to both lexical repetition and a word’s predictability in context. While previous research has often contrasted the effects of these variables (e.g. by looking at cases in which word repetition violates sentence-level constraints), little is known about how they work in tandem. In the current study we examine how recent exposure to a word and its predictability in context combine to impact lexical semantic processing. We devise a novel paradigm that combines reading comprehension with a recognition memory task, allowing for an orthogonal manipulation of a word’s predictability and its repetition status. Using event-related brain potentials (ERPs), we show that word repetition and predictability have qualitatively similar and additive effects on the N400 amplitude. We propose that prior exposure to a word and predictability impact lexical semantic processing in an additive and independent fashion.

Automatic semantic facilitation in anterior temporal cortex revealed through multimodal neuroimaging

Bottom-up effects of context on semantic memory, plumbed by a combination of electroencephalography (EEG), magnetoencephalography (MEG), and functional magnetic resonance imaging (fMRI) measurements in the same individuals.

Linguistics

Contributor(s): Ellen Lau
Non-ARHU Contributor(s): Alexandre Gramfort, Matti Hamalainen, Gina Kuperberg
A core property of human semantic processing is the rapid, facilitatory influence of prior input on extracting the meaning of what comes next, even under conditions of minimal awareness. Previous work has shown a number of neurophysiological indices of this facilitation, but the mapping between time course and localization—critical for separating automatic semantic facilitation from other mechanisms—has thus far been unclear. In the current study, we used a multimodal imaging approach to isolate early, bottom-up effects of context on semantic memory, acquiring a combination of electroencephalography (EEG), magnetoencephalography (MEG), and functional magnetic resonance imaging (fMRI) measurements in the same individuals with a masked semantic priming paradigm. Across techniques, the results provide a strikingly convergent picture of early automatic semantic facilitation. Event-related potentials demonstrated early sensitivity to semantic association between 300 and 500 ms; MEG localized the differential neural response within this time window to the left anterior temporal cortex, and fMRI localized the effect more precisely to the left anterior superior temporal gyrus, a region previously implicated in semantic associative processing. However, fMRI diverged from early EEG/MEG measures in revealing semantic enhancement effects within frontal and parietal regions, perhaps reflecting downstream attempts to consciously access the semantic features of the masked prime. Together, these results provide strong evidence that automatic associative semantic facilitation is realized as reduced activity within the left anterior superior temporal cortex between 300 and 500 ms after a word is presented, and emphasize the importance of multimodal neuroimaging approaches in distinguishing the contributions of multiple regions to semantic processing.

Dissociating N400 effects of prediction from association in single word contexts

The N400 component in ERP is modulated both by the predictability of the stimulus and by its semantic association with the prior context. Ellen Lau and collaborators show that the effect of the former is much greater.

Linguistics

Contributor(s): Ellen Lau
Non-ARHU Contributor(s): Phillip Holcomb, Gina Kuperberg
When a word is preceded by a supportive context such as a semantically associated word or a strongly constraining sentence frame, the N400 component of the ERP is reduced in amplitude. An ongoing debate is the degree to which this reduction reflects a passive spread of activation across long-term semantic memory representations as opposed to specific predictions about upcoming input. We addressed this question by embedding semantically associated prime-target pairs within an experimental context that encouraged prediction to a greater or lesser degree. The proportion of related items was used to manipulate the predictive validity of the prime for the target while holding semantic association constant. A semantic category probe detection task was used to encourage semantic processing and to preclude the need for a motor response on the trials of interest. A larger N400 reduction to associated targets was observed in the high than the low relatedness proportion condition, consistent with the hypothesis that predictions about upcoming stimuli make a substantial contribution to the N400 effect. We also observed an earlier priming effect (205-240 ms) in the high proportion condition, which may reflect facilitation due to form-based prediction. In sum, the results suggest that predictability modulates N400 amplitude to a greater degree than the semantic content of the context.

A lexical basis for N400 context effects: Evidence from MEG

Within-subject MEG studies on the topography of N400 effects suggest that such effects reflect facilitated access to lexical information, and not difficulty integrating a word with its semantic context.

Linguistics

Contributor(s): Ellen Lau
Non-ARHU Contributor(s): Diogo Almeida, Paul Hines, David Poeppel

The electrophysiological response to words during the ‘N400’ time window (approximately 300–500 ms post-onset) is affected by the context in which the word is presented, but whether this effect reflects the impact of context on access of the stored lexical information itself or, alternatively, post-access integration processes is still an open question with substantive theoretical consequences. One challenge for integration accounts is that contexts that seem to require different levels of integration for incoming words (i.e., sentence frames vs. prime words) have similar effects on the N400 component measured in ERP. In this study we compare the effects of these different context types directly, in a within-subject design using MEG, which provides a better opportunity for identifying topographical differences between electrophysiological components, due to the minimal spatial distortion of the MEG signal. We find a qualitatively similar contextual effect for both sentence frame and prime-word contexts, although the effect is smaller in magnitude for the shorter prime-word contexts. Additionally, we observe no difference in response amplitude between sentence endings that are explicitly incongruent and target words that are simply part of an unrelated pair. These results suggest that the N400 effect does not reflect semantic integration difficulty. Rather, the data are consistent with an account in which N400 reduction reflects facilitated access of lexical information.

The Predictive Nature of Language Comprehension

Data from fMRI, MEG and EEG show that predictive processing plays a central role in language comprehension, for instance by facilitating lexical access, as indexed by N400 effects in ERP.

Linguistics

Contributor(s): Ellen Lau

This dissertation explores the hypothesis that predictive processing—the access and construction of internal representations in advance of the external input that supports them—plays a central role in language comprehension. Linguistic input is frequently noisy, variable, and rapid, but it is also subject to numerous constraints. Predictive processing could be a particularly useful approach in language comprehension, as predictions based on the constraints imposed by the prior context could allow computation to be speeded and noisy input to be disambiguated. Decades of previous research have demonstrated that the broader sentence context has an effect on how new input is processed, but less progress has been made in determining the mechanisms underlying such contextual effects. This dissertation is aimed at advancing this second goal, by using both behavioral and neurophysiological methods to motivate predictive or top-down interpretations of contextual effects and to test particular hypotheses about the nature of the predictive mechanisms in question. The first part of the dissertation focuses on the lexical-semantic predictions made possible by word and sentence contexts. MEG and fMRI experiments, in conjunction with a meta-analysis of the previous neuroimaging literature, support the claim that an ERP effect classically observed in response to contextual manipulations—the N400 effect—reflects facilitation in processing due to lexical-semantic predictions, and that these predictions are realized at least in part through top-down changes in activity in left posterior middle temporal cortex, the cortical region thought to represent lexical-semantic information in long-term memory. The second part of the dissertation focuses on syntactic predictions. ERP and reaction time data suggest that the syntactic requirements of the prior context impact processing of the current input very early, and that predicting the syntactic position in which the requirements can be fulfilled may allow the processor to avoid a retrieval mechanism that is prone to similarity-based interference errors. In sum, the results described here are consistent with the hypothesis that a significant amount of language comprehension takes place in advance of the external input, and suggest future avenues of investigation towards understanding the mechanisms that make this possible.

Lingering effects of disfluent material on the comprehension of garden path sentences

Do we experience garden path effects when a disfluent speaker replaces one verb with another (as in "chosen, uh, I mean selected") and only one of the two yields the garden-path ambiguity?

Linguistics

Contributor(s): Ellen Lau
Non-ARHU Contributor(s): Fernanda Ferreira

In two experiments, we tested for lingering effects of verb replacement disfluencies on the processing of garden path sentences that exhibit the main verb/reduced relative (MV/RR) ambiguity. Participants heard sentences with revisions like "The little girl chosen, uh, selected for the role celebrated with her parents and friends." We found that the syntactic ambiguity associated with the reparandum verb involved in the disfluency (here "chosen") had an influence on later parsing: Garden path sentences that included such revisions were more likely to be judged grammatical if the reparandum verb was structurally unambiguous. Conversely, ambiguous non-garden path sentences were more likely to be judged ungrammatical if the structurally unambiguous disfluency verb was inconsistent with the final reading. Results support a model of disfluency processing in which the syntactic frame associated with the replacement verb ‘‘overlays’’ the previous verb’s structure rather than actively deleting the already-built tree.
