
[GLLaM] Shohini Bhattasali - Exploring the neural correlates of context in sentence processing

A portrait of postdoctoral fellow Shohini Bhattasali, outside and smiling at the camera


Linguistics | Friday, October 23, 2020, 12:15 pm - 1:30 pm | Online

Context guides comprehenders’ expectations during language processing. In this talk, I will discuss the roles of local context and broad context during natural language comprehension. Information-theoretic surprisal (Hale, 2001; Levy, 2008) can be used to capture both types of contextual cues. Surprisal can be interpreted as “the degree to which the actually perceived word deviates from expectation” (Lopopolo et al., 2017), where that expectation can be based on information from the immediately preceding words or from previous sentences and paragraphs.
Using surprisal, we can examine how the use of local and broader context is reflected in processing, through an analysis of fMRI time courses collected during naturalistic listening. While previous work has probed the neural correlates of lexical and syntactic surprisal using computational measures (Brennan et al., 2016), to our knowledge modeling with surprisal has not been extended beyond sentences to include broader context. Lexical surprisal, estimated using an LSTM (long short-term memory) language model, represents local context (van Schijndel & Linzen, 2018). For broader topical context, we use a new metric, topical surprisal (Bhattasali & Resnik, 2020), estimated using an LDA topic model. Our results illustrate that various regions of the language network functionally contribute to processing different dimensions of contextual information.
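
As a concrete illustration of the surprisal measure underlying both analyses, the sketch below computes word-by-word surprisal, -log2 P(word | context), over a toy corpus. It is only a minimal sketch: a smoothed bigram model stands in for the LSTM language model (local context) and the LDA topic model (broad context) used in the study; the same information-theoretic definition applies, with those richer models supplying the conditional probabilities.

```python
# Minimal sketch of word-by-word surprisal (Hale, 2001; Levy, 2008).
# A smoothed bigram model is a toy stand-in for the LSTM (local context)
# and LDA topic model (broad context) described in the abstract.
import math
from collections import Counter, defaultdict

training_text = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Estimate P(w_t | w_{t-1}) with add-one smoothing over the toy corpus.
vocab = set(training_text)
bigram_counts = defaultdict(Counter)
for prev, curr in zip(training_text, training_text[1:]):
    bigram_counts[prev][curr] += 1

def bigram_prob(prev, curr):
    counts = bigram_counts[prev]
    return (counts[curr] + 1) / (sum(counts.values()) + len(vocab))

def surprisal(prev, curr):
    """Surprisal in bits: how unexpected `curr` is given its context."""
    return -math.log2(bigram_prob(prev, curr))

test_sentence = "the dog sat on the mat .".split()
for prev, curr in zip(test_sentence, test_sentence[1:]):
    print(f"P({curr!r} | {prev!r}) -> surprisal = {surprisal(prev, curr):.2f} bits")
```

In the setup described in the abstract, per-word surprisal values of this kind are then related to the fMRI time courses collected during naturalistic listening.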


Organization: Linguistics

Website: GLLaM page