MEG study of predictive speech processing in eLife
January 21, 2022
Evidence for parallel processing from Christian and Ellen, with Philip, Shohini, Aura, and Jonathan Simon.
Magnetoencephalography (MEG) gives us neuroanatomical evidence for "Parallel processing in speech perception with local and global representations of linguistic context," according to a study now out in the open-access journal eLife, from former postdoc Christian Brodbeck (now Assistant Research Professor at UConn) with a UMD supergroup comprising Ellen Lau, Philip Resnik, current postdoc Shohini Bhattasali, Ellen's former RA Aura Cruz Heredia (now working towards a PhD at Penn), and Jonathan Simon from Electrical and Computer Engineering.

The paper asks "how a predictive context is integrated with the bottom-up sensory input" in speech perception, and seeks evidence in "magnetoencephalography responses to continuous narrative speech." The results suggest that listeners make parallel use of two sorts of "predictive models" that have previously been regarded as exclusive alternatives, reconciling earlier disagreements: not only "local, context-independent representations, which are quickly integrated with contextual constraints," but also "a single coherent, unified [global] interpretation of the input, which fully integrates available information across representational hierarchies, and thus uses contextual constraints to modulate even the earliest sensory representations."

The team also finds evidence for where in the brain these two representations are computed: "Neural source localization places the anatomical origins of the different predictive models in non-identical parts of the superior temporal lobes bilaterally, with the right hemisphere showing a relative preference for more local models."
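To make the local/global contrast concrete: either kind of predictive model can be cast as assigning a probability to the upcoming input, with surprisal (−log₂ p) indexing how unexpected that input is under the model. The sketch below is purely illustrative, with invented probabilities and word-level units rather than the phoneme-level predictors the paper itself derives; it is not the authors' analysis code.

```python
# Illustrative sketch (not the authors' code): the same upcoming word can be
# more or less surprising depending on how much context the predictive model
# conditions on. Probabilities here are invented toy values.

import math

def surprisal(p: float) -> float:
    """Surprisal in bits: -log2(p). Higher means less expected."""
    return -math.log2(p)

# Hypothetical probabilities for the word "bone" after "the dog chewed a ..."
p_local = 0.0005   # context-independent model: lexical frequency alone
p_global = 0.30    # context-dependent model: conditioned on the sentence so far

print(f"local (context-free) surprisal:    {surprisal(p_local):.2f} bits")
print(f"global (context-based) surprisal:  {surprisal(p_global):.2f} bits")
```

On the study's view, neural responses track both quantities in parallel rather than only one, which is why predictors derived from local and global context models can each explain distinct variance in the MEG signal.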