Neurolinguistics

Linguists often take their object of study to be mental representations. Neurolinguistics, or the cognitive neuroscience of language, measures brain activity to probe these representations. 

Neurophysiological techniques can give us more precise information about the time course of language processing, or allow us to measure subtle perceptual distinctions without the need for an artificial task. We can also use these techniques to ask questions about the neural implementation of language itself. Where are phonemic, semantic and syntactic representations stored? What kind of neural code is used to concatenate smaller pieces into larger structures? What is the wiring between areas that allows different types of information to contribute to disambiguation? And are there brain structures that are innately designated for language? Much is also still unknown about the measures themselves, so cognitive neuroscience studies of language can contribute more broadly to a better understanding of techniques like MEG and fMRI.
 
Faculty and students at Maryland engage in many of these questions, often by examining a language other than English, when that language is better suited to addressing a problem of interest. The department was one of the first sites in the country to have a fully staffed MEG (magnetoencephalography) facility devoted to research. By recording changes in the magnetic field around the head associated with brain activity, researchers at Maryland have gained significant insights into the processing of auditory, phonological, morphological and lexical-semantic information (e.g., using Turkish to demonstrate that some dimensions of vowel space are paralleled in the location of the early MEG response). The department also houses an EEG (electroencephalography) lab for recording ERPs (event-related potentials) on the scalp. ERP research in the department has examined many aspects of sentence comprehension, including the relative independence of syntactic and semantic processing (in Spanish and Chinese) and differential predictors of tense marking (in Hindi), and there is growing interest in using ERP measures to test computational models of linguistic knowledge. Maryland researchers also have access to a third major non-invasive cognitive neuroscience technique at the Maryland Neuroimaging Center, with state-of-the-art MRI/fMRI facilities. This center opens the door for multimodal imaging research that can combine the temporal precision of EEG/MEG with the spatial specificity of fMRI to provide a more complete view of language processing in the brain.
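
As a rough illustration of the kind of analysis these EEG/MEG facilities support, the sketch below shows the textbook logic of ERP estimation: segments of the continuous recording are time-locked to stimulus onsets, baseline-corrected against the pre-stimulus interval, and averaged so that activity unrelated to the stimulus tends to cancel out. This is a minimal sketch only; the function name, array shapes, and window parameters are illustrative assumptions, not a description of any particular lab's pipeline.

```python
import numpy as np

def compute_erp(eeg, onsets, sfreq, tmin=-0.2, tmax=0.8):
    """Average stimulus-locked EEG/MEG epochs into an evoked response.

    eeg    : array of shape (n_channels, n_samples), continuous recording
    onsets : sample indices of stimulus onsets
    sfreq  : sampling rate in Hz
    tmin, tmax : epoch window in seconds relative to each onset
    """
    start, stop = int(tmin * sfreq), int(tmax * sfreq)
    epochs = []
    for onset in onsets:
        if onset + start < 0 or onset + stop > eeg.shape[1]:
            continue  # skip events too close to the edges of the recording
        epoch = eeg[:, onset + start:onset + stop]
        # Baseline-correct each channel using the pre-stimulus interval
        baseline = epoch[:, :-start].mean(axis=1, keepdims=True)
        epochs.append(epoch - baseline)
    # Averaging across trials suppresses activity that is not time-locked
    # to the stimulus, leaving the event-related response.
    return np.mean(epochs, axis=0)
```

The same time-locked averaging logic underlies evoked MEG responses, with magnetic field sensors in place of scalp electrodes.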

Moving away from lexicalism in psycho- and neuro-linguistics

Linguistics

Contributor(s): Ellen Lau, Alex Krauska

In standard models of language production or comprehension, the elements which are retrieved from memory and combined into a syntactic structure are “lemmas” or “lexical items.” Such models implicitly take a “lexicalist” approach, which assumes that lexical items store meaning, syntax, and form together, that syntactic and lexical processes are distinct, and that syntactic structure does not extend below the word level. Across the last several decades, linguistic research examining a typologically diverse set of languages has provided strong evidence against this approach. These findings suggest that syntactic processes apply both above and below the “word” level, and that both meaning and form are partially determined by the syntactic context. This has significant implications for psychological and neurological models of language processing, as well as for the way that we understand different types of aphasia and other language disorders. Because of their lexicalist assumptions, these models struggle to account for many kinds of sentences that speakers produce and comprehend in a variety of languages, including English. Here we focus on language production as a case study. In order to move away from lexicalism in psycho- and neuro-linguistics, it is not enough to simply update the syntactic representations of words or phrases; the processing algorithms involved in language production are constrained by the lexicalist representations that they operate on, and thus also need to be reimagined. We provide an overview of the arguments against lexicalism, discuss how lexicalist assumptions are represented in models of language production, and examine the types of phenomena that they struggle to account for as a consequence. We also outline what a non-lexicalist alternative might look like, as a model that does not rely on a lemma representation, but instead represents lexical knowledge as separate mappings between (a) meaning and syntax and (b) syntax and form, with a single integrated stage for the retrieval and assembly of syntactic structure. By moving away from lexicalist assumptions, this kind of model provides better cross-linguistic coverage and aligns better with contemporary syntactic theory.
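
To make the contrast concrete, here is a deliberately simplified Python sketch, invented for illustration rather than taken from the paper: a lexicalist “lemma” stores meaning, syntax, and form as a single unit, whereas the non-lexicalist alternative keeps two separate mappings, one from meaning to syntactic features and one from syntactic structure to form, so that the pronounced form is fixed only once the syntactic context (here, a tense feature) is known.

```python
from dataclasses import dataclass

# Lexicalist picture: one stored "lemma" bundles meaning, syntax, and form.
@dataclass
class Lemma:
    meaning: str   # e.g. the concept SING
    category: str  # e.g. "V"
    form: str      # fixed form, independent of syntactic context

sing_lemma = Lemma(meaning="SING", category="V", form="sing")

# Non-lexicalist sketch: two separate mappings, with the form spelled out
# only after the syntactic structure (here just root + tense) is assembled.
MEANING_TO_SYNTAX = {"SING": {"category": "V", "root": "SING"}}
SYNTAX_TO_FORM = {
    ("SING", "present"): "sing",
    ("SING", "past"): "sang",   # context-dependent exponent, no stored "word"
}

def spell_out(meaning, tense):
    """Map meaning to syntactic features, then syntax (root + tense) to form."""
    node = MEANING_TO_SYNTAX[meaning]
    return SYNTAX_TO_FORM[(node["root"], tense)]

print(sing_lemma.form)            # "sing" -- form fixed at lexical retrieval
print(spell_out("SING", "past"))  # "sang" -- form determined in context
```

In this toy example the irregular past tense falls out of the syntax-to-form mapping rather than being stored on a word-sized unit, which is the kind of reorganization the abstract argues processing models need.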


A subject relative clause preference in a split-ergative language: ERP evidence from Georgian

Is processing subject-relative clauses easier even in an ergative language?

Linguistics

Contributor(s): Ellen Lau, Maria Polinsky
Non-ARHU Contributor(s): Nancy Clarke, Michaela Socolof, Rusudan Asatiani

A fascinating descriptive property of human language processing, whose explanation is still debated, is that subject-gap relative clauses are easier to process than object-gap relative clauses across a broad range of languages with different properties. However, recent work suggests that this generalization does not hold in Basque, an ergative language, and has motivated an alternative generalization in which the preference is for gaps in morphologically unmarked positions: subjects in nominative-accusative languages, and objects and intransitive subjects in ergative-absolutive languages. Here we examined whether this generalization extends to another ergative-absolutive language, Georgian. ERP and self-paced reading results show a large anterior negativity and slower reading times when a relative clause is disambiguated to an object relative vs. a subject relative. These data thus suggest that in at least some ergative-absolutive languages, the classic descriptive generalization, that object relative clauses are more costly than subject relative clauses, still holds.


Parallel processing in speech perception with local and global representations of linguistic context

MEG evidence for parallel representation of local and global context in speech processing.

Linguistics

Contributor(s): Ellen Lau, Philip Resnik, Shohini Bhattasali
Non-ARHU Contributor(s): Christian Brodbeck, Aura Cruz Heredia, Jonathan Simon

Speech processing is highly incremental. It is widely accepted that human listeners continuously use the linguistic context to anticipate upcoming concepts, words, and phonemes. However, previous evidence supports two seemingly contradictory models of how a predictive context is integrated with the bottom-up sensory input: Classic psycholinguistic paradigms suggest a two-stage process, in which acoustic input initially leads to local, context-independent representations, which are then quickly integrated with contextual constraints. This contrasts with the view that the brain constructs a single coherent, unified interpretation of the input, which fully integrates available information across representational hierarchies, and thus uses contextual constraints to modulate even the earliest sensory representations. To distinguish these hypotheses, we tested magnetoencephalography responses to continuous narrative speech for signatures of local and unified predictive models. Results provide evidence that listeners employ both types of models in parallel. Two local context models uniquely predict some part of early neural responses, one based on sublexical phoneme sequences, and one based on the phonemes in the current word alone; at the same time, even early responses to phonemes also reflect a unified model that incorporates sentence level constraints to predict upcoming phonemes. Neural source localization places the anatomical origins of the different predictive models in non-identical parts of the superior temporal lobes bilaterally, with the right hemisphere showing a relative preference for more local models. These results suggest that speech processing recruits both local and unified predictive models in parallel, reconciling previous disparate findings. Parallel models might make the perceptual system more robust, facilitate processing of unexpected inputs, and serve a function in language acquisition.
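
One way to make the contrast between "local" and "unified" context models concrete is to treat each as a way of assigning a probability, and hence a surprisal, to the next phoneme. The sketch below is not the study's actual model; the lexicon, priors, and probabilities are invented. It contrasts a local model conditioned only on the phonemes of the current word with a unified model that additionally weights candidate words by a sentence-level prior.

```python
import math

# Toy lexicon with made-up phoneme transcriptions (purely illustrative).
LEXICON = {
    "cat": ["k", "ae", "t"],
    "cap": ["k", "ae", "p"],
    "dog": ["d", "aa", "g"],
}

def local_phoneme_prob(prefix, phoneme):
    """Local (word-internal) model: every word consistent with the phonemes
    heard so far in the current word is treated as equally likely."""
    candidates = [w for w, ph in LEXICON.items() if ph[:len(prefix)] == prefix]
    matches = [w for w in candidates if LEXICON[w][len(prefix)] == phoneme]
    return len(matches) / len(candidates) if candidates else 0.0

def unified_phoneme_prob(prefix, phoneme, context_prior):
    """Unified model: candidate words are additionally weighted by a
    sentence-level prior (here just a toy dictionary of word probabilities)."""
    candidates = {w: context_prior.get(w, 1e-6)
                  for w, ph in LEXICON.items() if ph[:len(prefix)] == prefix}
    total = sum(candidates.values())
    match = sum(p for w, p in candidates.items()
                if LEXICON[w][len(prefix)] == phoneme)
    return match / total if total else 0.0

def surprisal(p):
    return -math.log2(p) if p > 0 else float("inf")

# After hearing "k ae" in a sentence about pets, how surprising is "t"?
prior = {"cat": 0.9, "cap": 0.1}  # invented sentence-level expectations
print(surprisal(local_phoneme_prob(["k", "ae"], "t")))           # 1.0 bit
print(surprisal(unified_phoneme_prob(["k", "ae"], "t", prior)))  # ~0.15 bits
```

Under the invented prior, the phoneme completing the contextually likely word is much less surprising for the unified model than for the purely local one; divergences of this kind are what analyses of continuous MEG responses can look for.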


Primary Faculty

William Idsardi

Professor, Linguistics
Member, Maryland Language Science Center

Program in Neuroscience and Cognitive Science

CLaME: Max Planck • NYU Center for Language, Music and Emotion

1401 A Marie Mount Hall
College Park, MD 20742

(301) 405-8376

Ellen Lau

Associate Professor, Linguistics
Member, Maryland Language Science Center

Co-Director, KIT-Maryland MEG Lab

Faculty, Program in Neuroscience and Cognitive Science

3416 E Marie Mount Hall
College Park, MD 20742

Colin Phillips

Professor, Distinguished Scholar-Teacher, Linguistics
Member, Maryland Language Science Center

Director, Language Science Center

1413 F Marie Mount Hall
College Park, MD 20742

(301) 405-3082

Secondary Faculty

Valentine Hacquard

Professor, Linguistics
Affiliate Professor, Philosophy
Member, Maryland Language Science Center

1401 F Marie Mount Hall
College Park, MD 20742

(301) 405-4935