
Psycholinguistics

Successful language processing requires speaker and hearer to dynamically create richly structured representations within a few hundred milliseconds of encountering each new word.

Our group asks how this feat is achieved; whether it is achieved in the same fashion across languages with varying word orders and morphological markers; what the possible neural encoding mechanisms for richly structured information are; and how the dynamics of language processing differ among adult native speakers, child and adult language learners, and atypical learners.
 
Some distinctive features of the Maryland group include its expertise in cross-language research (e.g., recent studies on Japanese, Hindi, Mandarin, Portuguese, Basque, Russian, American Sign Language and Spanish); its use of diverse tools to investigate language-related processes (reading time, eye-movement measures, EEG and MEG measures of millisecond-grain brain activity and fMRI measures of brain localization); and its work involving neuro-computational modeling of language processing and studies of developmental and atypical populations. The rich network of connections between investigators makes it feasible to align insights from formal grammars with findings from psycho/neurolinguistics and computational neuroscience, often in ways that we could not have imagined a few years ago.
 
Research in psycholinguistics at Maryland is not pursued as a separate enterprise, but rather is closely integrated into all research areas of the department and the broader language science community. Weekly research group meetings primarily feature student presentations of in-progress research and typically attract 20-30 people.

Primary Faculty

Naomi Feldman

Associate Professor, Linguistics

1413 A Marie Mount Hall
College Park, MD 20742

(301) 405-5800

William Idsardi

Professor, Linguistics

1401 A Marie Mount Hall
College Park, MD 20742

(301) 405-8376

Ellen Lau

Associate Professor, Linguistics

3416 E Marie Mount Hall
College Park, MD 20742

Jeffrey Lidz

Professor, Linguistics

1413 Marie Mount Hall
College Park, MD 20742

(301) 405-8220

Colin Phillips

Professor, Distinguished Scholar-Teacher, Linguistics

1413 F Marie Mount Hall
College Park, MD 20742

(301) 405-3082

Andrea Zukowski

Research Scientist, Linguistics

1413 Marie Mount Hall
College Park, MD 20742

(301) 405-5388

Secondary Faculty

Valentine Hacquard

Professor, Linguistics
Affiliate Professor, Philosophy

1401 F Marie Mount Hall
College Park, MD 20742

(301) 405-4935

Maria Polinsky

Professor, Linguistics
Affiliate Faculty, Latin American and Caribbean Studies Center

1417 A Marie Mount Hall
College Park, MD 20742

Alexander Williams

Associate Professor, Linguistics
Associate Professor, Philosophy

1401 D Marie Mount Hall
College Park, MD 20742

(301) 405-1607

Psycholinguistic evidence for restricted quantification

Determiners express restricted quantifiers and not relations between sets.

Linguistics, Philosophy

Contributor(s): Jeffrey Lidz, Alexander Williams, Paul Pietroski
Non-ARHU Contributor(s): Tyler Knowlton *21, Justin Halberda (JHU)

Quantificational determiners are often said to be devices for expressing relations. For example, the meaning of every is standardly described as the inclusion relation, with a sentence like every frog is green meaning roughly that the green things include the frogs. Here, we consider an older, non-relational alternative: determiners are tools for creating restricted quantifiers. On this view, determiners specify how many elements of a restricted domain (e.g., the frogs) satisfy a given condition (e.g., being green). One important difference concerns how the determiner treats its two grammatical arguments. On the relational view, the arguments are on a logical par, as independent terms that specify the two relata. But on the restricted view, the arguments play distinct logical roles: specifying the limited domain versus supplying an additional condition on domain entities. We present psycholinguistic evidence suggesting that the restricted view better describes what speakers know when they know the meaning of a determiner. In particular, we find that when asked to evaluate sentences of the form every F is G, participants mentally group the Fs but not the Gs. Moreover, participants forgo representing the group defined by the intersection of F and G. This tells against the idea that speakers understand every F is G as implying that the Fs bear a relation (e.g., inclusion) to a second group.
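One way to see the contrast (an illustrative sketch with invented sets, not the paper's formalism): on the relational view, evaluating every F is G means comparing two independently represented groups, whereas on the restricted view the Gs are only ever consulted as a condition on members of the restricted domain.

# Illustrative sketch of the two analyses of "every frog is green" (hypothetical sets)
frogs = {"frog1", "frog2", "frog3"}                 # the Fs: the restricted domain
green_things = {"frog1", "frog2", "frog3", "leaf"}  # the Gs

# Relational view: the determiner expresses set inclusion,
# so both the Fs and the Gs are represented as groups and compared.
relational_verdict = frogs.issubset(green_things)

# Restricted view: quantify over the Fs only, testing each for the condition;
# the Gs serve as a predicate and never need to be grouped as a second set.
def is_green(x):
    return x in green_things

restricted_verdict = all(is_green(x) for x in frogs)

assert relational_verdict == restricted_verdict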

Read More about Psycholinguistic evidence for restricted quantification

Moving away from lexicalism in psycho- and neuro-linguistics

Models of language production should abandon lexicalist "lemma" representations in favor of separate mappings between meaning and syntax and between syntax and form.

Linguistics

Contributor(s): Ellen Lau, Alex Krauska

In standard models of language production or comprehension, the elements that are retrieved from memory and combined into a syntactic structure are "lemmas" or "lexical items." Such models implicitly take a "lexicalist" approach, which assumes that lexical items store meaning, syntax, and form together, that syntactic and lexical processes are distinct, and that syntactic structure does not extend below the word level. Across the last several decades, linguistic research examining a typologically diverse set of languages has provided strong evidence against this approach. These findings suggest that syntactic processes apply both above and below the "word" level, and that both meaning and form are partially determined by the syntactic context. This has significant implications for psychological and neurological models of language processing as well as for the way that we understand different types of aphasia and other language disorders. As a consequence of their lexicalist assumptions, these models struggle to account for many kinds of sentences that speakers produce and comprehend in a variety of languages, including English. Here we focus on language production as a case study. In order to move away from lexicalism in psycho- and neuro-linguistics, it is not enough to simply update the syntactic representations of words or phrases; the processing algorithms involved in language production are constrained by the lexicalist representations that they operate on, and thus also need to be reimagined. We provide an overview of the arguments against lexicalism, discuss how lexicalist assumptions are represented in models of language production, and examine the types of phenomena that they struggle to account for as a consequence. We also outline what a non-lexicalist alternative might look like: a model that does not rely on a lemma representation, but instead represents lexical knowledge as separate mappings between (a) meaning and syntax and (b) syntax and form, with a single integrated stage for the retrieval and assembly of syntactic structure. By moving away from lexicalist assumptions, this kind of model provides better cross-linguistic coverage and aligns better with contemporary syntactic theory.
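To make the architectural contrast concrete, here is a minimal sketch (hypothetical entries, not the authors' implementation): a lexicalist lemma bundles meaning, syntax, and form into one stored unit, while the non-lexicalist alternative keeps two separate mappings, meaning-to-syntax and syntax-to-form, with syntactic structure assembled in between.

# Minimal sketch of the two architectures (hypothetical entries and features)

# Lexicalist lemma: meaning, syntax, and form stored together as one retrievable unit
lemma_lexicon = {
    "GIVE": {"meaning": "transfer-event", "category": "V", "form": "give"},
}

# Non-lexicalist alternative: (a) meaning -> syntax and (b) syntax -> form,
# so the form is determined by the assembled syntactic context rather than
# being stored once and for all with a "word".
meaning_to_syntax = {
    "transfer-event": {"category": "V", "args": ("Agent", "Theme", "Recipient")},
}
syntax_to_form = {
    ("V", "past"): "gave",
    ("V", "nonpast"): "give",
}

def produce(concept, tense):
    # Retrieve syntax from meaning, then spell out the form from the syntactic context
    syntax = meaning_to_syntax[concept]
    return syntax_to_form[(syntax["category"], tense)]

assert produce("transfer-event", "past") == "gave"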

Read More about Moving away from lexicalism in psycho- and neuro-linguistics

Lexicalization in the developing parser

Children make syntactic predictions based on the syntactic distributions of specific verbs, but do not assume that those patterns generalize to verbs as a class.

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s): Aaron Steven White *15 (University of Rochester)

We use children's noun learning as a probe into the nature of their syntactic prediction mechanism and the statistical knowledge on which that prediction mechanism is based. We focus on verb-based predictions, considering two possibilities: children's syntactic predictions might rely on distributional knowledge about specific verbs (i.e., they might be lexicalized), or they might rely on distributional knowledge that is general to all verbs. In an intermodal preferential looking experiment, we establish that verb-based predictions are lexicalized: children encode the syntactic distributions of specific verbs and use those distributions to make predictions, but they do not assume that those distributions hold of verbs in general.
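The two hypotheses can be sketched as follows (invented verbs and counts, not data from the study): a lexicalized predictor conditions its expectations on a specific verb's own distribution over syntactic frames, while a verb-general predictor pools counts across verbs and ignores verb identity.

# Illustrative sketch: lexicalized vs. verb-general syntactic prediction
# (hypothetical verbs and counts)
frame_counts = {
    "think": {"clausal-complement": 90, "direct-object": 10},
    "hit":   {"clausal-complement": 5,  "direct-object": 95},
}

def lexicalized_prediction(verb):
    # Condition the prediction on this verb's own syntactic distribution
    counts = frame_counts[verb]
    total = sum(counts.values())
    return {frame: n / total for frame, n in counts.items()}

def verb_general_prediction():
    # Pool counts across all verbs, ignoring verb identity
    pooled = {}
    for counts in frame_counts.values():
        for frame, n in counts.items():
            pooled[frame] = pooled.get(frame, 0) + n
    total = sum(pooled.values())
    return {frame: n / total for frame, n in pooled.items()}

# A lexicalized learner expects "think" to take a clausal complement far more
# often than the pooled, verb-general distribution would suggest.
print(lexicalized_prediction("think"))
print(verb_general_prediction())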

Read More about Lexicalization in the developing parser