
Psycholinguistics

Successful language processing requires speaker and hearer to dynamically create richly structured representations within a few hundred milliseconds of encountering each new word.

Our group asks how this feat is achieved; whether it is achieved in the same fashion across languages with varying word orders and morphological markers; what neural mechanisms might encode richly structured information; and how the dynamics of language processing differ among adult native speakers, child and adult language learners, and atypical learners.
 
Some distinctive features of the Maryland group include its expertise in cross-language research (e.g., recent studies on Japanese, Hindi, Mandarin, Portuguese, Basque, Russian, American Sign Language and Spanish); its use of diverse tools to investigate language-related processes (reading time, eye-movement measures, EEG and MEG measures of millisecond-grain brain activity, and fMRI measures of brain localization); and its work involving neuro-computational modeling of language processing and studies of developmental and atypical populations. The rich network of connections between investigators makes it feasible to seamlessly align insights from formal grammars with findings from psycho/neurolinguistics and computational neuroscience, often in ways that we could not have imagined a few years ago.
 
Research in psycholinguistics at Maryland is not pursued as a separate enterprise, but rather is closely integrated into all research areas of the department and the broader language science community. Weekly research group meetings primarily feature student presentations of in-progress research and typically attract 20-30 people.

Primary Faculty

Naomi Feldman

Associate Professor, Linguistics

1413A Marie Mount Hall
College Park, MD 20742

(301) 405-5800

William Idsardi

Professor, Linguistics

1401A Marie Mount Hall
College Park, MD 20742

(301) 405-8376

Ellen Lau

Associate Professor, Linguistics

3416E Marie Mount Hall
College Park, MD 20742

Jeffrey Lidz

Professor, Linguistics

1413 Marie Mount Hall
College Park, MD 20742

(301) 405-8220

Colin Phillips

Professor, Linguistics

1413F Marie Mount Hall
College Park, MD 20742

(301) 405-3082

Andrea Zukowski

Research Scientist, Linguistics

1413 Marie Mount Hall
College Park, MD 20742

(301) 405-5388

Secondary Faculty

Valentine Hacquard

Professor, Linguistics
Affiliate Professor, Philosophy

1401F Marie Mount Hall
College Park, MD 20742

(301) 405-4935

Maria Polinsky

Professor, Linguistics

1417A Marie Mount Hall
College Park, MD 20742

Alexander Williams

Associate Professor, Linguistics
Associate Professor, Philosophy

1401D Marie Mount Hall
College Park, MD 20742

(301) 405-1607

The mental representation of universal quantifiers

On the psychological representations that give the meanings of "every" and "each".

Linguistics

Contributor(s): Jeffrey Lidz, Paul Pietroski
Non-ARHU Contributor(s): Tyler Knowlton *21, Justin Halberda (Hopkins)

A sentence like "every circle is blue" might be understood in terms of individuals and their properties (e.g., for each thing that is a circle, it is blue) or in terms of a relation between groups (e.g., the blue things include the circles). Relatedly, theorists can specify the contents of universally quantified sentences in first-order or second-order terms. We offer new evidence that this logical first-order vs. second-order distinction corresponds to a psychologically robust individual vs. group distinction that has behavioral repercussions. Participants were shown displays of dots and asked to evaluate sentences with "each", "every", or "all" combined with a predicate (e.g., "big dot"). We find that participants are better at estimating how many things the predicate applied to after evaluating sentences in which universal quantification is indicated with "every" or "all", as opposed to "each". We argue that "every" and "all" are understood in second-order terms that encourage group representation, while "each" is understood in first-order terms that encourage individual representation. Since the sentences that participants evaluate are truth-conditionally equivalent, our results also bear on questions concerning how meanings are related to truth-conditions.


Linguistic meanings as cognitive instructions

"More" and "most" do not encode the same sorts of comparison.

Linguistics

Contributor(s): Tyler Knowlton, Paul Pietroski, Jeffrey Lidz
Non-ARHU Contributor(s): Tim Hunter *10 (UCLA), Alexis Wellwood *14 (USC), Darko Odic (University of British Columbia), Justin Halberda (Johns Hopkins University)

Natural languages like English connect pronunciations with meanings. Linguistic pronunciations can be described in ways that relate them to our motor system (e.g., to the movement of our lips and tongue). But how do linguistic meanings relate to our nonlinguistic cognitive systems? As a case study, we defend an explicit proposal about the meaning of "most" by comparing it to the closely related "more": whereas "more" expresses a comparison between two independent subsets, "most" expresses a subset–superset comparison. Six experiments with adults and children demonstrate that these subtle differences between their meanings influence how participants organize and interrogate their visual world. In otherwise identical situations, changing the word from "most" to "more" affects preferences for picture–sentence matching (experiments 1–2), scene creation (experiments 3–4), memory for visual features (experiment 5), and accuracy on speeded truth judgments (experiment 6). These effects support the idea that the meanings of "more" and "most" are mental representations that provide detailed instructions to conceptual systems.


Processing adjunct control: Evidence on the use of structural information and prediction in reference resolution

How does online comprehension of adjunct control ("before eating") compare to resolution of pronominal anaphora ("before he ate")?

Linguistics, Philosophy

Contributor(s): Alexander Williams, Ellen Lau
Non-ARHU Contributor(s): Jeffrey J. Green *18, Michael McCourt *21

The comprehension of anaphoric relations may be guided not only by discourse information, but also by syntactic information. The literature on online processing, however, has focused on audible pronouns and descriptions whose reference is resolved mainly on the basis of discourse. This paper examines one relation that both lacks overt exponence and relies almost exclusively on syntax for its resolution: adjunct control, the dependency between the null subject of a non-finite adjunct and its antecedent in sentences such as "Mickey talked to Minnie before ___ eating". Using visual-world eyetracking, we compare the timecourse of interpreting this null subject and overt pronouns ("Mickey talked to Minnie before he ate"). We show that when control structures are highly frequent, listeners are just as quick to resolve reference in either case. When control structures are less frequent, reference resolution based on structural information still occurs upon hearing the non-finite verb, but more slowly, especially when unaided by structural and referential predictions. This may be due to increased difficulty in recognizing that a referential dependency is necessary. These results indicate that in at least some contexts, referential expressions whose resolution depends on very different sources of information can be resolved approximately equally rapidly, and that the speed of interpretation is largely independent of whether the dependency is cued by an overt referring expression.
