
Research

Research at our top-ranked department spans syntax, semantics, phonology, language acquisition, computational linguistics, psycholinguistics and neurolinguistics. 

Connections between our core competencies are strong, with theoretical, experimental and computational work typically pursued in tandem.

A network of collaboration at all levels sustains a research climate that is both vigorous and friendly. Here new ideas develop in conversation, stimulated by the steady activity of our labs and research groups, frequent student meetings with faculty, regular talks by local and invited scholars and collaborations with the broader University of Maryland language science community, the largest and most integrated language science research community in North America.

Using surprisal and fMRI to map the neural bases of broad and local contextual prediction during natural language comprehension

Modeling the influence of local and topical context on processing via an analysis of fMRI time courses during naturalistic listening.

Linguistics

Contributor(s): Philip Resnik, Shohini Bhattasali

Context guides comprehenders’ expectations during language processing, and information-theoretic surprisal is commonly used as an index of cognitive processing effort. However, prior work using surprisal has considered only within-sentence context, using n-grams, neural language models, or syntactic structure as conditioning context. In this paper, we extend the surprisal approach to broader topical context, investigating the influence of local and topical context on processing via an analysis of fMRI time courses collected during naturalistic listening. Lexical surprisal calculated from n-gram and LSTM language models is used to capture effects of local context; to capture the effects of broader context, a new metric based on topic models, topical surprisal, is introduced. We identify distinct patterns of neural activation for lexical surprisal and topical surprisal. These differing neuro-anatomical correlates suggest that local and broad contextual cues during sentence processing recruit different brain regions, and that those regions of the language network functionally contribute to processing different dimensions of contextual information during comprehension. More generally, our approach adds to a growing literature using methods from computational linguistics to operationalize and test hypotheses about neuro-cognitive mechanisms in sentence processing.
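The local-context half of the method can be sketched concretely: lexical surprisal is the negative log probability of a word given its preceding context. Below is a toy add-alpha bigram version, an illustrative stand-in for the n-gram and LSTM language models the study uses; the corpus and function names are invented for the example.

```python
import math
from collections import Counter

def make_bigram_surprisal(corpus_tokens, alpha=1.0):
    """Return surprisal(prev, word) = -log2 P(word | prev) under an
    add-alpha-smoothed bigram model (toy sketch, not the paper's models)."""
    unigrams = Counter(corpus_tokens)
    bigrams = Counter(zip(corpus_tokens, corpus_tokens[1:]))
    vocab_size = len(unigrams)
    def surprisal(prev, word):
        p = (bigrams[(prev, word)] + alpha) / (unigrams[prev] + alpha * vocab_size)
        return -math.log2(p)
    return surprisal

tokens = "the chef burned the pizza and the chef burned the toast".split()
surprisal = make_bigram_surprisal(tokens)
# "the" is the attested continuation after "burned", so it is less surprising.
assert surprisal("burned", "the") < surprisal("burned", "pizza")
```

Topical surprisal replaces the local conditioning context with topic-level context inferred from the preceding discourse; the same negative-log-probability form applies.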

Read More about Using surprisal and fMRI to map the neural bases of broad and local contextual prediction during natural language comprehension

Eighteen-month-old infants represent nonlocal syntactic dependencies

Evidence that 18-month-olds already represent filler-gap dependencies.

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s): Laurel Perkins *19 (UCLA)

The human ability to produce and understand an indefinite number of sentences is driven by syntax, a cognitive system that can combine a finite number of primitive linguistic elements to build arbitrarily complex expressions. The expressive power of syntax comes in part from its ability to encode potentially unbounded dependencies over abstract structural configurations. How does such a system develop in human minds? We show that 18-mo-old infants are capable of representing abstract nonlocal dependencies, suggesting that a core property of syntax emerges early in development. Our test case is English wh-questions, in which a fronted wh-phrase can act as the argument of a verb at a distance (e.g., What did the chef burn?). Whereas prior work has focused on infants’ interpretations of these questions, we introduce a test to probe their underlying syntactic representations, independent of meaning. We ask when infants know that an object wh-phrase and a local object of a verb cannot co-occur because they both express the same argument relation (e.g., *What did the chef burn the pizza). We find that 1) 18-mo-olds demonstrate awareness of this complementary distribution pattern and thus represent the nonlocal grammatical dependency between the wh-phrase and the verb, but 2) younger infants do not. These results suggest that the second year of life is a period of active syntactic development, during which the computational capacities for representing nonlocal syntactic dependencies become evident.

Read More about Eighteen-month-old infants represent nonlocal syntactic dependencies

The mental representation of universal quantifiers

On the psychological representations that give the meanings of "every" and "each".

Linguistics

Contributor(s): Jeffrey Lidz, Paul Pietroski
Non-ARHU Contributor(s): Tyler Knowlton *21, Justin Halberda (Hopkins)
Publisher: Springer

A sentence like "every circle is blue" might be understood in terms of individuals and their properties (e.g., for each thing that is a circle, it is blue) or in terms of a relation between groups (e.g., the blue things include the circles). Relatedly, theorists can specify the contents of universally quantified sentences in first-order or second-order terms. We offer new evidence that this logical first-order vs. second-order distinction corresponds to a psychologically robust individual vs. group distinction that has behavioral repercussions. Participants were shown displays of dots and asked to evaluate sentences with "each," "every," or "all" combined with a predicate (e.g., "big dot"). We find that participants are better at estimating how many things the predicate applied to after evaluating sentences in which universal quantification is indicated with "every" or "all," as opposed to "each." We argue that "every" and "all" are understood in second-order terms that encourage group representation, while "each" is understood in first-order terms that encourage individual representation. Since the sentences that participants evaluate are truth-conditionally equivalent, our results also bear on questions concerning how meanings are related to truth-conditions.
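The first-order vs. second-order contrast can be written as two truth-conditionally equivalent verification procedures (a sketch; the set encoding is illustrative, not the authors' psychological proposal):

```python
def every_first_order(circles, blue_things):
    # First-order: for each thing that is a circle, check that it is blue.
    return all(c in blue_things for c in circles)

def every_second_order(circles, blue_things):
    # Second-order: compare the groups directly; the blue things include the circles.
    return set(circles) <= set(blue_things)

circles = {"c1", "c2"}
blue_things = {"c1", "c2", "s1"}
# Same truth conditions, different verification procedures.
assert every_first_order(circles, blue_things) == every_second_order(circles, blue_things)
```

The study's claim is that which procedure a quantifier's meaning encourages has measurable behavioral consequences, even though both return the same verdict for every scene.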

Read More about The mental representation of universal quantifiers

Comment on “Nonadjacent dependency processing in monkeys, apes, and humans”

Auditory pattern recognition in nonhuman animals shares important characteristics with human phonology, but not human syntax.

Linguistics

Contributor(s): William Idsardi
Non-ARHU Contributor(s): Jonathan Rawski (Stony Brook), Jeffrey Heinz (Stony Brook)
Publisher: American Association for the Advancement of Science

We comment on the technical interpretation of the study of Watson et al. and caution against their conclusion that the behavioral evidence in their experiments points to nonhuman animals’ ability to learn syntactic dependencies, because their results are also consistent with the learning of phonological dependencies in human languages.

Read More about Comment on “Nonadjacent dependency processing in monkeys, apes, and humans”

Is Automated Topic Model Evaluation Broken? The Incoherence of Coherence

Questioning automatic coherence evaluations for neural topic models.

Linguistics

Contributor(s): Philip Resnik
Non-ARHU Contributor(s): Alexander Hoyle, Pranav Goel, Denis Peskov, Andrew Hian-Cheong, Jordan Boyd-Graber

Topic model evaluation, like evaluation of other unsupervised methods, can be contentious. However, the field has coalesced around automated estimates of topic coherence, which rely on the frequency of word co-occurrences in a reference corpus. Recent models relying on neural components surpass classical topic models according to these metrics. At the same time, unlike classical models, the practice of neural topic model evaluation suffers from a validation gap: automatic coherence for neural models has not been validated using human experimentation. In addition, as we show via a meta-analysis of topic modeling literature, there is a substantial standardization gap in the use of automated topic modeling benchmarks. We address both the standardization gap and the validation gap. Using two of the most widely used topic model evaluation datasets, we assess a dominant classical model and two state-of-the-art neural models in a systematic, clearly documented, reproducible way. We use automatic coherence along with the two most widely accepted human judgment tasks, namely, topic rating and word intrusion. Automated evaluation will declare one model significantly different from another when corresponding human evaluations do not, calling into question the validity of fully automatic evaluations independent of human judgments.
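The automated metric at issue can be sketched as averaged normalized PMI (NPMI) over pairs of a topic's top words, with probabilities estimated from co-occurrence in a reference corpus. This is a minimal illustration; real evaluations typically use sliding windows over large corpora such as Wikipedia, and the toy corpus below is invented.

```python
import math
from itertools import combinations

def npmi_coherence(topic_words, reference_docs, eps=1e-12):
    """Average NPMI over pairs of a topic's top words, using
    document co-occurrence in a reference corpus (toy sketch)."""
    docs = [set(d) for d in reference_docs]
    n = len(docs)
    def p(*words):
        return sum(all(w in d for w in words) for d in docs) / n
    scores = []
    for w1, w2 in combinations(topic_words, 2):
        p_joint = p(w1, w2)
        pmi = math.log((p_joint + eps) / (p(w1) * p(w2) + eps))
        scores.append(pmi / -math.log(p_joint + eps))  # NPMI lies in [-1, 1]
    return sum(scores) / len(scores)

reference = [["game", "team", "score"], ["game", "team"], ["court", "law"]]
# Words that co-occur in the reference corpus score higher than words that never do.
assert npmi_coherence(["game", "team"], reference) > npmi_coherence(["game", "law"], reference)
```

The paper's point is that rankings produced by this kind of automated score can diverge from human topic-rating and word-intrusion judgments, so the score alone should not settle model comparisons.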

Read More about Is Automated Topic Model Evaluation Broken? The Incoherence of Coherence

Informativity, topicality, and speech cost: comparing models of speakers’ choices of referring expressions

Is use of a pronoun motivated by topicality or efficiency?

Linguistics

Contributor(s): Naomi Feldman
Non-ARHU Contributor(s): Naho Orita *15 (Tokyo University of Science)

This study formalizes and compares two major hypotheses in speakers’ choices of referring expressions: the topicality model that chooses a form based on the topicality of the referent, and the rational model that chooses a form based on the informativity of the form and its speech cost. Simulations suggest that both the topicality of the referent and the informativity of the word are important to consider in speakers’ choices of reference forms, while a speech cost metric that prefers shorter forms may not be.
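The rational model's trade-off between informativity and speech cost can be sketched as a softmax speaker in the spirit of rational speech act models. All names, listener probabilities, and costs below are illustrative, not the paper's fitted model.

```python
import math

def speaker_choice_probs(referent, forms, listener, cost, rationality=1.0):
    """P(form | referent) proportional to
    exp(rationality * (log P_listener(referent | form) - cost(form)))."""
    utils = {f: rationality * (math.log(listener[f].get(referent, 1e-9)) - cost[f])
             for f in forms}
    z = sum(math.exp(u) for u in utils.values())
    return {f: math.exp(utils[f]) / z for f in forms}

# A pronoun is ambiguous (less informative) but cheap; a name is precise but costlier.
listener = {"she": {"Mary": 0.5, "Sue": 0.5}, "Mary": {"Mary": 1.0}}
cost = {"she": 0.1, "Mary": 0.4}
probs = speaker_choice_probs("Mary", ["she", "Mary"], listener, cost)
# With these toy numbers, informativity outweighs the extra cost of the name.
assert probs["Mary"] > probs["she"]
```

The topicality model, by contrast, would condition the form choice on the referent's discourse status rather than on this informativity-cost utility.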

Read More about Informativity, topicality, and speech cost: comparing models of speakers’ choices of referring expressions

Linguistic meanings as cognitive instructions

"More" and "most" do not encode the same sorts of comparison.

Linguistics

Contributor(s): Tyler Knowlton, Paul Pietroski, Jeffrey Lidz
Non-ARHU Contributor(s): Tim Hunter *10 (UCLA), Alexis Wellwood *14 (USC), Darko Odic (University of British Columbia), Justin Halberda (Johns Hopkins University)

Natural languages like English connect pronunciations with meanings. Linguistic pronunciations can be described in ways that relate them to our motor system (e.g., to the movement of our lips and tongue). But how do linguistic meanings relate to our nonlinguistic cognitive systems? As a case study, we defend an explicit proposal about the meaning of most by comparing it to the closely related more: whereas more expresses a comparison between two independent subsets, most expresses a subset–superset comparison. Six experiments with adults and children demonstrate that these subtle differences between their meanings influence how participants organize and interrogate their visual world. In otherwise identical situations, changing the word from most to more affects preferences for picture–sentence matching (experiments 1–2), scene creation (experiments 3–4), memory for visual features (experiment 5), and accuracy on speeded truth judgments (experiment 6). These effects support the idea that the meanings of more and most are mental representations that provide detailed instructions to conceptual systems.

Read More about Linguistic meanings as cognitive instructions

Social inference may guide early lexical learning

Assessment of knowledgeability and group membership influences infant word learning.

Linguistics

Contributor(s): Naomi Feldman, William Idsardi
Non-ARHU Contributor(s): Alayo Tripp *19

We incorporate social reasoning about groups of informants into a model of word learning, and show that the model accounts for infant looking behavior in tasks of both word learning and recognition. Simulation 1 models an experiment where 16-month-old infants saw familiar objects labeled either correctly or incorrectly, by either adults or audio talkers. Simulation 2 reinterprets puzzling data from the Switch task, an audiovisual habituation procedure wherein infants are tested on familiarized associations between novel objects and labels. Eight-month-olds outperform 14-month-olds on the Switch task when required to distinguish labels that are minimal pairs (e.g., “buk” and “puk”), but 14-month-olds' performance is improved by habituation stimuli featuring multiple talkers. Our modeling results support the hypothesis that beliefs about knowledgeability and group membership guide infant looking behavior in both tasks. These results show that social and linguistic development interact in non-trivial ways, and that social categorization findings in developmental psychology could have substantial implications for understanding linguistic development in realistic settings where talkers vary according to observable features correlated with social groupings, including linguistic, ethnic, and gendered groups.
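The core idea of weighting informants by believed knowledgeability can be sketched as a small Bayesian update over candidate labels. The numbers, informant types, and function are illustrative, not the authors' model.

```python
def posterior_over_labels(observations, prior, reliability):
    """Posterior over candidate labels for one object, weighting each
    informant's label by believed knowledgeability (toy sketch)."""
    scores = dict(prior)
    for informant, label in observations:
        r = reliability[informant]
        for cand in scores:
            # A reliable informant names the true label with probability r;
            # otherwise the remaining mass spreads over the other candidates.
            scores[cand] *= r if cand == label else (1 - r) / (len(prior) - 1)
    z = sum(scores.values())
    return {k: v / z for k, v in scores.items()}

# An adult (trusted) labels the object "ball"; an audio talker (at chance) says "duck".
posterior = posterior_over_labels(
    [("adult", "ball"), ("talker", "duck")],
    prior={"ball": 0.5, "duck": 0.5},
    reliability={"adult": 0.9, "talker": 0.5},
)
assert posterior["ball"] > posterior["duck"]
```

On this kind of account, differences in looking behavior across informant types fall out of differences in inferred reliability rather than from purely acoustic or attentional factors.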

Read More about Social inference may guide early lexical learning

Japanese children's knowledge of the locality of "zibun" and "kare"

Initial errors in the acquisition of the Japanese local- or long-distance anaphor "zibun."

Linguistics

Contributor(s): Jeffrey Lidz, Naomi Feldman
Non-ARHU Contributor(s): Naho Orita *15, Hajime Ono *06

Although the Japanese reflexive zibun can be bound both locally and across clause boundaries, the third-person pronoun kare cannot take a local antecedent. These are properties that children need to learn about their language, but we show that direct evidence of the binding possibilities of zibun is sparse and evidence about kare is absent in speech to children, leading us to ask what children know. We show that children, unlike adults, incorrectly reject the long-distance antecedent for zibun, and while they can access this antecedent for the non-local pronoun kare, they consistently reject the local antecedent for this pronoun. These results suggest that children's lack of matrix readings for zibun is not due to their understanding of discourse context, but to the properties of their language understanding.

Read More about Japanese children's knowledge of the locality of "zibun" and "kare"

Debate Reaction Ideal Points: Political Ideology Measurement Using Real-Time Reaction Data

Estimating an individual's ideology from their real-time reactions to presidential debates.

Linguistics

Contributor(s): Philip Resnik
Non-ARHU Contributor(s): Daniel Argyle, Lisa P. Argyle, Vlad Eidelman

Ideal point models have become a powerful tool for defining and measuring the ideology of many kinds of political actors, including legislators, judges, campaign donors, and members of the general public. We extend the application of ideal point models to the public using a novel data source: real-time reactions to statements by candidates in the 2012 presidential debates. Using these reactions as inputs to an ideal point model, we estimate individual-level ideology and evaluate the quality of the measure. Debate reaction ideal points provide a method for estimating a continuous, individual-level measure of ideology that avoids survey response biases, provides better estimates for moderates and the politically unengaged, and reflects the content of salient political discourse relevant to viewers’ attitudes and vote choices. As expected, we find that debate reaction ideal points are more extreme among respondents who strongly identify with a political party, but retain substantial within-party variation. Ideal points are also more extreme among respondents who are more politically interested. Using topical subsets of the debate statements, we find that ideal points in the sample are more moderate for foreign policy than for economic or domestic policy.
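An ideal point model of this kind can be illustrated with a two-parameter IRT sketch: a viewer with ideology x approves statement j with probability sigmoid(a_j * x - b_j), and x is estimated from their binary reactions. Statement parameters, data, and the gradient-ascent fit below are all invented for illustration; real models estimate viewer and statement parameters jointly.

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def estimate_ideal_point(reactions, statement_params, steps=500, lr=0.1):
    """Fit one viewer's ideal point x by gradient ascent on the likelihood
    of binary reactions, with P(approve) = sigmoid(a * x - b).
    reactions: {statement: 1 approve / 0 not};
    statement_params: {statement: (a, b)}, treated as known for this sketch."""
    x = 0.0
    for _ in range(steps):
        grad = sum((y - sigmoid(a * x - b)) * a
                   for stmt, y in reactions.items()
                   for a, b in [statement_params[stmt]])
        x += lr * grad
    return x

# Statements with positive discrimination attract approval from the right (x > 0).
params = {"tax_cut": (2.0, 0.0), "regulation": (-2.0, 0.0)}
right = estimate_ideal_point({"tax_cut": 1, "regulation": 0}, params)
left = estimate_ideal_point({"tax_cut": 0, "regulation": 1}, params)
assert right > 0 > left
```

Real-time debate reactions supply many such binary responses per viewer, which is what lets the model place moderates and the politically unengaged more precisely than coarse survey items.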

Read More about Debate Reaction Ideal Points: Political Ideology Measurement Using Real-Time Reaction Data