
Research

Research at our top-ranked department spans syntax, semantics, phonology, language acquisition, computational linguistics, psycholinguistics and neurolinguistics. 

Connections between our core competencies are strong, with theoretical, experimental and computational work typically pursued in tandem.

A network of collaboration at all levels sustains a research climate that is both vigorous and friendly. Here new ideas develop in conversation, stimulated by the steady activity of our labs and research groups, frequent student meetings with faculty, regular talks by local and invited scholars, and collaborations with the broader University of Maryland language science community, the largest and most integrated language science research community in North America.

Visual perception supports 4-place event representations: A case study of TRADING

Can adults visually represent a trading as a single event with four participants?

Linguistics

Contributor(s): Alexander Williams, Jeffrey Lidz
Non-ARHU Contributor(s): Ekaterina Khylstova (UCLA), Laurel Perkins (UCLA)

Events of social exchange, such as givings and tradings, are uniquely prevalent in human societies and cognitively privileged even at early stages of development. Such events may be represented as having 3 or even 4 participants. To do so in visual working memory would be at the limit of the system, which throughout development can track only 3 to 4 items. Using a case study of trading, we ask (i) whether adults can track all four participants in a trading scene, and (ii) whether they do so by chunking the scene into two giving events, each with 3 participants, to avoid placing the visual working memory system at its limit. We find that adults represent this scene under a 4-participant concept, and do not view the trade as two sequential giving events. We discuss further implications for event perception and verb learning in development.

Language Discrimination May Not Rely on Rhythm: A Computational Study

Challenging the relationship between rhythm and language discrimination in infancy.

Linguistics

Contributor(s): Leslie Ruolan Li, Naomi Feldman
Non-ARHU Contributor(s): Abi Aboelata (UMD), Thomas Schatz (Marseilles)

It has long been assumed that infants’ ability to discriminate between languages stems from their sensitivity to speech rhythm, i.e., the organized temporal structure of vowels and consonants in a language. However, the relationship between speech rhythm and language discrimination has not been directly demonstrated. Here, we use computational modeling and train models of speech perception with and without access to information about rhythm. We test these models on language discrimination, and find that access to rhythm does not affect the success of the model in replicating infant language discrimination results. Our findings challenge the assumed relationship between rhythm and language discrimination.
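
To make the logic of this comparison concrete, here is a minimal, hypothetical sketch: it trains one simple classifier that has access to rhythm-like durational features and one that does not, and compares their language-discrimination accuracy. The two synthetic "languages", the feature names, and the distributions are all invented for illustration; this is not the authors' model of speech perception.

```python
# Toy illustration only (invented features and "languages"; not the authors'
# speech-perception model): compare language discrimination with and without
# access to rhythm-like durational information.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def sample_utterances(n, vowel_dur_mean, spectral_mean):
    """Summarize each 'utterance' by two rhythm features (mean and variability
    of vowel-interval duration) and two spectral features."""
    rhythm = rng.normal(loc=[vowel_dur_mean, 0.05], scale=0.02, size=(n, 2))
    spectral = rng.normal(loc=[spectral_mean, 1.0], scale=0.3, size=(n, 2))
    return np.hstack([rhythm, spectral])

# Two toy languages that differ in both rhythmic and spectral statistics.
X = np.vstack([sample_utterances(500, vowel_dur_mean=0.12, spectral_mean=2.0),
               sample_utterances(500, vowel_dur_mean=0.18, spectral_mean=2.2)])
y = np.repeat([0, 1], 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# "With rhythm": all four features.  "Without rhythm": spectral columns only.
with_rhythm = LogisticRegression().fit(X_tr, y_tr).score(X_te, y_te)
without_rhythm = LogisticRegression().fit(X_tr[:, 2:], y_tr).score(X_te[:, 2:], y_te)

print(f"language discrimination accuracy, with rhythm:    {with_rhythm:.2f}")
print(f"language discrimination accuracy, without rhythm: {without_rhythm:.2f}")
```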

Being pragmatic about syntactic bootstrapping

Syntactic and pragmatic cues to the meanings of modal and attitude verbs.

Linguistics

Author/Lead: Valentine Hacquard

Words have meanings vastly underdetermined by the contexts in which they occur. Their acquisition therefore presents formidable problems of induction. Lila Gleitman and colleagues have advocated for one part of a solution: indirect evidence for a word’s meaning may come from its syntactic distribution, via SYNTACTIC BOOTSTRAPPING. But while formal theories argue for principled links between meaning and syntax, actual syntactic evidence about meaning is noisy and highly abstract. This paper examines the role that syntactic bootstrapping can play in learning modal and attitude verb meanings, for which the physical context is especially uninformative. I argue that abstract syntactic classifications are useful to the child, but that something further is both necessary and available. I examine how pragmatic and syntactic cues can combine in mutually constraining ways to help learners infer attitude meanings, but need to be supplemented by semantic information from the lexical context in the case of modals.

Individuals versus ensembles and "each" versus "every": Linguistic framing affects performance in a change detection task

More evidence that "every" but not "each" evokes ensemble representations.

Linguistics

Contributor(s): Jeffrey Lidz, Paul Pietroski
Non-ARHU Contributor(s): Tyler Knowlton *21, Justin Halberda (JHU)

Though each and every are both distributive universal quantifiers, a common theme in linguistic and psycholinguistic investigations into them has been that each is somehow more individualistic than every. We offer a novel explanation for this generalization: each has a first-order meaning which serves as an internalized instruction to cognition to build a thought that calls for representing the (restricted) domain as a series of individuals; by contrast, every has a second-order meaning which serves as an instruction to build a thought that calls for grouping the domain. In support of this view, we show that these distinct meanings invite the use of distinct verification strategies, using a novel paradigm. In two experiments, participants who had been asked to verify sentences like each/every circle is green were subsequently given a change detection task. Those who evaluated each-sentences were better able to detect the change, suggesting they encoded the individual circles' colors to a greater degree. Taken together with past work demonstrating that participants recall group properties after evaluating sentences with every better than after evaluating sentences with each, these results support the hypothesis that each and every call for treating the individuals that constitute their domain differently: as independent individuals (each) or as members of an ensemble collection (every). We situate our findings within a conception of linguistic meanings as instructions for thought building, on which the format of the resulting thought has consequences for how meanings interface with non-linguistic cognition.
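
One schematic way to notate the first-order versus second-order contrast described above (a rough gloss for illustration, not the authors' exact logical forms):

```latex
% each: a first-order meaning, quantifying over the individuals one by one.
% "each circle is green":
\[
  \forall x\,[\mathrm{Circle}(x) \rightarrow \mathrm{Green}(x)]
\]
% every: a second-order meaning, predicating greenness of the circles taken
% together, as an ensemble X.  "every circle is green":
\[
  \exists X\,[\forall x\,(Xx \leftrightarrow \mathrm{Circle}(x)) \wedge \forall x\,(Xx \rightarrow \mathrm{Green}(x))]
\]
```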

Psycholinguistic evidence for restricted quantification

Determiners express restricted quantifiers and not relations between sets.

Linguistics, Philosophy

Contributor(s): Jeffrey Lidz, Alexander Williams, Paul Pietroski
Non-ARHU Contributor(s): Tyler Knowlton *21, Justin Halberda (JHU)

Publisher: Springer

Quantificational determiners are often said to be devices for expressing relations. For example, the meaning of every is standardly described as the inclusion relation, with a sentence like every frog is green meaning roughly that the green things include the frogs. Here, we consider an older, non-relational alternative: determiners are tools for creating restricted quantifiers. On this view, determiners specify how many elements of a restricted domain (e.g., the frogs) satisfy a given condition (e.g., being green). One important difference concerns how the determiner treats its two grammatical arguments. On the relational view, the arguments are on a logical par as independent terms that specify the two relata. But on the restricted view, the arguments play distinct logical roles: specifying the limited domain versus supplying an additional condition on domain entities. We present psycholinguistic evidence suggesting that the restricted view better describes what speakers know when they know the meaning of a determiner. In particular, we find that when asked to evaluate sentences of the form every F is G, participants mentally group the Fs but not the Gs. Moreover, participants forgo representing the group defined by the intersection of F and G. This tells against the idea that speakers understand every F is G as implying that the Fs bear a relation (e.g., inclusion) to a second group.
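
The two analyses of every F is G can be notated roughly as follows; this is a generic textbook-style rendering, not necessarily the formalism used in the paper.

```latex
% Relational view: "every" names a relation (inclusion) between two sets,
% with its arguments F and G supplying the two relata on a logical par.
\[
  \mathrm{every}(F,G) \iff \{x : F(x)\} \subseteq \{x : G(x)\}
\]
% Restricted-quantifier view: F fixes a restricted domain of quantification,
% and G supplies a further condition on members of that domain
% (read: "for every x in the F-domain, x is G").
\[
  [\forall x : F(x)]\; G(x)
\]
```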

Moving away from lexicalism in psycho- and neuro-linguistics

Linguistics

Contributor(s): Ellen Lau, Alex Krauska

In standard models of language production or comprehension, the elements which are retrieved from memory and combined into a syntactic structure are “lemmas” or “lexical items.” Such models implicitly take a “lexicalist” approach, which assumes that lexical items store meaning, syntax, and form together, that syntactic and lexical processes are distinct, and that syntactic structure does not extend below the word level. Across the last several decades, linguistic research examining a typologically diverse set of languages has provided strong evidence against this approach. These findings suggest that syntactic processes apply both above and below the “word” level, and that both meaning and form are partially determined by the syntactic context. This has significant implications for psychological and neurological models of language processing as well as for the way that we understand different types of aphasia and other language disorders. As a consequence of the lexicalist assumptions of these models, many kinds of sentences that speakers produce and comprehend—in a variety of languages, including English—are challenging for them to account for. Here we focus on language production as a case study. In order to move away from lexicalism in psycho- and neuro-linguistics, it is not enough to simply update the syntactic representations of words or phrases; the processing algorithms involved in language production are constrained by the lexicalist representations that they operate on, and thus also need to be reimagined. We provide an overview of the arguments against lexicalism, discuss how lexicalist assumptions are represented in models of language production, and examine the types of phenomena that they struggle to account for as a consequence. We also outline what a non-lexicalist alternative might look like, as a model that does not rely on a lemma representation, but instead represents that knowledge as separate mappings between (a) meaning and syntax and (b) syntax and form, with a single integrated stage for the retrieval and assembly of syntactic structure. By moving away from lexicalist assumptions, this kind of model provides better cross-linguistic coverage and aligns better with contemporary syntactic theory.

A subject relative clause preference in a split-ergative language: ERP evidence from Georgian

Is processing subject-relative clauses easier even in an ergative language?

Linguistics

Contributor(s): Ellen Lau, Maria Polinsky
Non-ARHU Contributor(s): Nancy Clarke, Michaela Socolof, Rusudan Asatiani

A fascinating descriptive property of human language processing whose explanation is still debated is that subject-gap relative clauses are easier to process than object-gap relative clauses, across a broad range of languages with different properties. However, recent work suggests that this generalization does not hold in Basque, an ergative language, and has motivated an alternative generalization in which the preference is for gaps in morphologically unmarked positions—subjects in nominative-accusative languages, and objects and intransitive subjects in ergative-absolutive languages. Here we examined whether this generalization extends to another ergative-absolutive language, Georgian. ERP and self-paced reading results show a large anterior negativity and slower reading times when a relative clause is disambiguated to an object relative vs a subject relative. These data thus suggest that in at least some ergative-absolutive languages, the classic descriptive generalization—that object relative clauses are more costly than subject relative clauses—still holds.

The Binding Problem 2.0: Beyond Perceptual Features

On the problem of binding to object indices, beyond perceptual features.

Linguistics

Contributor(s): Ellen Lau, Xinchi Yu

The “binding problem” has been a central question in vision science for some 30 years: When encoding multiple objects or maintaining them in working memory, how are we able to correctly represent the correspondence between a specific feature and its corresponding object? In this letter we argue that the boundaries of this research program in fact extend far beyond vision, and we call on the broader cognitive science community to pursue, in coordinated fashion, this central question for cognition, which we dub “Binding Problem 2.0”.

Parser-Grammar Transparency and the Development of Syntactic Dependencies

Learning a grammar is sufficient for learning to parse.

Linguistics

Contributor(s): Jeffrey Lidz

A fundamental question in psycholinguistics concerns how grammatical structure contributes to real-time sentence parsing and understanding. While many argue that grammatical structure is only loosely related to on-line parsing, others hold the view that the two are tightly linked. Here, I use the incremental growth of grammatical structure in developmental time to demonstrate that as new grammatical knowledge becomes available to children, they use that knowledge in their incremental parsing decisions. Given the tight link between the acquisition of new knowledge and the use of that knowledge in recognizing sentence structure, I argue in favor of a tight link between grammatical structure and parsing mechanics.

On substance and Substance-Free Phonology: Where we are at and where we are going

On the abstractness of phonology.

Linguistics

Contributor(s): Alex Chabot

In this introduction [to this special issue of the journal, on substance-free phonology], I will briefly trace the development of features in phonological theory, with particular emphasis on their relationship to phonetic substance. I will show that substance-free phonology is, in some respects, the resurrection of a concept that was fundamental to early structuralist views of features as symbolic markers, whose phonological role eclipses any superficial correlates to articulatory or acoustic objects. In the process, I will highlight some of the principal questions that this epistemological tack raises, and how the articles in this volume contribute to our understanding of those questions.
