
Research

Research at our top-ranked department spans syntax, semantics, phonology, language acquisition, computational linguistics, psycholinguistics and neurolinguistics. 

Connections between our core competencies are strong, with theoretical, experimental and computational work typically pursued in tandem.

A network of collaboration at all levels sustains a research climate that is both vigorous and friendly. Here new ideas develop in conversation, stimulated by the steady activity of our labs and research groups, frequent student meetings with faculty, regular talks by local and invited scholars, and collaborations with the broader University of Maryland language science community, the largest and most integrated language science research community in North America.


Syntactic bootstrapping attitude verbs despite impoverished morphosyntax

Even when acquiring Chinese, children assign belief semantics to verbs whose objects morphosyntactically resemble declarative main clauses, and desire semantics to others.

Linguistics

Contributor(s): Valentine Hacquard, Jeffrey Lidz
Non-ARHU Contributor(s): Nick Huang *19, Aaron Steven White *15, Chia-Hsuan Liao *20

Attitude verbs like think and want describe mental states (belief and desire) that lack reliable physical correlates that could help children learn their meanings. Nevertheless, children succeed in doing so. For this reason, attitude verbs have been a parade case for syntactic bootstrapping. We assess a recent syntactic bootstrapping hypothesis, in which children assign belief semantics to verbs whose complement clauses morphosyntactically resemble the declarative main clauses of their language, while assigning desire semantics to verbs whose complement clauses do not. This hypothesis, building on the cross-linguistic generalization that belief complements have the morphosyntactic hallmarks of declarative main clauses, has been elaborated for languages with relatively rich morphosyntax. This article looks at Mandarin Chinese, whose null arguments and impoverished morphology mean that the differences necessary for syntactic bootstrapping might be much harder to detect. Our corpus analysis, however, shows that Mandarin belief complements have the profile of declarative main clauses, while desire complements do not. We also show that a computational implementation of this hypothesis can learn the right semantic contrasts between Mandarin and English belief and desire verbs, using morphosyntactic features in child-ambient speech. These results provide novel cross-linguistic support for this syntactic bootstrapping hypothesis.
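The bootstrapping procedure the abstract describes can be caricatured as a similarity check: measure how closely a verb's complement clauses match the morphosyntactic profile of declarative main clauses, and assign belief or desire semantics accordingly. The sketch below is a toy illustration of that idea, not the paper's implementation; the feature names (aspect marking, sentence-final particles) and the numbers are hypothetical.

```python
# Hypothetical profile: how often declarative main clauses in the input
# show each morphosyntactic hallmark (feature names are illustrative).
MAIN_CLAUSE_PROFILE = {"aspect_marking": 0.9, "final_particle": 0.8}

def classify_attitude_verb(complement_profile, threshold=0.5):
    """Label a verb 'belief' if its complement clauses resemble declarative
    main clauses, 'desire' otherwise -- a toy sketch of syntactic bootstrapping."""
    diffs = [abs(complement_profile[f] - MAIN_CLAUSE_PROFILE[f])
             for f in MAIN_CLAUSE_PROFILE]
    similarity = 1 - sum(diffs) / len(diffs)
    return "belief" if similarity >= threshold else "desire"
```

On this toy scale, a verb whose complements frequently carry main-clause hallmarks comes out belief-like; one whose complements lack them comes out desire-like, even in a language where the relevant cues are sparse.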

Language-Internal Reanalysis of Clitic Placement in Heritage Grammars Reduces the Cost of Computation: Evidence from Bulgarian

Heritage speakers of Bulgarian reanalyze the principles of clitic placement.

Linguistics

Contributor(s): Maria Polinsky
Non-ARHU Contributor(s): Tanya Ivanova-Sullivan (New Mexico), Irina A. Sekerina (CUNY), Davood Tofighi (New Mexico)

The study offers novel evidence on the grammar and processing of clitic placement in heritage languages. Building on earlier findings of divergent clitic placement in heritage European Portuguese and Serbian, this study extends this line of inquiry to Bulgarian, a language where clitic placement is subject to strong prosodic constraints. We found that, in heritage Bulgarian, clitic placement is processed and rated differently than in the baseline, and we asked whether such clitic misplacement results from transfer from the dominant language or follows from language-internal reanalysis. We used a self-paced listening task and an aural acceptability rating task with 13 English-dominant, highly proficient heritage speakers and 22 monolingual speakers of Bulgarian. Heritage speakers of Bulgarian process and rate the grammatical proclitic and ungrammatical enclitic positions as equally acceptable, and we contend that this pattern is due to language-internal reanalysis. We suggest that the trigger for such reanalysis is the overgeneralization of the prosodic Strong Start Constraint from the left edge of the clause to any position in the sentence.

All Focus is Contrastive: On Polarity (Verum) Focus, Answer Focus, Contrastive Focus and Givenness

A general theory of focus and givenness.

Linguistics

Contributor(s): Daniel Goodhue

I develop a general theory of focus and givenness that can account for truly contrastive focus, and for polarity focus, including data that are sometimes set apart under the label “verum focus”. I show that polarity focus creates challenges for classic theories of focus (e.g. Rooth 1992, a.o.) that can be dealt with by requiring that all focus marking is truly contrastive, and that givenness deaccenting imposes its own distinct requirement on prominence shifts. To enforce true contrast, I employ innocent exclusion (Fox 2007), which I suggest may impose a general filter on what counts as a valid alternative. A key, novel feature of my account is that focal targets are split into two kinds, those that are contextually supported and those that are constructed ad hoc, and that the presence of a contextually supported target can block the ability to construct an ad hoc target. This enables a novel explanation of the data motivating true contrast, and enables polarity focus to be brought into the fold of a unified and truly contrastive theory of focus. I then compare the account to theories of verum focus that make use of non-focus-based VERUM operators, and make the argument that the focus account is more parsimonious and has better empirical coverage.

The Power of Ignoring: Filtering Input for Argument Structure Acquisition

How to avoid learning from misleading data by identifying a filter without knowing what to filter.

Linguistics

Contributor(s): Naomi Feldman, Jeffrey Lidz
Non-ARHU Contributor(s): Laurel Perkins *19 (UCLA)

Learning in any domain depends on how the data for learning are represented. In the domain of language acquisition, children’s representations of the speech they hear determine what generalizations they can draw about their target grammar. But these input representations change over development as a function of children’s developing linguistic knowledge, and may be incomplete or inaccurate when children lack the knowledge to parse their input veridically. How does learning succeed in the face of potentially misleading data? We address this issue using the case study of “non-basic” clauses in verb learning. A young infant hearing What did Amy fix? might not recognize that what stands in for the direct object of fix, and might think that fix is occurring without a direct object. We follow a previous proposal that children might filter non-basic clauses out of the data for learning verb argument structure, but offer a new approach. Instead of assuming that children identify the data to filter in advance, we demonstrate computationally that it is possible for learners to infer a filter on their input without knowing which clauses are non-basic. We instantiate a learner that considers the possibility that it misparses some of the sentences it hears, and learns to filter out those parsing errors in order to correctly infer transitivity for the majority of 50 frequent verbs in child-directed speech. Our learner offers a novel solution to the problem of learning from immature input representations: Learners may be able to avoid drawing faulty inferences from misleading data by identifying a filter on their input, without knowing in advance what needs to be filtered.
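The idea of inferring a filter without knowing in advance what to filter can be sketched as a small mixture model: each verb is either transitive (its apparent direct objects are real) or intransitive (apparent objects arise only from misparses, at some rate eps), and the learner estimates the misparse rate and the verb labels jointly. The toy EM below illustrates that logic under those simplifying assumptions; it is not the paper's model, and the counts in the usage example are invented.

```python
import math

def binom_ll(k, n, p):
    # Log-likelihood of k "object" frames out of n under object rate p.
    return k * math.log(p) + (n - k) * math.log(1 - p)

def infer_transitivity(counts, n_iter=30):
    """Toy EM. counts maps verb -> (frames with an apparent object, total frames).
    Transitive verbs show objects at rate 1 - eps; intransitive verbs show
    apparent objects only through misparses, at rate eps."""
    eps = 0.3  # initial guess for the misparse rate
    post = {}
    for _ in range(n_iter):
        # E-step: posterior probability that each verb is transitive.
        for verb, (k, n) in counts.items():
            lt = binom_ll(k, n, 1 - eps)
            li = binom_ll(k, n, eps)
            post[verb] = 1 / (1 + math.exp(li - lt))
        # M-step: re-estimate eps from the apparent counterexamples.
        num = sum(p * (n - k) + (1 - p) * k
                  for (k, n), p in zip(counts.values(), post.values()))
        eps = min(max(num / sum(n for _, n in counts.values()), 0.05), 0.45)
    return post, eps
```

Given, say, a verb seen with an object in 90 of 100 frames and another in 10 of 100, the learner labels the first transitive and attributes the second's apparent objects to parsing error, without ever being told which frames were misparsed.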

Parallel processing in speech perception with local and global representations of linguistic context

MEG evidence for parallel representation of local and global context in speech processing.

Linguistics

Contributor(s): Ellen Lau, Philip Resnik, Shohini Bhattasali
Non-ARHU Contributor(s): Christian Brodbeck, Aura Cruz Heredia, Jonathan Simon

Speech processing is highly incremental. It is widely accepted that human listeners continuously use the linguistic context to anticipate upcoming concepts, words, and phonemes. However, previous evidence supports two seemingly contradictory models of how a predictive context is integrated with the bottom-up sensory input: Classic psycholinguistic paradigms suggest a two-stage process, in which acoustic input initially leads to local, context-independent representations, which are then quickly integrated with contextual constraints. This contrasts with the view that the brain constructs a single coherent, unified interpretation of the input, which fully integrates available information across representational hierarchies, and thus uses contextual constraints to modulate even the earliest sensory representations. To distinguish these hypotheses, we tested magnetoencephalography responses to continuous narrative speech for signatures of local and unified predictive models. Results provide evidence that listeners employ both types of models in parallel. Two local context models uniquely predict some part of early neural responses, one based on sublexical phoneme sequences, and one based on the phonemes in the current word alone; at the same time, even early responses to phonemes also reflect a unified model that incorporates sentence level constraints to predict upcoming phonemes. Neural source localization places the anatomical origins of the different predictive models in non-identical parts of the superior temporal lobes bilaterally, with the right hemisphere showing a relative preference for more local models. These results suggest that speech processing recruits both local and unified predictive models in parallel, reconciling previous disparate findings. Parallel models might make the perceptual system more robust, facilitate processing of unexpected inputs, and serve a function in language acquisition.

On the Acquisition of Attitude Verbs

On the acquisition of attitude verbs.

Linguistics

Contributor(s): Jeffrey Lidz, Valentine Hacquard

Attitude verbs, such as think, want, and know, describe internal mental states that leave few cues as to their meanings in the physical world. Consequently, their acquisition requires learners to draw from indirect evidence stemming from the linguistic and conversational contexts in which they occur. This provides us a unique opportunity to probe the linguistic and cognitive abilities that children deploy in acquiring these words. Through a few case studies, we show how children make use of syntactic and pragmatic cues to figure out attitude verb meanings and how their successes, and even their mistakes, reveal remarkable conceptual, linguistic, and pragmatic sophistication.

Using surprisal and fMRI to map the neural bases of broad and local contextual prediction during natural language comprehension

Modeling the influence of local and topical context on processing via an analysis of fMRI time courses during naturalistic listening.

Linguistics

Contributor(s): Philip Resnik, Shohini Bhattasali

Context guides comprehenders’ expectations during language processing, and information-theoretic surprisal is commonly used as an index of cognitive processing effort. However, prior work using surprisal has considered only within-sentence context, using n-grams, neural language models, or syntactic structure as conditioning context. In this paper, we extend the surprisal approach to use broader topical context, investigating the influence of local and topical context on processing via an analysis of fMRI time courses collected during naturalistic listening. Lexical surprisal calculated from n-gram and LSTM language models is used to capture effects of local context; to capture the effects of broader context, a new metric based on topic models, topical surprisal, is introduced. We identify distinct patterns of neural activation for lexical surprisal and topical surprisal. These differing neuro-anatomical correlates suggest that local and broad contextual cues during sentence processing recruit different brain regions and that those regions of the language network functionally contribute to processing different dimensions of contextual information during comprehension. More generally, our approach adds to a growing literature using methods from computational linguistics to operationalize and test hypotheses about neuro-cognitive mechanisms in sentence processing.
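Surprisal is the negative log probability of a word given its conditioning context, -log2 P(w | context). As a minimal illustration of the local, within-sentence variant, here is an add-one-smoothed bigram surprisal estimator; the smoothing scheme and any example corpus are illustrative, not the models used in the paper.

```python
import math
from collections import Counter

def bigram_surprisal(corpus, sentence):
    """Surprisal -log2 P(w_i | w_{i-1}) for each word after the first,
    estimated from add-one-smoothed bigram counts over a training corpus."""
    tokens = corpus.split()
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    vocab = len(unigrams)
    words = sentence.split()
    return [(w, -math.log2((bigrams[(prev, w)] + 1) / (unigrams[prev] + vocab)))
            for prev, w in zip(words, words[1:])]
```

Predictable continuations receive low surprisal and unexpected ones high; topical surprisal keeps the same -log P form but conditions on topic-model probabilities rather than the immediately preceding words.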

Eighteen-month-old infants represent nonlocal syntactic dependencies

Evidence that 18-month olds already represent filler-gap dependencies.

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s): Laurel Perkins *19 (UCLA)

The human ability to produce and understand an indefinite number of sentences is driven by syntax, a cognitive system that can combine a finite number of primitive linguistic elements to build arbitrarily complex expressions. The expressive power of syntax comes in part from its ability to encode potentially unbounded dependencies over abstract structural configurations. How does such a system develop in human minds? We show that 18-mo-old infants are capable of representing abstract nonlocal dependencies, suggesting that a core property of syntax emerges early in development. Our test case is English wh-questions, in which a fronted wh-phrase can act as the argument of a verb at a distance (e.g., What did the chef burn?). Whereas prior work has focused on infants’ interpretations of these questions, we introduce a test to probe their underlying syntactic representations, independent of meaning. We ask when infants know that an object wh-phrase and a local object of a verb cannot co-occur because they both express the same argument relation (e.g., *What did the chef burn the pizza?). We find that 1) 18-mo-olds demonstrate awareness of this complementary distribution pattern and thus represent the nonlocal grammatical dependency between the wh-phrase and the verb, but 2) younger infants do not. These results suggest that the second year of life is a period of active syntactic development, during which the computational capacities for representing nonlocal syntactic dependencies become evident.

The mental representation of universal quantifiers

On the psychological representations that give the meanings of "every" and "each".

Linguistics

Contributor(s): Jeffrey Lidz, Paul Pietroski
Non-ARHU Contributor(s): Tyler Knowlton *21, Justin Halberda (Hopkins)

Publisher: Springer

A sentence like every circle is blue might be understood in terms of individuals and their properties (e.g., for each thing that is a circle, it is blue) or in terms of a relation between groups (e.g., the blue things include the circles). Relatedly, theorists can specify the contents of universally quantified sentences in first-order or second-order terms. We offer new evidence that this logical first-order vs. second-order distinction corresponds to a psychologically robust individual vs. group distinction that has behavioral repercussions. Participants were shown displays of dots and asked to evaluate sentences with each, every, or all combined with a predicate (e.g., big dot). We find that participants are better at estimating how many things the predicate applied to after evaluating sentences in which universal quantification is indicated with every or all, as opposed to each. We argue that every and all are understood in second-order terms that encourage group representation, while each is understood in first-order terms that encourage individual representation. Since the sentences that participants evaluate are truth-conditionally equivalent, our results also bear on questions concerning how meanings are related to truth-conditions.
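The first-order vs. second-order contrast at issue can be stated schematically for every circle is blue:

```latex
% First-order: a claim about each individual circle
\forall x\,\bigl(\mathrm{circle}(x) \rightarrow \mathrm{blue}(x)\bigr)

% Second-order: a relation between groups (the circles and the blue things)
\{\,x : \mathrm{circle}(x)\,\} \subseteq \{\,x : \mathrm{blue}(x)\,\}
```

The two statements are truth-conditionally equivalent, which is exactly why a behavioral difference between each and every/all bears on how meanings relate to truth-conditions.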

Comment on “Nonadjacent dependency processing in monkeys, apes, and humans”

Auditory pattern recognition in nonhuman animals shares important characteristics with human phonology, but not human syntax.

Linguistics

Contributor(s): William Idsardi
Non-ARHU Contributor(s): Jonathan Rawski (Stony Brook), Jeffrey Heinz (Stony Brook)

Publisher: American Association for the Advancement of Science

We comment on the technical interpretation of the study of Watson et al. and caution against their conclusion that the behavioral evidence in their experiments points to nonhuman animals’ ability to learn syntactic dependencies, because their results are also consistent with the learning of phonological dependencies in human languages.
