
Jeffrey Lidz


Professor, Linguistics

(301) 405-8220

1413 Marie Mount Hall

Research Expertise

Language Acquisition
Psycholinguistics
Syntax

Publications

Individuals versus ensembles and "each" versus "every": Linguistic framing affects performance in a change detection task

More evidence that "every" but not "each" evokes ensemble representations.

Linguistics

Contributor(s): Jeffrey Lidz, Paul Pietroski
Non-ARHU Contributor(s): Tyler Knowlton *21, Justin Halberda

Though each and every are both distributive universal quantifiers, a common theme in linguistic and psycholinguistic investigations into them has been that each is somehow more individualistic than every. We offer a novel explanation for this generalization: each has a first-order meaning which serves as an internalized instruction to cognition to build a thought that calls for representing the (restricted) domain as a series of individuals; by contrast, every has a second-order meaning which serves as an instruction to build a thought that calls for grouping the domain. In support of this view, we show that these distinct meanings invite the use of distinct verification strategies, using a novel paradigm. In two experiments, participants who had been asked to verify sentences like each/every circle is green were subsequently given a change detection task. Those who evaluated each-sentences were better able to detect the change, suggesting they encoded the individual circles' colors to a greater degree. Taken together with past work demonstrating that participants recall group properties after evaluating sentences with every better than after evaluating sentences with each, these results support the hypothesis that each and every call for treating the individuals that constitute their domain differently: as independent individuals (each) or as members of an ensemble collection (every). We situate our findings within a conception of linguistic meanings as instructions for thought building, on which the format of the resulting thought has consequences for how meanings interface with non-linguistic cognition.


Psycholinguistic evidence for restricted quantification

Determiners express restricted quantifiers and not relations between sets.

Linguistics | Philosophy

Contributor(s): Jeffrey Lidz, Alexander Williams, Paul Pietroski
Non-ARHU Contributor(s): Tyler Knowlton *21, Justin Halberda (JHU)

Quantificational determiners are often said to be devices for expressing relations. For example, the meaning of every is standardly described as the inclusion relation, with a sentence like every frog is green meaning roughly that the green things include the frogs. Here, we consider an older, non-relational alternative: determiners are tools for creating restricted quantifiers. On this view, determiners specify how many elements of a restricted domain (e.g., the frogs) satisfy a given condition (e.g., being green). One important difference concerns how the determiner treats its two grammatical arguments. On the relational view, the arguments are on a logical par as independent terms that specify the two relata. But on the restricted view, the arguments play distinct logical roles: specifying the limited domain versus supplying an additional condition on domain entities. We present psycholinguistic evidence suggesting that the restricted view better describes what speakers know when they know the meaning of a determiner. In particular, we find that when asked to evaluate sentences of the form every F is G, participants mentally group the Fs but not the Gs. Moreover, participants forego representing the group defined by the intersection of F and G. This tells against the idea that speakers understand every F is G as implying that the Fs bear relation (e.g., inclusion) to a second group.


Parser-Grammar Transparency and the Development of Syntactic Dependencies

Learning a grammar is sufficient for learning to parse.

Linguistics

Contributor(s): Jeffrey Lidz

A fundamental question in psycholinguistics concerns how grammatical structure contributes to real-time sentence parsing and understanding. While many argue that grammatical structure is only loosely related to on-line parsing, others hold the view that the two are tightly linked. Here, I use the incremental growth of grammatical structure in developmental time to demonstrate that as new grammatical knowledge becomes available to children, they use that knowledge in their incremental parsing decisions. Given the tight link between the acquisition of new knowledge and the use of that knowledge in recognizing sentence structure, I argue in favor of a tight link between grammatical structure and parsing mechanics.


Lexicalization in the developing parser

Children make syntactic predictions based on the syntactic distributions of specific verbs, but do not assume that those patterns generalize to other verbs.

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s): Aaron Steven White *15 (University of Rochester)

We use children's noun learning as a probe into the nature of their syntactic prediction mechanism and the statistical knowledge on which that prediction mechanism is based. We focus on verb-based predictions, considering two possibilities: children's syntactic predictions might rely on distributional knowledge about specific verbs (i.e., they might be lexicalized), or they might rely on distributional knowledge that is general to all verbs. In an intermodal preferential looking experiment, we establish that verb-based predictions are lexicalized: children encode the syntactic distributions of specific verbs and use those distributions to make predictions, but they do not assume that those distributions hold of verbs in general.


Children's use of syntax in word learning

How children use syntax as evidence for word meaning.

Linguistics

Contributor(s): Jeffrey Lidz

This chapter investigates the role that syntax plays in guiding the acquisition of word meaning. It reviews data that reveal how children can use the syntactic distribution of a word as evidence for its meaning and discusses the principles of grammar that license such inferences. We delineate the role of thematic linking generalizations in the acquisition of action verbs, arguing that children use specific links between subject and agent and between object and patient to guide initial verb learning. In the domain of attitude verbs, we show that children’s knowledge of abstract links between subclasses of attitude verbs and their syntactic distribution enable learners to identify the meanings of their initial attitude verbs, such as think and want. Finally, we show that syntactic bootstrapping effects are not limited to verb learning but extend across the lexicon.


Syntactic bootstrapping attitude verbs despite impoverished morphosyntax

Even when acquiring Chinese, children assign belief semantics to verbs whose objects morphosyntactically resemble declarative main clauses, and desire semantics to others.

Linguistics

Contributor(s): Valentine Hacquard, Jeffrey Lidz
Non-ARHU Contributor(s): Nick Huang *19, Aaron Steven White *15, Chia-Hsuan Liao *20

Attitude verbs like think and want describe mental states (belief and desire) that lack reliable physical correlates that could help children learn their meanings. Nevertheless, children succeed in doing so. For this reason, attitude verbs have been a parade case for syntactic bootstrapping. We assess a recent syntactic bootstrapping hypothesis, in which children assign belief semantics to verbs whose complement clauses morphosyntactically resemble the declarative main clauses of their language, while assigning desire semantics to verbs whose complement clauses do not. This hypothesis, building on the cross-linguistic generalization that belief complements have the morphosyntactic hallmarks of declarative main clauses, has been elaborated for languages with relatively rich morphosyntax. This article looks at Mandarin Chinese, whose null arguments and impoverished morphology mean that the differences necessary for syntactic bootstrapping might be much harder to detect. Our corpus analysis, however, shows that Mandarin belief complements have the profile of declarative main clauses, while desire complements do not. We also show that a computational implementation of this hypothesis can learn the right semantic contrasts between Mandarin and English belief and desire verbs, using morphosyntactic features in child-ambient speech. These results provide novel cross-linguistic support for this syntactic bootstrapping hypothesis.


On the Acquisition of Attitude Verbs

On the acquisition of attitude verbs.

Linguistics

Contributor(s): Jeffrey Lidz, Valentine Hacquard

Attitude verbs, such as think, want, and know, describe internal mental states that leave few cues as to their meanings in the physical world. Consequently, their acquisition requires learners to draw from indirect evidence stemming from the linguistic and conversational contexts in which they occur. This provides us a unique opportunity to probe the linguistic and cognitive abilities that children deploy in acquiring these words. Through a few case studies, we show how children make use of syntactic and pragmatic cues to figure out attitude verb meanings and how their successes, and even their mistakes, reveal remarkable conceptual, linguistic, and pragmatic sophistication.

The Power of Ignoring: Filtering Input for Argument Structure Acquisition

How to avoid learning from misleading data by identifying a filter without knowing what to filter.

Linguistics

Contributor(s): Naomi Feldman, Jeffrey Lidz
Non-ARHU Contributor(s): Laurel Perkins *19 (UCLA)

Learning in any domain depends on how the data for learning are represented. In the domain of language acquisition, children’s representations of the speech they hear determine what generalizations they can draw about their target grammar. But these input representations change over development as a function of children’s developing linguistic knowledge, and may be incomplete or inaccurate when children lack the knowledge to parse their input veridically. How does learning succeed in the face of potentially misleading data? We address this issue using the case study of “non-basic” clauses in verb learning. A young infant hearing What did Amy fix? might not recognize that what stands in for the direct object of fix, and might think that fix is occurring without a direct object. We follow a previous proposal that children might filter non-basic clauses out of the data for learning verb argument structure, but offer a new approach. Instead of assuming that children identify the data to filter in advance, we demonstrate computationally that it is possible for learners to infer a filter on their input without knowing which clauses are non-basic. We instantiate a learner that considers the possibility that it misparses some of the sentences it hears, and learns to filter out those parsing errors in order to correctly infer transitivity for the majority of 50 frequent verbs in child-directed speech. Our learner offers a novel solution to the problem of learning from immature input representations: learners may be able to avoid drawing faulty inferences from misleading data by identifying a filter on their input, without knowing in advance what needs to be filtered.
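The filtering idea lends itself to a compact illustration. Below is a toy sketch, not the authors' actual model: a learner observes, for each verb, counts of clauses parsed with and without a direct object, and uses expectation-maximization to jointly infer a shared misparse rate and each verb's transitivity, so that objectless tokens of transitive verbs are filtered as parse errors without being identified in advance. The counts, the `DELTA` noise rate, and all names here are hypothetical.

```python
import math

DELTA = 0.01  # hypothetical rate at which intransitives get a spurious object

def log_like(n_with, n_without, p_with):
    """Log-likelihood of the counts given an object rate p_with
    (up to a binomial coefficient that cancels in the posterior)."""
    return n_with * math.log(p_with) + n_without * math.log(1.0 - p_with)

def em_filter(counts, iters=50):
    """counts: {verb: (n_with_object, n_without_object)}.
    Returns (posterior probability each verb is transitive, inferred
    parse-error rate eps). A transitive verb surfaces without an object
    only via a parse error (rate eps, shared across verbs)."""
    eps = 0.5  # initial guess for the misparse rate
    post = {}
    for _ in range(iters):
        # E-step: posterior that each verb is transitive (uniform prior).
        for v, (nw, nwo) in counts.items():
            ll_trans = log_like(nw, nwo, 1.0 - eps)
            ll_intr = log_like(nw, nwo, DELTA)
            m = max(ll_trans, ll_intr)
            pt, pi = math.exp(ll_trans - m), math.exp(ll_intr - m)
            post[v] = pt / (pt + pi)
        # M-step: eps is the expected share of objectless clauses among
        # tokens attributed to transitive verbs -- the "filtered" errors.
        num = sum(post[v] * nwo for v, (nw, nwo) in counts.items())
        den = sum(post[v] * (nw + nwo) for v, (nw, nwo) in counts.items())
        eps = min(max(num / den, 1e-6), 1 - 1e-6)
    return post, eps

# Hypothetical child-directed counts: "fix" is transitive but sometimes
# misparsed (e.g., "What did Amy fix?"); "sleep" is intransitive.
data = {"fix": (90, 10), "sleep": (0, 100)}
post, eps = em_filter(data)
```

On this toy input the learner classifies "fix" as transitive and "sleep" as intransitive while recovering a misparse rate near 0.1, even though no individual clause was ever labeled as a parse error.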


Eighteen-month-old infants represent nonlocal syntactic dependencies

Evidence that 18-month-olds already represent filler-gap dependencies.

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s): Laurel Perkins *19 (UCLA)

The human ability to produce and understand an indefinite number of sentences is driven by syntax, a cognitive system that can combine a finite number of primitive linguistic elements to build arbitrarily complex expressions. The expressive power of syntax comes in part from its ability to encode potentially unbounded dependencies over abstract structural configurations. How does such a system develop in human minds? We show that 18-month-old infants are capable of representing abstract nonlocal dependencies, suggesting that a core property of syntax emerges early in development. Our test case is English wh-questions, in which a fronted wh-phrase can act as the argument of a verb at a distance (e.g., What did the chef burn?). Whereas prior work has focused on infants’ interpretations of these questions, we introduce a test to probe their underlying syntactic representations, independent of meaning. We ask when infants know that an object wh-phrase and a local object of a verb cannot co-occur because they both express the same argument relation (e.g., *What did the chef burn the pizza). We find that (1) 18-month-olds demonstrate awareness of this complementary distribution pattern and thus represent the nonlocal grammatical dependency between the wh-phrase and the verb, but (2) younger infants do not. These results suggest that the second year of life is a period of active syntactic development, during which the computational capacities for representing nonlocal syntactic dependencies become evident.


The mental representation of universal quantifiers

On the psychological representations that give the meanings of "every" and "each".

Linguistics

Contributor(s): Jeffrey Lidz, Paul Pietroski
Non-ARHU Contributor(s): Tyler Knowlton *21, Justin Halberda (Hopkins)

A sentence like every circle is blue might be understood in terms of individuals and their properties (e.g., for each thing that is a circle, it is blue) or in terms of a relation between groups (e.g., the blue things include the circles). Relatedly, theorists can specify the contents of universally quantified sentences in first-order or second-order terms. We offer new evidence that this logical first-order vs. second-order distinction corresponds to a psychologically robust individual vs. group distinction that has behavioral repercussions. Participants were shown displays of dots and asked to evaluate sentences with each, every, or all combined with a predicate (e.g., big dot). We find that participants are better at estimating how many things the predicate applied to after evaluating sentences in which universal quantification is indicated with every or all, as opposed to each. We argue that every and all are understood in second-order terms that encourage group representation, while each is understood in first-order terms that encourage individual representation. Since the sentences that participants evaluate are truth-conditionally equivalent, our results also bear on questions concerning how meanings are related to truth-conditions.


Linguistic meanings as cognitive instructions

"More" and "most" do not encode the same sorts of comparison.

Linguistics

Contributor(s): Tyler Knowlton, Paul Pietroski, Jeffrey Lidz
Non-ARHU Contributor(s): Tim Hunter *10 (UCLA), Alexis Wellwood *14 (USC), Darko Odic (University of British Columbia), Justin Halberda (Johns Hopkins University)

Natural languages like English connect pronunciations with meanings. Linguistic pronunciations can be described in ways that relate them to our motor system (e.g., to the movement of our lips and tongue). But how do linguistic meanings relate to our nonlinguistic cognitive systems? As a case study, we defend an explicit proposal about the meaning of most by comparing it to the closely related more: whereas more expresses a comparison between two independent subsets, most expresses a subset–superset comparison. Six experiments with adults and children demonstrate that these subtle differences between their meanings influence how participants organize and interrogate their visual world. In otherwise identical situations, changing the word from most to more affects preferences for picture–sentence matching (experiments 1–2), scene creation (experiments 3–4), memory for visual features (experiment 5), and accuracy on speeded truth judgments (experiment 6). These effects support the idea that the meanings of more and most are mental representations that provide detailed instructions to conceptual systems.


Japanese children's knowledge of the locality of "zibun" and "kare"

Initial errors in the acquisition of the Japanese local- or long-distance anaphor "zibun."

Linguistics

Contributor(s): Jeffrey Lidz, Naomi Feldman
Non-ARHU Contributor(s): Naho Orita *15, Hajime Ono *06

Although the Japanese reflexive zibun can be bound both locally and across clause boundaries, the third-person pronoun kare cannot take a local antecedent. These are properties that children need to learn about their language, but we show that the direct evidence of the binding possibilities of zibun is sparse and the evidence of kare is absent in speech to children, leading us to ask about children’s knowledge. We show that children, unlike adults, incorrectly reject the long-distance antecedent for zibun, and while being able to access this antecedent for the non-local pronoun kare, they consistently reject the local antecedent for this pronoun. These results suggest that children’s lack of matrix readings for zibun is not due to their understanding of discourse context but to properties of their language understanding.


Null Objects in Korean: Experimental Evidence for the Argument Ellipsis Analysis

Experimental evidence supports an analysis of Null Object constructions in Korean as instances of object ellipsis.

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s): Chung-hye Han, Kyeong-min Kim, Keir Moulton

Null object (NO) constructions in Korean and Japanese have received different accounts: as (a) argument ellipsis (Oku 1998, S. Kim 1999, Saito 2007, Sakamoto 2015), (b) VP-ellipsis after verb raising (Otani and Whitman 1991, Funakoshi 2016), or (c) instances of base-generated pro (Park 1997, Hoji 1998, 2003). We report results from two experiments supporting the argument ellipsis analysis for Korean. Experiment 1 builds on K.-M. Kim and Han’s (2016) finding of interspeaker variation in whether the pronoun ku can be bound by a quantifier. Results showed that a speaker’s acceptance of quantifier-bound ku positively correlates with acceptance of sloppy readings in NO sentences. We argue that an ellipsis account, in which the NO site contains internal structure hosting the pronoun, accounts for this correlation. Experiment 2, testing the recovery of adverbials in NO sentences, showed that only the object (not the adverb) can be recovered in the NO site, excluding the possibility of VP-ellipsis. Taken together, our findings suggest that NOs result from argument ellipsis in Korean.


Hope for syntactic bootstrapping

Some mental state verbs take a finite clause as their object, while others take an infinitive, and the two groups differ reliably in meaning. Remarkably, children can use this correlation to narrow down the meaning of an unfamiliar verb.

Linguistics

Contributor(s): Valentine Hacquard, Jeffrey Lidz
Non-ARHU Contributor(s): Kaitlyn Harrigan (*15)

We explore children’s use of syntactic distribution in the acquisition of attitude verbs, such as think, want, and hope. Because attitude verbs refer to concepts that are opaque to observation but have syntactic distributions predictive of semantic properties, we hypothesize that syntax may serve as an important cue to learning their meanings. Using a novel methodology, we replicate previous literature showing an asymmetry between acquisition of think and want, and we additionally demonstrate that interpretation of a less frequent attitude verb, hope, patterns with type of syntactic complement. This supports the view that children treat syntactic frame as informative about an attitude verb’s meaning.


Filler-gap dependency comprehension at 15 months: The role of vocabulary

New evidence from preferential looking suggests that 15-month-olds can correctly understand wh-questions and relative clauses under certain experimental conditions, but perhaps only by noticing that a verb is missing an expected dependent.

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s): Laurel Perkins (*19)

Fifteen-month-olds behave as if they comprehend filler-gap dependencies such as wh-questions and relative clauses. On one hypothesis, this success does not reflect adult-like representations but rather a “gap-driven” interpretation heuristic based on verb knowledge. Infants who know that feed is transitive may notice that a predicted direct object is missing in Which monkey did the frog feed __? and then search the display for the animal that got fed. This gap-driven account predicts that 15-month-olds will perform accurately only if they know enough verbs to deploy this interpretation heuristic; therefore, performance should depend on vocabulary. We test this prediction in a preferential looking task and find corroborating evidence: only 15-month-olds with higher vocabulary behave as if they comprehend wh-questions and relative clauses. This result reproduces the previous finding that 15-month-olds can identify the right answer for wh-questions and relative clauses in certain experimental contexts, and is moreover consistent with the gap-driven heuristic account of this behavior.


Learning, memory and syntactic bootstrapping: A meditation

Do children learning words rely on memories for where they have heard the word before? Jeff Lidz argues that memory for syntactic context plays a larger role than memory for referential context.

Linguistics

Contributor(s): Jeffrey Lidz
Lila Gleitman’s body of work on word learning raises an apparent paradox. Whereas work on syntactic bootstrapping depends on learners retaining information about the set of distributional contexts that a word occurs in, work on identifying a word’s referent suggests that learners do not retain information about the set of extralinguistic contexts that a word occurs in. I argue that this asymmetry derives from the architecture of the language faculty. Learners expect words with similar meanings to have similar distributions, and so learning depends on a memory for syntactic environments. The referential context in which a word is used is less constrained and hence contributes less to the memories that drive word learning.


Prosody and Function Words Cue the Acquisition of Word Meanings in 18-Month-Old Infants

18-month-old infants use prosody and function words to recover the syntactic structure of a sentence, which in turn constrains the possible meanings of novel words the sentence contains.

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s): Angela Xiaoxue He (*15), Alex de Carvalho, Anne Christophe

Language acquisition presents a formidable task for infants, for whom word learning is a crucial yet challenging step. Syntax (the rules for combining words into sentences) has been robustly shown to be a cue to word meaning. But how can infants access syntactic information when they are still acquiring the meanings of words? We investigated the contribution of two cues that may help infants break into the syntax and give a boost to their lexical acquisition: phrasal prosody (speech melody) and function words, both of which are accessible early in life and correlate with syntactic structure in the world’s languages. We show that 18-month-old infants use prosody and function words to recover sentences’ syntactic structure, which in turn constrains the possible meanings of novel words: Participants (N = 48 in each of two experiments) interpreted a novel word as referring to either an object or an action, given its position within the prosodic-syntactic structure of sentences.


The importance of input representations

Learning from data is not incompatible with approaches that attribute rich initial linguistic knowledge to the learner. On the contrary, such approaches must still account for how knowledge guides learners in using their data to infer a grammar.

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s): Laurel Perkins (*19)

Language learners use the data in their environment in order to infer the grammatical system that produced that data. Yang (2018) makes the important point that this process requires integrating learners’ experiences with their current linguistic knowledge. A complete theory of language acquisition must explain how learners leverage their developing knowledge in order to draw further inferences on the basis of new data. As Yang and others have argued, the fact that input plays a role in learning is orthogonal to the question of whether language acquisition is primarily knowledge-driven or data-driven (J. A. Fodor, 1966; Lidz & Gagliardi, 2015; Lightfoot, 1991; Wexler & Culicover, 1980). Learning from data is not incompatible with approaches that attribute rich initial linguistic knowledge to the learner. On the contrary, such approaches must still account for how knowledge guides learners in using their data to infer a grammar.

The explanatory power of linguistic theory

Jeff Lidz details evidence for the Predicate Internal Subject Hypothesis, and shows how its abstractness supports the "considerable sophistication" that the Chomskyan tradition imputes to the child learner.

Linguistics

Contributor(s): Jeffrey Lidz

The scope of children’s scope: Representation, parsing and learning

What do young children know about quantifier scope?

Linguistics

Contributor(s): Jeffrey Lidz
This paper reviews some developmental psycholinguistic literature on quantifier scope. I demonstrate how scope has been used as a valuable probe into children’s grammatical representations, the nature of children’s on-line understanding mechanisms, and the role that experience plays in language acquisition. First, children’s interpretations of certain scopally ambiguous sentences reveal that their syntactic representations are hierarchical, with the c-command relation playing a fundamental role in explaining interpretive biases. Second, children’s scope errors are explained by incremental parsing and interpretation mechanisms, paired with difficulty revising initial interpretations. Third, a priming manipulation reveals that children’s clauses, like those of adults, are represented with predicate-internal subjects. Finally, data on scope variation in Korean reveal that in the absence of disambiguating evidence, parameter setting is an essentially random process. Together, these discoveries reveal how the developmental psycholinguistics of scope has proved a valuable tool for probing issues of grammar, parsing and learning.


Similarity-based interference and the acquisition of adjunct control

Kids sometimes make errors in interpreting the understood subject of an adjunct predicate, like "before leaving." Juliana Gerard argues that these errors may result, not from a non-adultlike grammar, but from mistakes in sentence processing.

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s): Juliana Gerard
Previous research on the acquisition of adjunct control has observed non-adultlike behavior for sentences like “John bumped Mary after tripping on the sidewalk.” While adults only allow a subject control interpretation for these sentences (that John tripped on the sidewalk), preschool-aged children have been reported to allow a much wider range of interpretations. A number of different tasks have been used with the aim of identifying a grammatical source of children’s errors. In this paper, we consider the role of extragrammatical factors. In two comprehension experiments, we demonstrate that error rates go up when the similarity increases between an antecedent and a linearly intervening noun phrase, first with similarity in gender, and next with similarity in number marking. This suggests that difficulties with adjunct control are to be explained (at least in part) by the sentence processing mechanisms that underlie similarity-based interference in adults.


The role of incremental parsing in syntactically conditioned word learning

The girl is tapping with the tig. If you don't know what "tig" means, you'll look to what the girl is using to tap. And so will even very young children.

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s): Aaron Steven White, Rebecca Baier
In a series of three experiments, we use children’s noun learning as a probe into their syntactic knowledge as well as their ability to deploy this knowledge, investigating how the predictions children make about upcoming syntactic structure change as their knowledge changes. In the first two experiments, we show that children display a developmental change in their ability to use a noun’s syntactic environment as a cue to its meaning. We argue that this pattern arises from children’s reliance on their knowledge of verbs’ subcategorization frame frequencies to guide parsing, coupled with an inability to revise incremental parsing decisions. We show that this analysis is consistent with the syntactic distributions in child-directed speech. In the third experiment, we show that the change arises from predictions based on verbs’ subcategorization frame frequencies.


Verb learning in 14- and 18-month-old English-learning infants

Ordinarily, verbs in English label events while nouns do not. Angela He and Jeff Lidz show that even 18-month-olds can use this correlation to infer the meanings of novel words, given the understanding that "is _ ing" is a context for verbs.

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s): Angela He
The present study investigates English-learning infants’ early understanding of the link between the grammatical category verb and the conceptual category event, and their ability to recruit morphosyntactic information online to learn novel verb meanings. We report two experiments using an infant-controlled Habituation-Switch Paradigm. In Experiment 1, we habituated 14- and 18-month-old infants with two scenes, each labeled by a novel intransitive verb embedded in the frame “is ___ing”: a penguin-spinning scene paired with “it’s doking” and a penguin-cartwheeling scene paired with “it’s pratching”. At test, infants in both age groups dishabituated when the scene-sentence pairings got switched (e.g., penguin-spinning paired with “it’s pratching”). This finding is consistent with two explanations: (1) infants were able to link verbs to event concepts (as opposed to other concepts, e.g., objects), or (2) infants were simply tracking the surface-level mapping between scenes and sentences, and it was scene-sentence mismatch that elicited dishabituation, not their knowledge of the verb-event link. In Experiment 2, we investigated these two possibilities, and found that 14-month-olds were sensitive to any type of mismatch, whereas 18-month-olds dishabituated only to a mismatch that involved a change in word meaning. Together, these results provide evidence that 18-month-old English-learning infants are able to learn novel verbs by recruiting morphosyntactic cues for verb categorization, and use the verb-event link to constrain their search space of possible verb meanings.

Read More about Verb learning in 14- and 18-month-old English-learning infants

Language Acquisition

A handbook chapter on first language acquisition, aimed at the independent contributions of experience, domain-specific biases, prior knowledge and extralinguistic cognition in shaping how a grammar grows inside the mind of a child.

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s): Laurel Perkins
Dates:
A handbook chapter on first language acquisition, aimed at the independent contributions of experience, domain-specific biases, prior knowledge and extralinguistic cognition in shaping how a grammar grows inside the mind of a child.

On how verification tasks are related to verification procedures: A reply to Kotek et al.

How do we mentally represent the meaning of "most"? Here Tim Hunter clarifies the goals of Jeff Lidz and Paul Pietroski's project to answer this question, in response to misunderstandings.

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s): Tim Hunter, Darko Odic, Alexis Wellwood
Dates:
Kotek et al. (Nat Lang Semant 23: 119–156, 2015) argue on the basis of novel experimental evidence that sentences like ‘Most of the dots are blue’ are ambiguous, i.e. have two distinct truth conditions. Kotek et al. furthermore suggest that when their results are taken together with those of earlier work by Lidz et al. (Nat Lang Semant 19: 227–256, 2011), the overall picture that emerges casts doubt on the conclusions that Lidz et al. drew from their earlier results. We disagree with this characterization of the relationship between the two studies. Our main aim in this reply is to clarify the relationship as we see it. In our view, Kotek et al.’s central claims are simply logically independent of those of Lidz et al.: the former concern which truth condition(s) a certain kind of sentence has, while the latter concern the procedures that speakers choose for the purposes of determining whether a particular truth condition is satisfied in various scenes. The appearance of a conflict between the two studies stems from inattention to the distinction between questions about truth conditions and questions about verification procedures.

Read More about On how verification tasks are related to verification procedures: A reply to Kotek et al.

The Oxford Handbook of Developmental Linguistics

An essential compendium of contemporary research in language acquisition.

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s): William Snyder, Joe Pater
Dates:
1. Introduction, Jeffrey Lidz, William Snyder, and Joe Pater

Part I: The Acquisition of Sound Systems
2. The Acquisition of Phonological Inventories, Ewan Dunbar and William Idsardi
3. Phonotactics and Syllable Structure in Infant Speech Perception, Tania S. Zamuner and Viktor Kharlamov
4. Phonological Processes in Children's Production: Convergence with and Divergence from Adult Grammars, Heather Goad
5. Prosodic Phenomena: Stress, Tone, and Intonation, Mitsuhiko Ota

Part II: The Acquisition of Morphology
6. Compound Word Formation, William Snyder
7. Morpho-phonological Acquisition, Anne-Michelle Tessier
8. Processing Continuous Speech in Infancy: From Major Prosodic Units to Isolated Word Forms, Louise Goyet, Severine Millotte, Anne Christophe, and Thierry Nazzi

Part III: The Acquisition of Syntax
9. Argument Structure, Joshua Viau and Ann Bunger
10. Voice Alternations (Active, Passive, Middle), M. Teresa Guasti
11. On the Acquisition of Prepositions and Particles, Koji Sugisaki
12. A-Movement in Language Development, Misha Becker and Susannah Kirby
13. The Acquisition of Complements, Jill de Villiers and Tom Roeper
14. Acquisition of Questions, Rosalind Thornton
15. Root Infinitives in Child Language and the Structure of the Clause, John Grinstead
16. Mood Alternations, Kamil Ud Deen
17. Null Subjects, Virginia Valian
18. Case and Agreement, Paul Hagstrom
19. Acquiring Possessives, Theo Marinis

Part IV: The Acquisition of Semantics
20. Acquisition of Comparative and Degree Constructions, Kristen Syrett
21. Quantification in Child Language, Jeffrey Lidz
22. The Acquisition of Binding and Coreference, Sergio Baauw
23. Logical Connectives, Takuya Goro
24. The Expression of Genericity in Child Language, Ana T. Perez-Laroux
25. Lexical and Grammatical Aspect, Angeliek van Hout
26. Scalar Implicature, Anna Papafragou and Dimitrios Skordos

Part V: Theories of Learning
27. Computational Theories of Learning and Developmental Psycholinguistics, Jeffrey Heinz
28. Statistical Learning, Inductive Bias, and Bayesian Inference in Language Acquisition, Lisa Pearl and Sharon Goldwater
29. Computational Approaches to Parameter Setting in Generative Linguistics, William Gregory Sakas
30. Learning with Violable Constraints, Gaja Jarosz

Part VI: Atypical Populations
31. Language Development in Children with Developmental Disorders, Andrea Zukowski
32. The Genetics of Spoken Language, Jennifer Ganger
33. Phonological Disorders: Theoretical and Experimental Findings, Daniel A. Dinnsen, Jessica A. Barlow, and Judith A. Gierut

NPI licensing and beyond: Children's knowledge of the semantics of "any"

Visitor Lyn Tieu and mentor Jeff Lidz investigate preschoolers' understanding of negative polarity items like "any".

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s): Lyn Tieu
Dates:
This paper presents a study of preschool-aged children’s knowledge of the semantics of the negative polarity item (NPI) any. NPIs like any differ in distribution from non-polarity-sensitive indefinites like a: any is restricted to downward-entailing linguistic environments (Fauconnier 1975, 1979; Ladusaw 1979). But any also differs from plain indefinites in its semantic contribution; any can quantify over wider domains of quantification than plain indefinites. In fact, on certain accounts of NPI licensing, it is precisely the semantics of any that derives its restricted distribution (Kadmon & Landman 1993; Krifka 1995; Chierchia 2006, 2013). While previous acquisition studies have investigated children’s knowledge of the distributional constraints on any (O’Leary & Crain 1994; Thornton 1995; Xiang, Conroy, Lidz & Zukowski 2006; Tieu 2010), no previous study has targeted children’s knowledge of the semantics of the NPI. To address this gap in the existing literature, we present an experiment conducted with English-speaking adults and 4–5-year-old children, in which we compare their interpretation of sentences containing any with their interpretation of sentences containing the plain indefinite a and the bare plural. When presented with multiple domain alternatives, one of which was made more salient than the others, both adults and children restricted the domain of quantification for the plain indefinites to the salient subdomain. In the case of any, however, the adults and most of the children that we tested interpreted any as quantifying over the largest domain in the context. We discuss our findings in light of theories of NPI licensing that posit a connection between the distribution of NPIs and their underlying semantics, and conclude by raising further questions about the learnability of NPIs.

Read More about NPI licensing and beyond: Children's knowledge of the semantics of "any"

Discontinuous Development in the Acquisition of Filler-Gap Dependencies: Evidence from 15- and 20-Month-Olds

15-month-olds are able to understand relative clauses and wh-questions, but not by way of correctly representing their grammar.

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s): Annie Gagliardi, Tara M. Mease
Dates:
This article investigates infant comprehension of filler-gap dependencies. Three experiments probe 15- and 20-month-olds’ comprehension of two filler-gap dependencies: wh-questions and relative clauses. Experiment 1 shows that both age groups appear to comprehend wh-questions. Experiment 2 shows that only the younger infants appear to comprehend relative clauses, while Experiment 3 shows that when parsing demands are reduced, older children can comprehend them as well. We argue that this discontinuous pattern follows from an offset in the development of grammatical knowledge and the deployment mechanisms for using that knowledge in real time. Fifteen-month-olds, we argue, lack the grammatical representation of filler-gap dependencies but are able to achieve correct performance in the task by using argument structure information. Twenty-month-olds do represent filler-gap dependencies but are inefficient in deploying those representations in real time.

Read More about Discontinuous Development in the Acquisition of Filler-Gap Dependencies: Evidence from 15- and 20-Month-Olds

Endogenous sources of variation in language acquisition

Jeff Lidz and collaborators investigate inter-speaker variation in the grammar of quantifier scope in Korean.

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s): Chung-hye Han, Julien Musolino
Dates:
A fundamental question in the study of human language acquisition centers around apportioning explanatory force between the experience of the learner and the core knowledge that allows learners to represent that experience. We provide a previously unidentified kind of data identifying children’s contribution to language acquisition. We identify one aspect of grammar that varies unpredictably across a population of speakers of what is ostensibly a single language. We further demonstrate that the grammatical knowledge of parents and their children is independent. The combination of unpredictable variation and parent–child independence suggests that the relevant structural feature is supplied by each learner independent of experience with the language. This structural feature is abstract because it controls variation in more than one construction. The particular case we examine is the position of the verb in the clause structure of Korean. Because Korean is a head-final language, evidence for the syntactic position of the verb is both rare and indirect. We show that (i) Korean speakers exhibit substantial variability regarding this aspect of the grammar, (ii) this variability is attested between speakers but not within a speaker, (iii) this variability controls interpretation in two surface constructions, and (iv) it is independent in parents and children. According to our findings, when the exposure language is compatible with multiple grammars, learners acquire a single systematic grammar. Our observation that children and their parents vary independently suggests that the choice of grammar is driven in part by a process operating internal to individual learners.

Read More about Endogenous sources of variation in language acquisition

Syntactic and lexical inference in the acquisition of novel superlatives

Even four-year-olds are biased to think that determiners express relations between quantities, but lack the same bias for adjectives. How do they arrive at this bias?

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s): Alexis Wellwood, Annie Gagliardi
Dates:
Acquiring the correct meanings of words expressing quantities (seven, most) and qualities (red, spotty) presents a challenge to learners. Understanding how children succeed at this requires understanding, not only of what kinds of data are available to them, but also the biases and expectations they bring to the learning task. The results of our word-learning task with 4-year-olds indicate that a “syntactic bootstrapping” hypothesis correctly predicts a bias toward quantity-based interpretations when a novel word appears in the syntactic position of a determiner but also leaves open the explanation of a bias towards quality-based interpretations when the same word is presented in the syntactic position of an adjective. We develop four computational models that differentially encode how lexical, conceptual, and perceptual factors could generate the latter bias. Simulation results suggest it results from a combination of lexical bias and perceptual encoding.

Read More about Syntactic and lexical inference in the acquisition of novel superlatives

Expanding our Reach and Theirs: When Linguists go to High School

A report on outreach to local schools by the community of language scientists at UMCP.

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s): Yakov Kronrod
Dates:
In 2007, we began an outreach program in Linguistics with psychology students in a local majority–minority high school. In the years since, the initial collaboration has grown to include other schools and nurtured a culture of community engagement in the language sciences at the University of Maryland. The program has led to a number of benefits for both the public school students and the University researchers involved. Over the years, our efforts have developed into a multi-faceted outreach program targeting primary and secondary school as well as the public more broadly. Through our outreach, we attempt to take a modest step toward increasing public awareness and appreciation of the importance of language science, toward the integration of research into the school curriculum, and toward giving potential first-generation college students a taste of what they are capable of. In this article, we describe in detail our motivations and goals, the details of the activities, and where we can go from here.

Read More about Expanding our Reach and Theirs: When Linguists go to High School

How Nature Meets Nurture: Universal Grammar and Statistical Learning

Children acquire grammars on the basis of statistical information, interpreted through a system of linguistic representation that is substantially innate. Jeff Lidz and Annie Gagliardi propose a model of the process.

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s): Annie Gagliardi
Dates:
Evidence of children’s sensitivity to statistical features of their input in language acquisition is often used to argue against learning mechanisms driven by innate knowledge. At the same time, evidence of children acquiring knowledge that is richer than the input supports arguments in favor of such mechanisms. This tension can be resolved by separating the inferential and deductive components of the language learning mechanism. Universal Grammar provides representations that support deductions about sentences that fall outside of experience. In addition, these representations define the evidence that learners use to infer a particular grammar. The input is compared with the expected evidence to drive statistical inference. In support of this model, we review evidence of (a) children’s sensitivity to the environment, (b) mismatches between input and intake, (c) the need for learning mechanisms beyond innate representations, and (d) the deductive consequences of children’s acquired syntactic representations.

Read More about How Nature Meets Nurture: Universal Grammar and Statistical Learning

Linking parser development to acquisition of syntactic knowledge

How does a child's acquisition of a grammar relate to development in their ability to parse and understand sentences in real time?

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s): Akira Omaki
Dates:
Traditionally, acquisition of syntactic knowledge and the development of sentence comprehension behaviors have been treated as separate disciplines. This paper reviews a growing body of work on the development of incremental sentence comprehension mechanisms, and discusses how a better understanding of the developing parser can shed light on two linking problems that plague language acquisition research. The first linking problem is that children’s behavioral data that are observable to researchers do not provide a transparent window into the developing grammar, as children’s immature linguistic behaviors may reflect the immature parser. The second linking problem is that the input data that researchers investigate may not correspond veridically to the intake data that feed the language acquisition mechanisms, as the developing parser may misanalyze and incorrectly represent the input. Based on reviews of child language comprehension studies that shed light on these two linking problems, it is argued that further research is necessary to closely integrate parser development and acquisition of syntactic knowledge.

Statistical Insensitivity in the Acquisition of Tsez Noun Classes

How do children acquire noun classes? Annie Gagliardi and Jeff Lidz show that children acquiring Tsez are biased to use phonological over semantic cues, despite a statistical asymmetry in the other direction.

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s): Annie Gagliardi
Dates:
This paper examines the acquisition of noun classes in Tsez, looking in particular at the role of noun-internal distributional cues to class. We present a new corpus of child-directed Tsez speech, analyzing it to determine the proportion of nouns that children hear with this predictive information and how often this is heard in conjunction with overt noun class agreement information. Additionally we present an elicited production experiment that uncovers asymmetries in the classification of nouns with predictive features in the corpus and by children and adults. We show that children use noun-internal distributional information as a cue to noun class out of proportion with its reliability. Instead, children are biased to use phonological over semantic information, despite a statistical asymmetry in the other direction. We end with a discussion of where such a bias could come from.

Is she patting Katie? Constraints on pronominal reference in 30-month-olds

Preferential looking studies show that, already at 30 months, children's understanding of pronouns in "Katie patted herself" and "She patted Katie" is adult-like.

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s): Anastacia Conroy, Cynthia Lukyanenko
Dates:
In this study we investigate young children’s knowledge of syntactic constraints on noun phrase reference, by testing 30-month-olds’ interpretation of two types of transitive sentences. In a preferential looking task, we find that children prefer different interpretations for transitive sentences whose object NP is a name (e.g., She’s patting Katie) as compared with those whose object NP is a reflexive pronoun (e.g., She’s patting herself). They map the former onto an other-directed event (one girl patting another) and the latter onto a self-directed event (one girl patting her own head). These preferences are carried by high-vocabulary children in the sample, and suggest that 30-month-olds have begun to distinguish between different types of transitive sentences. Children’s adult-like interpretations are consistent with adherence to Principles A and C of Binding Theory, and suggest that further research using the preferential looking procedure to investigate young children’s knowledge of syntactic constraints may be fruitful.

Read More about Is she patting Katie? Constraints on pronominal reference in 30-month-olds


Parameters in Language Acquisition

"Parameters" are abstract features of grammar that govern many different observable structures and may vary across languages. Lisa Pearl and Jeff Lidz explore how this notion is used in theories of typology and acquisition.

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s): Lisa Pearl
Dates:
"Parameters" are abstract features of grammar that govern many different observable structures and may vary across languages. Lisa Pearl and Jeff Lidz explore how this notion is used in theories of typology and acquisition.

Conservativity and Learnability of Determiners

Tim Hunter and Jeff Lidz find evidence that 4- to 5-year-olds expect determiner meanings to be Conservative.

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s): Tim Hunter
Dates:
A striking cross-linguistic generalization about the semantics of determiners is that they never express non-conservative relations. To account for this one might hypothesize that the mechanisms underlying human language acquisition are unsuited to non-conservative determiner meanings. We present experimental evidence that 4- and 5-year-olds fail to learn a novel non-conservative determiner but succeed in learning a comparable conservative determiner, consistent with the learnability hypothesis.

Read More about Conservativity and Learnability of Determiners

Selective learning in the acquisition of Kannada ditransitives

Even young children have a highly abstract representation of ditransitive syntax.

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s): Joshua Viau
Dates:
In this paper we bring evidence from language acquisition to bear on the debate over the relative abstractness of children’s grammatical knowledge. We first identify one aspect of syntactic representation that exhibits a range of syntactic, morphological and semantic consequences both within and across languages, namely the hierarchical structure of ditransitive verb phrases. While the semantic consequences of this structure are parallel in English, Kannada, and Spanish, the word order and morphological reflexes of this structure diverge. Next we demonstrate that children learning Kannada have command of the relation between morphological form and semantic interpretation in ditransitives with respect to quantifier-variable binding. Finally, we offer a proposal on how a selective learning mechanism might succeed in identifying the appropriate structures in this domain despite the variability in surface expression.

Competence, Performance and the Locality of Quantifier Raising: Evidence from 4-year-old Children

Can quantifiers be interpreted outside of their own clause? Do the observed constraints have a grammatical source? Kristen Syrett and Jeff Lidz revisit these questions with experimental studies on the interpretation of ACD by both adults and children.

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s): Kristen Syrett
Dates:
We revisit the purported locality constraint of Quantifier Raising (QR) by investigating children's and adults' interpretation of ACD sentences, where the interpretation depends on the landing site targeted by QR out of an embedded clause. When ACD is embedded in a nonfinite clause, 4-year-old children and adults access the embedded and matrix interpretations. When ACD is embedded in a finite clause, and the matrix interpretation is generally believed to be ungrammatical, children and even some adults access both readings. This set of findings allows for the possibility that the source of QR's reputed locality constraint may instead be extragrammatical and provides insight into the development of the human sentence parser.

Language Learning and Language Universals

How do patterns in the environment interact with our innate capacities to produce our first languages?

Linguistics

Contributor(s): Jeffrey Lidz
Dates:
This paper explores the role of learning in generative grammar, highlighting interactions between distributional patterns in the environment and the innate structure of the language faculty. Reviewing three case studies, it is shown how learners use their language faculties to leverage the environment, making inferences from distributions to grammars that would not be licensed in the absence of a richly structured hypothesis space.

Restrictions on the Meaning of Determiners: Typological Generalisations and Learnability

Are nonconservative meanings for determiners unlearnable? And what about a determiner that means 'less than half'?

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s): Tim Hunter, Alexis Wellwood, Anastasia Conroy
Dates:
In this paper we examine the relationship between learnability and typology in the area of determiner meanings. We begin with two generalisations about the meanings that determiners of the world’s languages are found to have, and investigate the learnability of fictional determiners with unattested meanings. If participants in our experiments fail to learn such determiners, then this would suggest that they are unattested because they are unlearnable. If, on the other hand, participants are able to learn the determiners in question, then some other explanation for their absence in the languages of the world is necessary.

Priming of abstract logical representations in 4-year-olds

"Every horse did not jump over the fence." Preschoolers tend to hear this as meaning that none did. But the preference is not grammatical, as it can be reduced either by priming or changes to the context.

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s): Joshua Viau, Julien Musolino
Dates:

Though preschoolers in certain experimental contexts strongly prefer to interpret ambiguous sentences containing quantified NPs and negation on the basis of surface syntax (e.g., Musolino’s (1998) “observation of isomorphism”), contextual manipulations can lead to more adult-like behavior. But is isomorphism a purely pragmatic phenomenon, as recently proposed? In Experiment 1, we begin by isolating the contextual factor responsible for children’s improvement in Musolino and Lidz (2006). We then demonstrate in Experiment 2 that this factor can be used to prime inverse scope interpretations. To remove pragmatics from the equation altogether, we show in Experiment 3 that the same effect can be achieved via semantic priming. Our results represent the first clear evidence for priming of the abstract logico-syntactic structures underlying these interpretations and, thus, highlight the importance of language processing alongside pragmatic reasoning during children’s linguistic development.

When Domain General Learning Succeeds and When it Fails

Learning how to interpret anaphoric "one" requires domain-specific mechanisms.

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s): Lisa Pearl
Dates:
We identify three components of any learning theory: the representations, the learner’s data intake, and the learning algorithm. With these in mind, we model the acquisition of the English anaphoric pronoun one in order to identify necessary constraints for successful acquisition, and the nature of those constraints. Whereas previous modeling efforts have succeeded by using a domain-general learning algorithm that implicitly restricts the data intake to be a subset of the input, we show that the same kind of domain-general learning algorithm fails when it does not restrict the data intake. We argue that the necessary data intake restrictions are domain-specific in nature. Thus, while a domain-general algorithm can be quite powerful, a successful learner must also rely on domain-specific learning mechanisms when learning anaphoric one.