Jeffrey Lidz
Professor and Chair, Linguistics
Member, Maryland Language Science Center
jlidz@umd.edu
1413 Marie Mount Hall
Research Expertise
Language Acquisition
Psycholinguistics
Syntax
Publications
Thematic Content, Not Number Matching, Drives Syntactic Bootstrapping
Toddlers do not expect the structure of a sentence to match the structure of the concept under which they view its referent.
Children use correlations between the syntax of a clause and the meaning of its predicate to draw inferences about word meanings. On one proposal, these inferences are underwritten by a structural similarity between syntactic and semantic representations: learners expect that the number of clause arguments exactly matches the number of participant roles in the event concept under which its referent is viewed. We argue against this proposal, and in favor of a theory rooted in syntactic and semantic contents – in mappings from syntactic positions to thematic relations. We (i) provide evidence that infants view certain scenes under a concept with three participant relations (a girl taking a truck from a boy), and (ii) show that toddlers do not expect these representations to align numerically with clauses used to describe those scenes: they readily accept two-argument descriptions (“she pimmed the truck!”). This argues against syntactic bootstrapping theories underwritten by mappings between structural features of syntactic and semantic representations. Instead, our findings support bootstrapping based on grammatical and thematic content. Children’s earliest inferences may rely on the assumption that the syntactic asymmetry between subject and object correlates with a difference in how their referents relate to the event described by the sentence.
Read More about Thematic Content, Not Number Matching, Drives Syntactic Bootstrapping
Visual perception supports 4-place event representations: A case study of TRADING
Can adults visually represent a trading as a single event with four participants?
Events of social exchange, such as givings and tradings, are uniquely prevalent in human societies and cognitively privileged even at early stages of development. Such events may be represented as having 3 or even 4 participants. To do so in visual working memory would be at the limit of the system, which throughout development can track only 3 to 4 items. Using a case study of trading, we ask (i) whether adults can track all four participants in a trading scene, and (ii) whether they do so by chunking the scene into two giving events, each with 3 participants, to avoid placing the visual working memory system at its limit. We find that adults represent this scene under a 4-participant concept, and do not view the trade as two sequential giving events. We discuss further implications for event perception and verb learning in development.
Read More about Visual perception supports 4-place event representations: A case study of TRADING
Individuals versus ensembles and "each" versus "every": Linguistic framing affects performance in a change detection task
More evidence that "every" but not "each" evokes ensemble representations.
Though each and every are both distributive universal quantifiers, a common theme in linguistic and psycholinguistic investigations into them has been that each is somehow more individualistic than every. We offer a novel explanation for this generalization: each has a first-order meaning which serves as an internalized instruction to cognition to build a thought that calls for representing the (restricted) domain as a series of individuals; by contrast, every has a second-order meaning which serves as an instruction to build a thought that calls for grouping the domain. In support of this view, we show that these distinct meanings invite the use of distinct verification strategies, using a novel paradigm. In two experiments, participants who had been asked to verify sentences like each/every circle is green were subsequently given a change detection task. Those who evaluated each-sentences were better able to detect the change, suggesting they encoded the individual circles' colors to a greater degree. Taken together with past work demonstrating that participants recall group properties after evaluating sentences with every better than after evaluating sentences with each, these results support the hypothesis that each and every call for treating the individuals that constitute their domain differently: as independent individuals (each) or as members of an ensemble collection (every). We situate our findings within a conception of linguistic meanings as instructions for thought building, on which the format of the resulting thought has consequences for how meanings interface with non-linguistic cognition.
Psycholinguistic evidence for restricted quantification
Determiners express restricted quantifiers and not relations between sets.
Quantificational determiners are often said to be devices for expressing relations. For example, the meaning of every is standardly described as the inclusion relation, with a sentence like every frog is green meaning roughly that the green things include the frogs. Here, we consider an older, non-relational alternative: determiners are tools for creating restricted quantifiers. On this view, determiners specify how many elements of a restricted domain (e.g., the frogs) satisfy a given condition (e.g., being green). One important difference concerns how the determiner treats its two grammatical arguments. On the relational view, the arguments are on a logical par as independent terms that specify the two relata. But on the restricted view, the arguments play distinct logical roles: specifying the limited domain versus supplying an additional condition on domain entities. We present psycholinguistic evidence suggesting that the restricted view better describes what speakers know when they know the meaning of a determiner. In particular, we find that when asked to evaluate sentences of the form every F is G, participants mentally group the Fs but not the Gs. Moreover, participants forego representing the group defined by the intersection of F and G. This tells against the idea that speakers understand every F is G as implying that the Fs bear relation (e.g., inclusion) to a second group.
Read More about Psycholinguistic evidence for restricted quantification
Parser-Grammar Transparency and the Development of Syntactic Dependencies
Learning a grammar is sufficient for learning to parse.
A fundamental question in psycholinguistics concerns how grammatical structure contributes to real-time sentence parsing and understanding. While many argue that grammatical structure is only loosely related to on-line parsing, others hold the view that the two are tightly linked. Here, I use the incremental growth of grammatical structure in developmental time to demonstrate that as new grammatical knowledge becomes available to children, they use that knowledge in their incremental parsing decisions. Given the tight link between the acquisition of new knowledge and the use of that knowledge in recognizing sentence structure, I argue in favor of a tight link between grammatical structure and parsing mechanics.
Read More about Parser-Grammar Transparency and the Development of Syntactic Dependencies
Lexicalization in the developing parser
Children make syntactic predictions based on the syntactic distributions of specific verbs, but do not assume that the patterns can be generalized.
We use children's noun learning as a probe into the nature of their syntactic prediction mechanism and the statistical knowledge on which that prediction mechanism is based. We focus on verb-based predictions, considering two possibilities: children's syntactic predictions might rely on distributional knowledge about specific verbs (i.e., they might be lexicalized) or they might rely on distributional knowledge that is general to all verbs. In an intermodal preferential looking experiment, we establish that verb-based predictions are lexicalized: children encode the syntactic distributions of specific verbs and use those distributions to make predictions, but they do not assume that these distributions hold of verbs in general.
Children's use of syntax in word learning
How children use syntax as evidence for word meaning.
This chapter investigates the role that syntax plays in guiding the acquisition of word meaning. It reviews data that reveal how children can use the syntactic distribution of a word as evidence for its meaning and discusses the principles of grammar that license such inferences. We delineate the role of thematic linking generalizations in the acquisition of action verbs, arguing that children use specific links between subject and agent and between object and patient to guide initial verb learning. In the domain of attitude verbs, we show that children’s knowledge of abstract links between subclasses of attitude verbs and their syntactic distribution enable learners to identify the meanings of their initial attitude verbs, such as think and want. Finally, we show that syntactic bootstrapping effects are not limited to verb learning but extend across the lexicon.
Syntactic bootstrapping attitude verbs despite impoverished morphosyntax
Even when acquiring Chinese, children assign belief semantics to verbs whose objects morphosyntactically resemble declarative main clauses, and desire semantics to others.
Attitude verbs like think and want describe mental states (belief and desire) that lack reliable physical correlates that could help children learn their meanings. Nevertheless, children succeed in doing so. For this reason, attitude verbs have been a parade case for syntactic bootstrapping. We assess a recent syntactic bootstrapping hypothesis, in which children assign belief semantics to verbs whose complement clauses morphosyntactically resemble the declarative main clauses of their language, while assigning desire semantics to verbs whose complement clauses do not. This hypothesis, building on the cross-linguistic generalization that belief complements have the morphosyntactic hallmarks of declarative main clauses, has been elaborated for languages with relatively rich morphosyntax. This article looks at Mandarin Chinese, whose null arguments and impoverished morphology mean that the differences necessary for syntactic bootstrapping might be much harder to detect. Our corpus analysis, however, shows that Mandarin belief complements have the profile of declarative main clauses, while desire complements do not. We also show that a computational implementation of this hypothesis can learn the right semantic contrasts between Mandarin and English belief and desire verbs, using morphosyntactic features in child-ambient speech. These results provide novel cross-linguistic support for this syntactic bootstrapping hypothesis.
Read More about Syntactic bootstrapping attitude verbs despite impoverished morphosyntax
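The abstract above mentions a computational implementation of the bootstrapping hypothesis. As a rough illustration only (not the paper's actual model), the core idea can be sketched as a classifier that labels a verb "belief" when its complement clauses morphosyntactically resemble declarative main clauses, and "desire" otherwise. All feature names and numbers below are hypothetical.

```python
# Toy sketch (illustration only, not the paper's model): assign belief
# semantics to verbs whose complements share the morphosyntactic profile
# of declarative main clauses, and desire semantics otherwise.

# Hypothetical feature rates: fraction of clauses showing each feature.
MAIN_CLAUSE_PROFILE = {"past_tense": 0.4, "aspect_marker": 0.5, "future_modal": 0.1}

verb_profiles = {
    "think": {"past_tense": 0.35, "aspect_marker": 0.45, "future_modal": 0.1},
    "want":  {"past_tense": 0.0,  "aspect_marker": 0.05, "future_modal": 0.0},
}

def distance(p, q):
    """Sum of absolute differences between two feature profiles."""
    return sum(abs(p[f] - q[f]) for f in MAIN_CLAUSE_PROFILE)

def classify(profile, threshold=0.5):
    """Belief if the complement profile is close to the main-clause profile."""
    return "belief" if distance(profile, MAIN_CLAUSE_PROFILE) < threshold else "desire"

labels = {verb: classify(profile) for verb, profile in verb_profiles.items()}
print(labels)  # think patterns with main clauses; want does not
```

The point of the sketch is only that a distance over morphosyntactic features suffices to separate the two verb classes, even when, as in Mandarin, individual cues are sparse.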
On the Acquisition of Attitude Verbs
How children use linguistic and conversational context to learn verbs that describe unobservable mental states.
Attitude verbs, such as think, want, and know, describe internal mental states that leave few cues as to their meanings in the physical world. Consequently, their acquisition requires learners to draw from indirect evidence stemming from the linguistic and conversational contexts in which they occur. This provides us a unique opportunity to probe the linguistic and cognitive abilities that children deploy in acquiring these words. Through a few case studies, we show how children make use of syntactic and pragmatic cues to figure out attitude verb meanings and how their successes, and even their mistakes, reveal remarkable conceptual, linguistic, and pragmatic sophistication.
The Power of Ignoring: Filtering Input for Argument Structure Acquisition
How to avoid learning from misleading data by identifying a filter without knowing what to filter.
Learning in any domain depends on how the data for learning are represented. In the domain of language acquisition, children’s representations of the speech they hear determine what generalizations they can draw about their target grammar. But these input representations change over development as a function of children’s developing linguistic knowledge, and may be incomplete or inaccurate when children lack the knowledge to parse their input veridically. How does learning succeed in the face of potentially misleading data? We address this issue using the case study of “non-basic” clauses in verb learning. A young infant hearing What did Amy fix? might not recognize that what stands in for the direct object of fix, and might think that fix is occurring without a direct object. We follow a previous proposal that children might filter nonbasic clauses out of the data for learning verb argument structure, but offer a new approach. Instead of assuming that children identify the data to filter in advance, we demonstrate computationally that it is possible for learners to infer a filter on their input without knowing which clauses are nonbasic. We instantiate a learner that considers the possibility that it misparses some of the sentences it hears, and learns to filter out those parsing errors in order to correctly infer transitivity for the majority of 50 frequent verbs in child-directed speech. Our learner offers a novel solution to the problem of learning from immature input representations: Learners may be able to avoid drawing faulty inferences from misleading data by identifying a filter on their input, without knowing in advance what needs to be filtered.
Read More about The Power of Ignoring: Filtering Input for Argument Structure Acquisition
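The filtering idea in the abstract above can be illustrated with a toy model (my own sketch, not the paper's implementation): a learner jointly fits a global "non-basic clause" rate and per-verb transitivity, so that object-less uses of transitive verbs get explained away as misparses rather than as evidence of intransitivity. The counts and noise rate below are hypothetical.

```python
# Toy sketch (illustration only): jointly infer a global filter rate and
# per-verb transitivity, without knowing in advance which clauses to filter.
import math

# Hypothetical counts of (clauses with a direct object, clauses without one).
# Wh-questions like "What did Amy fix?" are counted as object-less even for
# transitive verbs -- the potentially misleading data.
counts = {
    "fix":   (80, 20),
    "break": (75, 25),
    "sleep": (0, 100),
    "laugh": (1, 99),
}

def loglik(n_obj, n_no, eps):
    """Best class log-likelihood for one verb given filter rate eps.
    Transitive: object appears unless the clause is non-basic (prob eps).
    Intransitive: an object appears only as noise (small fixed rate)."""
    noise = 0.01
    trans = n_obj * math.log(1 - eps) + n_no * math.log(eps)
    intrans = n_obj * math.log(noise) + n_no * math.log(1 - noise)
    return max(trans, intrans), ("transitive" if trans > intrans else "intransitive")

# Grid-search the filter rate that best explains all verbs at once.
best_eps, best_total = None, -math.inf
for step in range(1, 100):
    eps = step / 100
    total = sum(loglik(n_obj, n_no, eps)[0] for n_obj, n_no in counts.values())
    if total > best_total:
        best_eps, best_total = eps, total

classes = {v: loglik(n_obj, n_no, best_eps)[1] for v, (n_obj, n_no) in counts.items()}
print(best_eps, classes)
```

Note how the inferred filter rate lets the learner keep "fix" and "break" transitive despite their object-less uses, while "sleep" and "laugh" remain intransitive because the noise model fits them better.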
Eighteen-month-old infants represent nonlocal syntactic dependencies
Evidence that 18-month-olds already represent filler-gap dependencies.
The human ability to produce and understand an indefinite number of sentences is driven by syntax, a cognitive system that can combine a finite number of primitive linguistic elements to build arbitrarily complex expressions. The expressive power of syntax comes in part from its ability to encode potentially unbounded dependencies over abstract structural configurations. How does such a system develop in human minds? We show that 18-mo-old infants are capable of representing abstract nonlocal dependencies, suggesting that a core property of syntax emerges early in development. Our test case is English wh-questions, in which a fronted wh-phrase can act as the argument of a verb at a distance (e.g., What did the chef burn?). Whereas prior work has focused on infants’ interpretations of these questions, we introduce a test to probe their underlying syntactic representations, independent of meaning. We ask when infants know that an object wh-phrase and a local object of a verb cannot co-occur because they both express the same argument relation (e.g., *What did the chef burn the pizza). We find that 1) 18-mo-olds demonstrate awareness of this complementary distribution pattern and thus represent the nonlocal grammatical dependency between the wh-phrase and the verb, but 2) younger infants do not. These results suggest that the second year of life is a period of active syntactic development, during which the computational capacities for representing nonlocal syntactic dependencies become evident.
Read More about Eighteen-month-old infants represent nonlocal syntactic dependencies
The mental representation of universal quantifiers
On the psychological representations that give the meanings of "every" and "each".
A sentence like every circle is blue might be understood in terms of individuals and their properties (e.g., for each thing that is a circle, it is blue) or in terms of a relation between groups (e.g., the blue things include the circles). Relatedly, theorists can specify the contents of universally quantified sentences in first-order or second-order terms. We offer new evidence that this logical first-order vs. second-order distinction corresponds to a psychologically robust individual vs. group distinction that has behavioral repercussions. Participants were shown displays of dots and asked to evaluate sentences with each, every, or all combined with a predicate (e.g., big dot). We find that participants are better at estimating how many things the predicate applied to after evaluating sentences in which universal quantification is indicated with every or all, as opposed to each. We argue that every and all are understood in second-order terms that encourage group representation, while each is understood in first-order terms that encourage individual representation. Since the sentences that participants evaluate are truth-conditionally equivalent, our results also bear on questions concerning how meanings are related to truth-conditions.
Read More about The mental representation of universal quantifiers
Linguistic meanings as cognitive instructions
"More" and "most" do not encode the same sorts of comparison.
Natural languages like English connect pronunciations with meanings. Linguistic pronunciations can be described in ways that relate them to our motor system (e.g., to the movement of our lips and tongue). But how do linguistic meanings relate to our nonlinguistic cognitive systems? As a case study, we defend an explicit proposal about the meaning of most by comparing it to the closely related more: whereas more expresses a comparison between two independent subsets, most expresses a subset–superset comparison. Six experiments with adults and children demonstrate that these subtle differences between their meanings influence how participants organize and interrogate their visual world. In otherwise identical situations, changing the word from most to more affects preferences for picture–sentence matching (experiments 1–2), scene creation (experiments 3–4), memory for visual features (experiment 5), and accuracy on speeded truth judgments (experiment 6). These effects support the idea that the meanings of more and most are mental representations that provide detailed instructions to conceptual systems.
Read More about Linguistic meanings as cognitive instructions
Japanese children's knowledge of the locality of "zibun" and "kare"
Initial errors in the acquisition of the Japanese local- or long-distance anaphor "zibun."
Although the Japanese reflexive zibun can be bound both locally and across clause boundaries, the third-person pronoun kare cannot take a local antecedent. These are properties that children need to learn about their language, but we show that the direct evidence of the binding possibilities of zibun is sparse and the evidence of kare is absent in speech to children, leading us to ask about children’s knowledge. We show that children, unlike adults, incorrectly reject the long-distance antecedent for zibun, and that while they can access this antecedent for the non-local pronoun kare, they consistently reject the local antecedent for this pronoun. These results suggest that children’s lack of matrix readings for zibun is due not to their understanding of discourse context but to the properties of their language understanding.
Read More about Japanese children's knowledge of the locality of "zibun" and "kare"
Null Objects in Korean: Experimental Evidence for the Argument Ellipsis Analysis
Experimental evidence supports an analysis of Null Object constructions in Korean as instances of argument ellipsis.
Null object (NO) constructions in Korean and Japanese have received different accounts: as (a) argument ellipsis (Oku 1998, S. Kim 1999, Saito 2007, Sakamoto 2015), (b) VP-ellipsis after verb raising (Otani and Whitman 1991, Funakoshi 2016), or (c) instances of base-generated pro (Park 1997, Hoji 1998, 2003). We report results from two experiments supporting the argument ellipsis analysis for Korean. Experiment 1 builds on K.-M. Kim and Han’s (2016) finding of interspeaker variation in whether the pronoun ku can be bound by a quantifier. Results showed that a speaker’s acceptance of quantifier-bound ku positively correlates with acceptance of sloppy readings in NO sentences. We argue that an ellipsis account, in which the NO site contains internal structure hosting the pronoun, accounts for this correlation. Experiment 2, testing the recovery of adverbials in NO sentences, showed that only the object (not the adverb) can be recovered in the NO site, excluding the possibility of VP-ellipsis. Taken together, our findings suggest that NOs result from argument ellipsis in Korean.
Read More about Null Objects in Korean: Experimental Evidence for the Argument Ellipsis Analysis
Hope for syntactic bootstrapping
Some mental state verbs take a finite clause as their object, while others take an infinitive, and the two groups differ reliably in meaning. Remarkably, children can use this correlation to narrow down the meaning of an unfamiliar verb.
We explore children’s use of syntactic distribution in the acquisition of attitude verbs, such as think, want, and hope. Because attitude verbs refer to concepts that are opaque to observation but have syntactic distributions predictive of semantic properties, we hypothesize that syntax may serve as an important cue to learning their meanings. Using a novel methodology, we replicate previous literature showing an asymmetry between acquisition of think and want, and we additionally demonstrate that interpretation of a less frequent attitude verb, hope, patterns with type of syntactic complement. This supports the view that children treat syntactic frame as informative about an attitude verb’s meaning.
Filler-gap dependency comprehension at 15 months: The role of vocabulary
New evidence from preferential looking suggests that 15-month-olds can correctly understand wh-questions and relative clauses under certain experimental conditions, but perhaps only by noticing that a verb is missing an expected dependent.
15-month-olds behave as if they comprehend filler-gap dependencies such as wh-questions and relative clauses. On one hypothesis, this success does not reflect adult-like representations but rather a “gap-driven” interpretation heuristic based on verb knowledge. Infants who know that feed is transitive may notice that a predicted direct object is missing in Which monkey did the frog feed __? and then search the display for the animal that got fed. This gap-driven account predicts that 15-month-olds will perform accurately only if they know enough verbs to deploy this interpretation heuristic; therefore, performance should depend on vocabulary. We test this prediction in a preferential looking task and find corroborating evidence: Only 15-month-olds with higher vocabulary behave as if they comprehend wh-questions and relative clauses. This result reproduces the previous finding that 15-month-olds can identify the right answer for wh-questions and relative clauses under certain experimental contexts, and is moreover consistent with the gap-driven heuristic account for this behavior.
Read More about Filler-gap dependency comprehension at 15 months: The role of vocabulary
Learning, memory and syntactic bootstrapping: A meditation
Do children learning words rely on memories for where they have heard the word before? Jeff Lidz argues memory of syntactic context plays a larger role than memory for referential context.
Read More about Learning, memory and syntactic bootstrapping: A meditation
Prosody and Function Words Cue the Acquisition of Word Meanings in 18-Month-Old Infants
18-month-old infants use prosody and function words to recover the syntactic structure of a sentence, which in turn constrains the possible meanings of novel words the sentence contains.
Language acquisition presents a formidable task for infants, for whom word learning is a crucial yet challenging step. Syntax (the rules for combining words into sentences) has been robustly shown to be a cue to word meaning. But how can infants access syntactic information when they are still acquiring the meanings of words? We investigated the contribution of two cues that may help infants break into the syntax and give a boost to their lexical acquisition: phrasal prosody (speech melody) and function words, both of which are accessible early in life and correlate with syntactic structure in the world’s languages. We show that 18-month-old infants use prosody and function words to recover sentences’ syntactic structure, which in turn constrains the possible meanings of novel words: Participants (N = 48 in each of two experiments) interpreted a novel word as referring to either an object or an action, given its position within the prosodic-syntactic structure of sentences.
The importance of input representations
Learning from data is not incompatible with approaches that attribute rich initial linguistic knowledge to the learner. On the contrary, such approaches must still account for how knowledge guides learners in using their data to infer a grammar.
Language learners use the data in their environment in order to infer the grammatical system that produced that data. Yang (2018) makes the important point that this process requires integrating learners’ experiences with their current linguistic knowledge. A complete theory of language acquisition must explain how learners leverage their developing knowledge in order to draw further inferences on the basis of new data. As Yang and others have argued, the fact that input plays a role in learning is orthogonal to the question of whether language acquisition is primarily knowledge-driven or data-driven (J. A. Fodor, 1966; Lidz & Gagliardi, 2015; Lightfoot, 1991; Wexler & Culicover, 1980). Learning from data is not incompatible with approaches that attribute rich initial linguistic knowledge to the learner. On the contrary, such approaches must still account for how knowledge guides learners in using their data to infer a grammar.
The explanatory power of linguistic theory
Jeff Lidz details evidence for the Predicate Internal Subject Hypothesis, and shows how its abstractness supports the "considerable sophistication" that the Chomskyan tradition imputes to the child learner.
The scope of children’s scope: Representation, parsing and learning
What do young children know about quantifier scope?
Read More about The scope of children’s scope: Representation, parsing and learning
Similarity-based interference and the acquisition of adjunct control
Kids sometimes make errors in interpreting the understood subject of an adjunct predicate, like "before leaving." Julie Gerard argues that these errors may result, not from a non-adultlike grammar, but from mistakes in sentence processing.
Read More about Similarity-based interference and the acquisition of adjunct control
The role of incremental parsing in syntactically conditioned word learning
The girl is tapping with the tig. If you don't know what "tig" means, you'll look to what the girl is using to tap. And so will even very young children.
Read More about The role of incremental parsing in syntactically conditioned word learning
Verb learning in 14- and 18-month-old English-learning infants
Ordinarily, verbs in English label events while nouns do not. Angela He and Jeff Lidz show that even 18-month-olds can use this correlation to infer the meanings of novel words, given the understanding that "is _ ing" is a context for verbs.
Read More about Verb learning in 14- and 18-month-old English-learning infants
Language Acquisition
A handbook chapter on first language acquisition, focused on the independent contributions of experience, domain-specific biases, prior knowledge and extralinguistic cognition in shaping how a grammar grows inside the mind of a child.
On how verification tasks are related to verification procedures: A reply to Kotek et al.
How do we mentally represent the meaning of "most"? Here Tim Hunter clarifies the goals of Jeff Lidz and Paul Pietroski's project to answer this question, in response to misunderstandings.
The Oxford Handbook of Developmental Linguistics
An essential compendium of contemporary research in language acquisition.
NPI licensing and beyond: Children's knowledge of the semantics of "any"
Visitor Lyn Tieu and mentor Jeff Lidz investigate preschoolers' understanding of negative polarity items like "any".
Read More about NPI licensing and beyond: Children's knowledge of the semantics of "any"
Discontinuous Development in the Acquisition of Filler-Gap Dependencies: Evidence from 15- and 20-Month-Olds
15-month-olds are able to understand relative clauses and wh-questions, but not by way of correctly representing their grammar.
Endogenous sources of variation in language acquisition
Jeff Lidz and collaborators investigate inter-speaker variation in the grammar of quantifier scope in Korean.
Read More about Endogenous sources of variation in language acquisition
Syntactic and lexical inference in the acquisition of novel superlatives
Even four-year-olds are biased to think that determiners express relations between quantities, but lack the same bias for adjectives. How do they arrive at this bias?
Read More about Syntactic and lexical inference in the acquisition of novel superlatives
Expanding our Reach and Theirs: When Linguists go to High School
A report on outreach to local schools by the community of language scientists at UMCP.
Read More about Expanding our Reach and Theirs: When Linguists go to High School
How Nature Meets Nurture: Universal Grammar and Statistical Learning
Children acquire grammars on the basis of statistical information, interpreted through a system of linguistic representation that is substantially innate. Jeff Lidz and Annie Gagliardi propose a model of the process.
Read More about How Nature Meets Nurture: Universal Grammar and Statistical Learning
Linking parser development to acquisition of syntactic knowledge
How does a child's acquisition of a grammar relate to development in their ability to parse and understand sentences in real time?
Statistical Insensitivity in the Acquisition of Tsez Noun Classes
How do children acquire noun classes? Annie Gagliardi and Jeff Lidz show that children acquiring Tsez are biased to use phonological over semantic cues, despite a statistical asymmetry in the other direction.
Is she patting Katie? Constraints on pronominal reference in 30-month-olds
Preferential looking studies show that, already at 30 months, children's understanding of pronouns in "Katie patted herself" and "She patted Katie" is adult-like.
Read More about Is she patting Katie? Constraints on pronominal reference in 30-month-olds
Parameters in Language Acquisition
"Parameters" are abstract features of grammar that govern many different observable structures and may vary across languages. Lisa Pearl and Jeff Lidz explore how this notion is used in theories of typology and acquisition.
Conservativity and Learnability of Determiners
Tim Hunter and Jeff Lidz find evidence that 4- to 5-year-olds expect determiner meanings to be Conservative.
Read More about Conservativity and Learnability of Determiners
Selective learning in the acquisition of Kannada ditransitives
Even young children have a highly abstract representation of ditransitive syntax.
Competence, Performance and the Locality of Quantifier Raising: Evidence from 4-year-old Children
Can quantifiers be interpreted outside of their own clause? Do the observed constraints have a grammatical source? Kristen Syrett and Jeff Lidz revisit these questions with experimental studies on the interpretation of ACD by both adults and children.
Language Learning and Language Universals
How do patterns in the environment interact with our innate capacities to produce our first languages?
Restrictions on the Meaning of Determiners: Typological Generalisations and Learnability
Are nonconservative meanings for determiners unlearnable? And what about a determiner that means 'less than half'?
Priming of abstract logical representations in 4-year-olds
"Every horse did not jump over the fence." Preschoolers tend to hear this as meaning that none did. But the preference is not grammatical, as it can be reduced either by priming or changes to the context.
Though preschoolers in certain experimental contexts strongly prefer to interpret ambiguous sentences containing quantified NPs and negation on the basis of surface syntax (e.g., Musolino’s (1998) “observation of isomorphism”), contextual manipulations can lead to more adult-like behavior. But is isomorphism a purely pragmatic phenomenon, as recently proposed? In Experiment 1, we begin by isolating the contextual factor responsible for children’s improvement in Musolino and Lidz (2006). We then demonstrate in Experiment 2 that this factor can be used to prime inverse scope interpretations. To remove pragmatics from the equation altogether, we show in Experiment 3 that the same effect can be achieved via semantic priming. Our results represent the first clear evidence for priming of the abstract logico-syntactic structures underlying these interpretations and, thus, highlight the importance of language processing alongside pragmatic reasoning during children’s linguistic development.