
Research

Research at our top-ranked department spans syntax, semantics, phonology, language acquisition, computational linguistics, psycholinguistics and neurolinguistics. 

Connections between our core competencies are strong, with theoretical, experimental and computational work typically pursued in tandem.

A network of collaboration at all levels sustains a research climate that is both vigorous and friendly. Here new ideas develop in conversation, stimulated by the steady activity of our labs and research groups, frequent student meetings with faculty, regular talks by local and invited scholars, and collaborations with the broader University of Maryland language science community, the largest and most integrated language science research community in North America.


A unified account of categorical effects in phonetic perception

A statistical model that explains both the strong categorical effects in perception of consonants, and the very weak effects in perception of vowels.

Linguistics

Contributor(s): Naomi Feldman
Non-ARHU Contributor(s): Yakov Kronrod, Emily Coppess
Categorical effects are found across speech sound categories, with the degree of these effects ranging from extremely strong categorical perception in consonants to nearly continuous perception in vowels. We show that both strong and weak categorical effects can be captured by a unified model. We treat speech perception as a statistical inference problem, assuming that listeners use their knowledge of categories as well as the acoustics of the signal to infer the intended productions of the speaker. Simulations show that the model provides close fits to empirical data, unifying past findings of categorical effects in consonants and vowels and capturing differences in the degree of categorical effects through a single parameter.
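The model's central computation can be sketched in a few lines. In this illustrative Python sketch (the category means, variances, and noise values are invented for illustration, not the paper's fitted parameters), a listener infers the intended production as a posterior-weighted average that shrinks the percept toward category means; the amount of shrinkage, and hence the strength of the categorical effect, is governed by a single perceptual-noise parameter:

```python
import numpy as np

def perceived(S, mus, cat_var, noise_var):
    """Listener's optimal estimate of the intended production, given percept S."""
    total = cat_var + noise_var
    # Posterior probability of each category given S: p(c|S) ~ N(S; mu_c, total)
    lik = np.exp(-(S - mus) ** 2 / (2 * total))
    post = lik / lik.sum()
    # Under each category, the estimate shrinks S toward the category mean,
    # in proportion to how noisy perception is:
    est = (cat_var * S + noise_var * mus) / total
    return float(np.dot(post, est))

mus = np.array([0.0, 10.0])  # two category means on an acoustic dimension

# Small perceptual noise (vowel-like): weak warping toward the category mean
weak = perceived(3.0, mus, cat_var=4.0, noise_var=0.5)
# Large perceptual noise (consonant-like): strong warping
strong = perceived(3.0, mus, cat_var=4.0, noise_var=4.0)
```

With the larger noise setting, the same stimulus is pulled further toward its category mean, mimicking consonant-like categorical perception, while a stimulus at the category boundary is not warped at all.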


On how verification tasks are related to verification procedures: A reply to Kotek et al.

How do we mentally represent the meaning of "most"? Here Tim Hunter clarifies the goals of Jeff Lidz and Paul Pietroski's project to answer this question, in response to misunderstandings.

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s): Tim Hunter, Darko Odic, Alexis Wellwood
Kotek et al. (Nat Lang Semant 23: 119–156, 2015) argue on the basis of novel experimental evidence that sentences like ‘Most of the dots are blue’ are ambiguous, i.e. have two distinct truth conditions. Kotek et al. furthermore suggest that when their results are taken together with those of earlier work by Lidz et al. (Nat Lang Semant 19: 227–256, 2011), the overall picture that emerges casts doubt on the conclusions that Lidz et al. drew from their earlier results. We disagree with this characterization of the relationship between the two studies. Our main aim in this reply is to clarify the relationship as we see it. In our view, Kotek et al.’s central claims are simply logically independent of those of Lidz et al.: the former concern which truth condition(s) a certain kind of sentence has, while the latter concern the procedures that speakers choose for the purposes of determining whether a particular truth condition is satisfied in various scenes. The appearance of a conflict between the two studies stems from inattention to the distinction between questions about truth conditions and questions about verification procedures.


Infant-directed speech is consistent with teaching

Why do we speak differently to infants than to adults? To help answer this question, Naomi Feldman offers a formal theory of phonetic teaching and learning.

Linguistics

Contributor(s): Naomi Feldman
Non-ARHU Contributor(s): Baxter Eaves Jr., Thomas Griffiths, Patrick Shafto
Infant-directed speech (IDS) has distinctive properties that differ from adult-directed speech (ADS). Why it has these properties -- and whether they are intended to facilitate language learning -- is a matter of contention. We argue that much of this disagreement stems from the lack of a formal, guiding theory of how phonetic categories should best be taught to infant-like learners. In the absence of such a theory, researchers have relied on intuitions about learning to guide the argument. We use a formal theory of teaching, validated through experiments in other domains, as the basis for a detailed analysis of whether IDS is well-designed for teaching phonetic categories. Using the theory, we generate ideal data for teaching phonetic categories in English. We qualitatively compare the simulated teaching data with human IDS, finding that the teaching data exhibit many features of IDS, including some that have been taken as evidence that IDS is not for teaching. The simulated data reveal potential pitfalls for experimentalists exploring the role of IDS in language learning. Focusing on different formants and phoneme sets leads to different conclusions, and the benefit of the teaching data to learners is not apparent until a sufficient number of examples have been provided. Finally, we investigate transfer of IDS to learning ADS. The teaching data improve classification of ADS data, but only for the learner they were generated to teach, not universally across all classes of learners. This research offers a theoretically grounded framework that empowers experimentalists to systematically evaluate whether IDS is for teaching.
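The logic of "data chosen for teaching" can be caricatured in a few lines. This toy Python sketch is not the paper's model (which teaches full mixtures of phonetic categories), and the category parameters are invented: a teacher picks, from a set of candidate acoustic tokens, the one that maximizes a naive learner's posterior on the intended category.

```python
import numpy as np

def best_token(candidates, mus, sd, target):
    """Pick the token the learner most confidently assigns to category `target`."""
    def posterior(x):
        # Learner's posterior over categories: equal priors, shared sd
        lik = np.exp(-(x - mus) ** 2 / (2 * sd ** 2))
        return lik[target] / lik.sum()
    return max(candidates, key=posterior)

mus = np.array([0.0, 2.0])  # two category means (made-up values)
token = best_token([-1.0, 0.0, 1.0], mus, sd=1.0, target=0)
```

Note that the winning token here lies beyond the target category's own mean, shifted away from the competing category rather than maximally prototypical; this kind of exaggeration in ideal teaching data is reminiscent of the expanded vowel space observed in IDS.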

A Direct Comparison of N400 Effects of Predictability and Incongruity in Adjective-Noun Combination

The N400 is modulated both by association and by predictability: but independently? Only slightly, Ellen Lau and her collaborators show, suggesting that its sensitivity to both does not come just from difficulty integrating a word with its prior context.

Linguistics

Contributor(s): Ellen Lau
Non-ARHU Contributor(s): Anna Namyst, Allison Fogel, Tania Delgado
Previous work has shown that the N400 ERP component is elicited by all words, whether presented in isolation or in structured contexts, and that its amplitude is modulated by semantic association and contextual predictability. What is less clear is the extent to which the N400 response is modulated by semantic incongruity when predictability is held constant. In the current study we examine N400 modulation associated with independent manipulations of predictability and congruity in an adjective-noun paradigm that allows us to precisely control predictability through corpus counts. Our results demonstrate small N400 effects of semantic congruity (yellow bag vs. innocent bag), and much more robust N400 effects of predictability (runny nose vs. dainty nose) under the same conditions. These data argue against unitary N400 theories according to which N400 effects of both predictability and incongruity reflect a common process such as degree of integration difficulty, as large N400 effects of predictability were observed in the absence of large N400 effects of incongruity. However, the data are consistent with some versions of unitary ‘facilitated access’ N400 theories, as well as multiple-generator accounts according to which the N400 can be independently modulated by facilitated conceptual/lexical access (as with predictability) and integration difficulty (as with incongruity, perhaps to a greater extent in full sentential contexts).


Negative polarity illusions and the format of hierarchical encodings in memory

"The bill that no senator endorsed will ever become a law." This is ungrammatical, but may initially seem acceptable, a 'grammatical illusion.' Here Dan Parker and Colin Phillips show how this particular type of illusion depends on timing.

Linguistics

Contributor(s): Colin Phillips
Non-ARHU Contributor(s): Dan Parker
Linguistic illusions have provided valuable insights into how we mentally navigate complex representations in memory during language comprehension. Two notable cases involve illusory licensing of agreement and negative polarity items (NPIs), where comprehenders fleetingly accept sentences with unlicensed agreement or an unlicensed NPI, but judge those same sentences as unacceptable after more reflection. Existing accounts have argued that illusions are a consequence of faulty memory access processes, and make the additional assumption that the encoding of the sentence remains fixed over time. This paper challenges the predictions made by these accounts, which assume that illusions should generalize to a broader set of structural environments and a wider range of syntactic and semantic phenomena. We show across seven reading-time and acceptability judgment experiments that NPI illusions can be reliably switched “on” and “off”, depending on the amount of time from when the potential licensor is processed until the NPI is encountered. But we also find that the same profile does not extend to agreement illusions. This contrast suggests that the mechanisms responsible for switching the NPI illusion on and off are not shared across all illusions. We argue that the contrast reflects changes over time in the encoding of the semantic/pragmatic representations that can license NPIs. Just as optical illusions have been informative about the visual system, selective linguistic illusions are informative not only about the nature of the access mechanisms, but also about the nature of the encoding mechanisms.


The Oxford Handbook of Developmental Linguistics

An essential compendium of contemporary research in language acquisition.

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s): William Snyder, Joe Pater
Publisher: Oxford University Press
1. Introduction, Jeffrey Lidz, William Snyder, and Joe Pater

Part I: The Acquisition of Sound Systems
2. The Acquisition of Phonological Inventories, Ewan Dunbar and William Idsardi
3. Phonotactics and Syllable Structure in Infant Speech Perception, Tania S. Zamuner and Viktor Kharlamov
4. Phonological Processes in Children's Production: Convergence with and Divergence from Adult Grammars, Heather Goad
5. Prosodic Phenomena: Stress, Tone, and Intonation, Mitsuhiko Ota

Part II: The Acquisition of Morphology
6. Compound Word Formation, William Snyder
7. Morpho-phonological Acquisition, Anne-Michelle Tessier
8. Processing Continuous Speech in Infancy: From Major Prosodic Units to Isolated Word Forms, Louise Goyet, Severine Millotte, Anne Christophe, and Thierry Nazzi

Part III: The Acquisition of Syntax
9. Argument Structure, Joshua Viau and Ann Bunger
10. Voice Alternations (Active, Passive, Middle), M. Teresa Guasti
11. On the Acquisition of Prepositions and Particles, Koji Sugisaki
12. A-Movement in Language Development, Misha Becker and Susannah Kirby
13. The Acquisition of Complements, Jill de Villiers and Tom Roeper
14. Acquisition of Questions, Rosalind Thornton
15. Root Infinitives in Child Language and the Structure of the Clause, John Grinstead
16. Mood Alternations, Kamil Ud Deen
17. Null Subjects, Virginia Valian
18. Case and Agreement, Paul Hagstrom
19. Acquiring Possessives, Theo Marinis

Part IV: The Acquisition of Semantics
20. Acquisition of Comparative and Degree Constructions, Kristen Syrett
21. Quantification in Child Language, Jeffrey Lidz
22. The Acquisition of Binding and Coreference, Sergio Baauw
23. Logical Connectives, Takuya Goro
24. The Expression of Genericity in Child Language, Ana T. Perez-Laroux
25. Lexical and Grammatical Aspect, Angeliek van Hout
26. Scalar Implicature, Anna Papafragou and Dimitrios Skordos

Part V: Theories of Learning
27. Computational Theories of Learning and Developmental Psycholinguistics, Jeffrey Heinz
28. Statistical Learning, Inductive Bias, and Bayesian Inference in Language Acquisition, Lisa Pearl and Sharon Goldwater
29. Computational Approaches to Parameter Setting in Generative Linguistics, William Gregory Sakas
30. Learning with Violable Constraints, Gaja Jarosz

Part VI: Atypical Populations
31. Language Development in Children with Developmental Disorders, Andrea Zukowski
32. The Genetics of Spoken Language, Jennifer Ganger
33. Phonological Disorders: Theoretical and Experimental Findings, Daniel A. Dinnsen, Jessica A. Barlow, and Judith A. Gierut

Direction matters: Event-related brain potentials reflect extra processing costs in switching from the dominant to the less dominant language

An ERP study of language-switching in Mandarin-Taiwanese bilinguals. Are the costs of switching modulated by the direction of switch? And by the semantic predictability of the word at the switch?

Linguistics

Author/Lead: Chia-Hsuan Liao
Non-ARHU Contributor(s): Shiao-Hui Chan
Language switching is common in bilingual processing, and it has been repeatedly shown to induce processing costs. However, only a handful of studies have examined such costs at the sentence level, with a limited few among them having incorporated factors extensively studied in monolingual sentence processing, such as semantic expectedness. Using the event-related potential (ERP) technique, this study aimed at exploring whether switching costs were modulated by (1) switching directions, when switching happens between languages of different dominance, and by (2) semantic expectedness, as indicated by cloze probability. Twenty-two Mandarin-Taiwanese early bilinguals, with Mandarin being their dominant and Taiwanese their non-dominant language, participated in the study. They were instructed to listen to the stimuli attentively and to perform a word memory recognition task in 20% of the trials. The results showed that switching induced an LPC effect, suggesting that switched elements were harder to integrate. More importantly, switching from the dominant to the non-dominant language demanded more effort than switching in the other direction, as reflected by the PMN (detection of an unexpected sound), the N400 (indication of lexical access difficulty) and the frontal negativity (inhibition of the pre-activated representations), revealing that the dominant language provides better prediction of the upcoming word. Also, cloze probability interacted with switching, but only at an early stage, suggesting that semantic expectedness did not enduringly modulate the switching cost. Our results generally supported predictions from the Bilingual Interactive Activation Plus model (BIA+ model, Dijkstra & van Heuven, 2002), showing that language use and sentence context can affect lexical processing in bilinguals.

On experiencers and minimality

On psych-verbs and experiencers in Brazilian Portuguese.

Linguistics

Non-ARHU Contributor(s): Carolina Petersen
This dissertation is concerned with experiencer arguments, and what they tell us about the grammar. There are two main types of experiencers I discuss: experiencers of psychological verbs and experiencers of raising constructions. I question the notion of ‘experiencers’ itself, and explore some possible accounts for the ‘psych-effects’. I argue that the ‘experiencer theta role’ is conceptually unnecessary and unsupported by syntactic evidence. ‘Experiencers’ can be reduced to different types of arguments. Taking Brazilian Portuguese as my main case study, I claim that languages may grammaticalize psychological predicates and their arguments in different ways. These verb classes exist in languages independently, and the behavior of psych-verbs can be explained by the argument structure of the verbal class they belong to. I further discuss experiencers in raising structures, and the defective intervention effects triggered by different types of experiencers (e.g., DPs, PPs, clitics, traces) in a variety of languages. I show that defective intervention is mostly predictable across languages, and there is not much variation regarding its effects. Moreover, I argue that defective intervention can be captured by a notion of minimality that requires interveners to be syntactic objects and not syntactic occurrences (a chain, and not a copy/trace). The main observation is that once a chain is no longer in the c-command domain of a probe, defective intervention is obviated, i.e., it doesn’t apply. I propose a revised version of the Minimal Link Condition (1995), in which only syntactic objects may intervene in syntactic relations, and not copies. This view of minimality can explain the core cases of defective intervention crosslinguistically.


Locality and Word Order in Active Dependency Formation in Bangla

In real-time comprehension, people are eager to relate question words like "what" to the nearest possible predicate. But is it structural or linear nearness that matters? The two possibilities can be distinguished in Bangla.

Linguistics

Contributor(s): Colin Phillips
Non-ARHU Contributor(s): Dustin A. Chacón, Mashrur Imtiaz, Shirsho Dasgupta, Sikder M. Murshed, Mina Dan
Research on filler-gap dependencies has revealed that there are constraints on possible gap sites, and that real-time sentence processing is sensitive to these constraints. This work has shown that comprehenders have preferences for potential gap sites, and immediately detect when these preferences are not met. However, neither the mechanisms that select preferred gap sites nor the mechanisms used to detect whether these preferences are met are well-understood. In this paper, we report on three experiments in Bangla, a language in which gaps may occur in either a pre-verbal embedded clause or a post-verbal embedded clause. This word order variation allows us to manipulate whether the first gap linearly available is contained in the same clause as the filler, which allows us to dissociate structural locality from linear locality. In Experiment 1, an untimed ambiguity resolution task, we found a global bias to resolve a filler-gap dependency with the first gap linearly available, regardless of structural hierarchy. In Experiments 2 and 3, which use the filled-gap paradigm, we found sensitivity to disruption only when the blocked gap site is both structurally and linearly local, i.e., the filler and the gap site are contained in the same clause. This suggests that comprehenders may not show sensitivity to the disruption of all preferred gap resolutions.


Parsing, generation and grammar

Shota Momma on sentence planning and production, arguing that the same structure-building processes are used in production as in comprehension.

Linguistics

Non-ARHU Contributor(s): Shota Momma
Humans use their grammatical knowledge in more than one way. On one hand, they use it to understand what others say. On the other hand, they use it to say what they want to convey to others (or to themselves). In either case, they need to assemble the structure of sentences in a systematic fashion, in accordance with the grammar of their language. Despite the fact that the structures that comprehenders and speakers assemble are systematic in an identical fashion (i.e., obey the same grammatical constraints), the two ‘modes’ of assembling sentence structures might or might not be performed by the same cognitive mechanisms. Currently, the field of psycholinguistics implicitly adopts the position that they are supported by different cognitive mechanisms, as evident from the fact that most psycholinguistic models seek to explain either comprehension or production phenomena. The potential existence of two independent cognitive systems underlying linguistic performance doubles the problem of linking the theory of linguistic knowledge and the theory of linguistic performance, making the integration of linguistics and psycholinguistics harder. This thesis thus aims to unify the structure building system in comprehension, i.e., the parser, and the structure building system in production, i.e., the generator, into one, so that the linking theory between knowledge and performance can also be unified into one. I will discuss and unify both existing and new data pertaining to how structures are assembled in understanding and speaking, and attempt to show that the unification between parsing and generation is at least a plausible research enterprise. In Chapter 1, I will discuss the previous and current views on how parsing and generation are related to each other. I will outline the challenges for the current view that the parser and the generator are the same cognitive mechanism. This single system view is discussed and evaluated in the rest of the chapters. 
In Chapter 2, I will present new experimental evidence suggesting that the grain size of the pre-compiled structural units (henceforth simply structural units) is rather small, contrary to some models of sentence production. In particular, I will show that the internal structure of the verb phrase in a ditransitive sentence (e.g., The chef is donating the book to the monk) is not specified at the onset of speech, but is specified before the first internal argument (the book) needs to be uttered. I will also show that this timing of structural processes with respect to the verb phrase structure is earlier than the lexical processes of verb internal arguments. These two results in concert show that the size of structure building units in sentence production is rather small, contrary to some models of sentence production, yet structural processes still precede lexical processes. I argue that this view of generation resembles the widely accepted model of parsing that utilizes both top-down and bottom-up structure building procedures. In Chapter 3, I will present new experimental evidence suggesting that the structural representation strongly constrains the subsequent lexical processes. In particular, I will show that conceptually similar lexical items interfere with each other only when they share the same syntactic category in sentence production. The mechanism that I call syntactic gating will be proposed, and this mechanism characterizes how the structural and lexical processes interact in generation. I will present two event-related potential (ERP) experiments that show that lexical retrieval in (predictive) comprehension is also constrained by syntactic categories. I will argue that the syntactic gating mechanism is operative both in parsing and generation, and that the interaction between structural and lexical processes in both parsing and generation can be characterized in the same fashion. 
In Chapter 4, I will present a series of experiments examining the timing at which verbs’ lexical representations are planned in sentence production. It will be shown that verbs are planned before the articulation of their internal arguments, regardless of the target language (Japanese or English) and regardless of the sentence type (active object-initial sentences in Japanese, passive sentences in English, and unaccusative sentences in English). I will discuss how this result sheds light on the notion of incrementality in generation. In Chapter 5, I will synthesize the experimental findings presented in this thesis and in previous research to address the challenges to the single system view I outlined in Chapter 1. I will then conclude by presenting a preliminary single system model that can potentially capture both the key sentence comprehension and sentence production data without assuming distinct mechanisms for each.
