
Research

Research at our top-ranked department spans syntax, semantics, phonology, language acquisition, computational linguistics, psycholinguistics and neurolinguistics. 

Connections between our core competencies are strong, with theoretical, experimental and computational work typically pursued in tandem.

A network of collaboration at all levels sustains a research climate that is both vigorous and friendly. Here new ideas develop in conversation, stimulated by the steady activity of our labs and research groups, frequent student meetings with faculty, regular talks by local and invited scholars and collaborations with the broader University of Maryland language science community, the largest and most integrated language science research community in North America.


The Interpretation of Plural Morphology and (Non-)Obligatory Number Marking: An Argument from Artificial Language Learning

An artificial language study on the meaning of plural morphology, and how this might be learned.

Linguistics

Contributor(s): Adam Liter
Non-ARHU Contributor(s): Christopher Heffner, Cristina Schmitt

Published in: Language Learning and Development 13(4)

We present an artificial language experiment investigating (i) how speakers of languages such as English, with two-way obligatory distinctions between singular and plural, learn a system where singular and plural are only optionally marked, and (ii) how learners extend their knowledge of the plural morpheme when under the scope of negation, without explicit training. Production and comprehension results suggest that speakers of English did learn a system with only optional marking of number. Additionally, subjects did not accept an inclusive (“one or more than one”) interpretation of the plural when under the scope of negation, as in their native language, but rather assigned it an exclusive (“more than one”) interpretation. The results are consistent with the hypothesis that the meaning assigned to plural morphology is sensitive to the architecture of the system. In a binary number system with obligatory number marking, plural morphology can sometimes receive an inclusive interpretation. However, in a system where plural marking is never obligatory, plural morphology has an exclusive interpretation.

Learning the morphology of a language is more than just learning morphological forms and their distributions. It also entails learning how these forms partition the semantic space and how they are organized with respect to each other in different contexts. An interesting instance of morphological learning is the acquisition of number. Number systems differ cross-linguistically along a series of dimensions, beyond the simple surface distinctions in the numbers and types of morphological forms in play within a language (cf. Corbett, 2001). In some languages, number information is an obligatory feature of a noun phrase; in other languages, number is only optionally present; and in yet others, number information is obligatory in some types of noun phrases but not others. The interpretation of different morphemes also varies cross-linguistically. In some languages, plural morphology can be interpreted as meaning “one or more than one” (e.g., English); but in other languages, the interpretation is always “more than one” (e.g., Korean).

But how do these different interpretations arise? One possibility is that the interpretations of the different number morphemes within a linguistic system are simply the result of arbitrary pairings between meanings and forms. An alternative possibility is that their interpretations are a necessary consequence of the interaction between the learning system and properties of the input, such as the number of distinctions between number morphemes and the obligatoriness (or lack thereof) of number morphology. Unfortunately, it is difficult to distinguish between these two possibilities in a natural setting, since languages are never minimal pairs of one another. This makes it impossible to make proper comparisons without results becoming muddled by other differences that must also be learned and may interfere with number.

In this article, our goal is to contribute to the debate about how learners decide the underlying meaning of particular pieces of number morphology. To that effect, we constructed an artificial language to examine how speakers of English, which obligatorily encodes number in the noun phrase, learn a language where number is only optionally marked on the noun phrase. We ask two questions. First, can speakers of a language that makes an obligatory distinction between singular and plural learn a system in which number is only optionally encoded without regularizing the system to something more like their native language? Second, if they can learn such a system, does the morphological partition of the new language shift how the plural morphology of the system is interpreted? Specifically, since English is a language in which the plural can sometimes be interpreted as meaning “one or more than one” (more on this below), will learners retain the possibility of this interpretation or not?

We find that English speakers do learn a system with optional number marking and are able to treat number-neutral noun phrases as compatible with both plural and singular interpretations. Furthermore, the results are consistent with the hypothesis that learners do not treat the plural marker in this language as meaning “one or more than one”, as they do in some contexts in English, but rather interpret it as meaning “more than one”. Taken together, these results suggest that the differences in interpretation of plural morphemes cross-linguistically may depend on properties of the available alternatives in the input and/or the learning system, and are therefore not just an arbitrary pairing of form and meaning.

This article proceeds as follows. First, we describe some properties of English-like number systems and other types of number systems that served as models for the artificial language created. Next, we discuss different hypotheses and their predictions for our experiment. The next section presents the study and results, and the last section concludes with a general discussion.


Memory retrieval in parsing and interpretation

Errors in number agreement may initially seem acceptable. This reveals the structure of memory for linguistic context, argues Zoe Schlueter in this dissertation.

Linguistics

Non-ARHU Contributor(s): Zoe Schlueter
This dissertation explores the relationship between the parser and the grammar in error-driven retrieval by examining the mechanism underlying the illusory licensing of subject-verb agreement violations (‘agreement attraction’). Previous work motivates a two-stage model of agreement attraction in which the parser predicts the verb’s number and engages in retrieval of the agreement controller only when it detects a mismatch between the prediction and the bottom-up input (Wagers, Lau & Phillips, 2009; Lago, Shalom, Sigman, Lau & Phillips, 2015). It is the second stage of retrieval and feature-checking that is thought to be error-prone, resulting in agreement attraction. Here we investigate two central questions about the processing system that underlies this profile.

First, to what extent does error-driven retrieval end up altering the structural representation of the sentence, as compared to an independent feature-checking process that can result in global inconsistencies? Using a novel dual-task paradigm combining self-paced reading and a speeded forced choice task, we show that comprehenders do not misinterpret the attractor as the subject in agreement attraction. This indicates that the illusory licensing reflects a low-level number rechecking process that does not lead to restructuring.

Second, what is the relationship between the information guiding the retrieval process and the terms that define agreement in the grammar? In a series of speeded acceptability judgment and self-paced reading experiments, we demonstrate that the number cue in error-driven retrieval is as abstract as the terms in which agreement is stated in the grammar, and that semantic features not relevant to the dependency in the grammar are not used to guide retrieval of the agreement controller. However, data from advanced Chinese learners of English suggest that it is not the case that all features relevant to the grammatical dependency will necessarily be used as retrieval cues.

Taken together, these results suggest that the feature-checking repair mechanism follows grammatical principles but can result in a final structural representation of the sentence that is inconsistent with the grammar.


The role of input in discovering presupposition triggers: Figuring out what everybody already knew

How do children learn that uses of "know" generally take for granted that what is known is true? Rachel Dudley finds a surprising answer.

Linguistics

Non-ARHU Contributor(s): Rachel Dudley
This dissertation focuses on when and how children learn about the meanings of the propositional attitude verbs "know" and "think". "Know" and "think" both express belief, but they differ in their veridicality: "think" is non-veridical and can report a false belief, while "know" is veridical and can only report true beliefs. Furthermore, the verbs differ in their factivity: uses of "x knows p", but not uses of "x thinks p", typically presuppose the truth of "p", because "know" is factive and "think" is not. How do children figure out these subtle differences between the verbs, given that they are so similar in the grand scheme of word meaning? Assuming that this consists in figuring out which of an existing store of mental state concepts (such as belief) to map to each word, this dissertation highlights the role of children's linguistic experiences, or input, with the verbs in homing in on an adult-like understanding of them.

To address the when question, this dissertation uses behavioral experiments to test children's understanding of factivity, and shows that some children can figure out the contrast by their third birthday, while other children still have not figured it out by 4.5 years of age. This is earlier than was once thought, but it means that there is a lot of individual variation in age of acquisition that must be explained. It also means that children do not just get better at the contrast as they get older, which leaves room for us to ask what role linguistic experiences play, if we can explore how these experiences are related to the variation in when children uncover the contrast.

In order to address the how question, the dissertation lays out potential routes to uncovering the contrast: via observing direct consequences of it, or via syntactic and pragmatic bootstrapping approaches which exploit indirect consequences of the contrast. After laying out these potential routes, the dissertation uses corpus analyses to provide arguments for which routes are most likely, given children's actual experiences with the verbs. In particular, trying to track the direct consequences of the contrast will not get the learner very far. But alternative routes that rely on indirect consequences, such as the syntactic distributions of the verbs or their discourse functions, provide clear signal about the underlying contrast. Finally, the dissertation discusses the consequences of this picture for the semantic representation of "know" and "think", as well as the linguistic, conceptual, and socio-pragmatic competence that children must bring to the table in order to uncover the contrast.


When does ellipsis occur, and what is elided?

A new theory on the syntax of ellipsis.

Linguistics

Non-ARHU Contributor(s): Dongwoo Park
This dissertation is concerned with how elliptical sentences are generated. Specifically, I investigate when and in what module ellipsis occurs, and what is elided as a result of ellipsis. With regard to the first research question, I propose that XP ellipsis occurs in narrow syntax, as soon as all the featural requirements of the licensor of XP ellipsis are satisfied during the derivation, rather than in other modules. An important consequence of this proposal is that the point of XP ellipsis can vary depending on the derivational point at which all the featural requirements of the licensor are satisfied in narrow syntax.

Concerning the second research question, I suggest that ellipsis is a syntactic operation that eliminates the phonological feature matrices of lexical items inside the ellipsis site, preserving the formal feature matrices. Segmental content (i.e. phonological features) is inserted into the phonological feature matrices when lexical items are sent to PF after Spell-out. This insertion does not apply to lexical items whose phonological feature matrices have been eliminated, since there is no appropriate venue into which segmental content can be inserted. Thus, they are not pronounced. This implies that even though narrow syntax cannot look into the information of the segmental content inside the phonological feature matrices, it can make reference to the phonological feature matrices in lexical items. This proposal is supported by the fact that elements whose phonological feature matrices have been eliminated can take part in further formal operations that occur after ellipsis, since they still contain formal features. However, unlike other lexical items, elided interrogative wh-phrases do not seem to participate in formal operations occurring after ellipsis. In order to resolve this puzzle, I suggest a prosodic requirement that questions must obey, adopting and modifying Richards’ (2016) Contiguity Theory.

Standard English copular phrase ellipsis is mainly used to develop the present theory of ellipsis. Cross-linguistic evidence from Indian Vernacular English, Belfast English, Korean, Farsi, British English, and Dutch is also provided to argue that the present theory of ellipsis is not restricted to English.


Computational phonology today

Bill Idsardi and Jeff Heinz highlight important aspects of today's computational phonology.

Linguistics

Contributor(s): William Idsardi
Broadly speaking, computational phonology encompasses a variety of techniques and goals (see Daland 2014 for a survey). In this introduction we would like to highlight three aspects of current work in computational phonology: data science and model comparison, modelling phonological phenomena using computational simulations, and characterising the computational nature of phonological patterning with theorems and proofs. Papers in this thematic issue illustrate all three of these trends, and sometimes more than one of them. The way we group them in this introduction is meant to highlight the similarities between them, not to diminish the importance of their other contributions. As we discuss these areas, we also highlight important conceptual issues that we believe are often overlooked.


Phonemes: Lexical access and beyond

A defense of the central role of phonemes in phonology, contrary to the current mainstream.

Linguistics

Contributor(s): William Idsardi
Non-ARHU Contributor(s): Nina Kazanina, Jeffrey S. Bowers
Phonemes play a central role in traditional theories as units of speech perception and access codes to lexical representations. Phonemes have two essential properties: they are ‘segment-sized’ (the size of a consonant or vowel) and abstract (a single phoneme may have different acoustic realisations). Nevertheless, there is a long history of challenging the phoneme hypothesis, with some theorists arguing for differently sized phonological units (e.g. features or syllables) and others rejecting abstract codes in favour of representations that encode detailed acoustic properties of the stimulus. The phoneme hypothesis is the minority view today. We defend the phoneme hypothesis in two complementary ways. First, we show that rejection of phonemes is based on a flawed interpretation of empirical findings. For example, it is commonly argued that the failure to find acoustic invariances for phonemes rules out phonemes. However, the lack of invariance is only a problem on the assumption that speech perception is a bottom-up process. If learned sublexical codes are modified by top-down constraints (which they are), then this argument loses all force. Second, we provide strong positive evidence for phonemes on the basis of linguistic data. Almost all findings that are taken (incorrectly) as evidence against phonemes are based on psycholinguistic studies of single words. However, phonemes were first introduced in linguistics, and the best evidence for phonemes comes from linguistic analyses of complex word forms and sentences. In short, the rejection of phonemes is based on a false analysis and a too-narrow consideration of the relevant data.


The role of incremental parsing in syntactically conditioned word learning

The girl is tapping with the tig. If you don't know what "tig" means, you'll look to what the girl is using to tap. And so will even very young children.

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s): Aaron Steven White, Rebecca Baier
In a series of three experiments, we use children’s noun learning as a probe into their syntactic knowledge as well as their ability to deploy this knowledge, investigating how the predictions children make about upcoming syntactic structure change as their knowledge changes. In the first two experiments, we show that children display a developmental change in their ability to use a noun’s syntactic environment as a cue to its meaning. We argue that this pattern arises from children’s reliance on their knowledge of verbs’ subcategorization frame frequencies to guide parsing, coupled with an inability to revise incremental parsing decisions. We show that this analysis is consistent with the syntactic distributions in child-directed speech. In the third experiment, we show that the change arises from predictions based on verbs’ subcategorization frame frequencies.


Looking forwards and backwards: The real-time processing of Strong and Weak Crossover

Dave, Jeff and Colin show that we can make rapid use of Principle C and c-command information to constrain retrieval of antecedents in online interpretation of pronouns.

Linguistics

Non-ARHU Contributor(s): Dave Kush
We investigated the processing of pronouns in Strong and Weak Crossover constructions as a means of probing the extent to which the incremental parser can use syntactic information to guide antecedent retrieval. In Experiment 1 we show that the parser accesses a displaced wh-phrase as an antecedent for a pronoun when no grammatical constraints prohibit binding, but the parser ignores the same wh-phrase when it stands in a Strong Crossover relation to the pronoun. These results are consistent with two possibilities. First, the parser could apply Principle C at antecedent retrieval to exclude the wh-phrase on the basis of the c-command relation between its gap and the pronoun. Alternatively, retrieval might ignore any phrases that do not occupy an Argument position. Experiment 2 distinguished between these two possibilities by testing antecedent retrieval under Weak Crossover. In Weak Crossover binding of the pronoun is ruled out by the argument condition, but not Principle C. The results of Experiment 2 indicate that antecedent retrieval accesses matching wh-phrases in Weak Crossover configurations. On the basis of these findings we conclude that the parser can make rapid use of Principle C and c-command information to constrain retrieval. We discuss how our results support a view of antecedent retrieval that integrates inferences made over unseen syntactic structure into constraints on backward-looking processes like memory retrieval.


Split ergativity is not about ergativity

Split ergativity is an epiphenomenon, argue Jessica Coon and Omer Preminger.

Linguistics

Contributor(s): Omer Preminger
Non-ARHU Contributor(s): Jessica Coon
Publisher: Oxford University Press
This chapter argues that split ergativity is epiphenomenal, and that the factors which trigger its appearance are not limited to ergative systems in the first place. In both aspectual and person splits, the split is the result of a bifurcation of the clause into two distinct case/agreement domains, which renders the clause structurally intransitive. Since intransitive subjects do not appear with ergative marking, this straightforwardly accounts for the absence of ergative morphology. Crucially, such bifurcation is not specific to ergative languages; it is simply obfuscated in nominative-accusative environments because there, by definition, transitive and intransitive subjects pattern alike. The account also derives the universal directionality of splits, by linking the structure that is added to independent facts: the use of locative constructions in nonperfective aspects (Bybee et al. 1994, Laka 2006, Coon 2013), and the requirement that 1st/2nd person arguments be structurally licensed (Bejar & Rezac 2003, Baker 2008, 2011, Preminger 2011, 2014).

Antipassive

A handbook chapter on antipassive constructions: intransitive clauses in which an oblique dependent corresponds to the direct object of a transitive clause with the same verb.

Linguistics

Contributor(s): Maria Polinsky
Publisher: Oxford University Press