
Research

Research at our top-ranked department spans syntax, semantics, phonology, language acquisition, computational linguistics, psycholinguistics and neurolinguistics. 

Connections between our core competencies are strong, with theoretical, experimental and computational work typically pursued in tandem.

A network of collaboration at all levels sustains a research climate that is both vigorous and friendly. Here new ideas develop in conversation, stimulated by the steady activity of our labs and research groups, frequent student meetings with faculty, regular talks by local and invited scholars and collaborations with the broader University of Maryland language science community, the largest and most integrated language science research community in North America.


There is a simplicity bias when generalising from ambiguous data

How do phonological learners choose among generalizations of differing complexity?

Linguistics

Contributor(s): Adam Liter
Non-ARHU Contributor(s):

Karthik Durvasula

Dates:
Publisher: Phonology 37(2)

How exactly do learners generalize in the face of ambiguous data? While there has been a substantial amount of research studying the biases that learners employ, there has been very little work on what sorts of biases are employed in the face of data that is ambiguous between phonological generalizations with different degrees of complexity. In this article, we present the results from three artificial language learning experiments that suggest that, at least for phonotactic sequence patterns, learners are able to keep track of multiple generalizations related to the same segmental co-occurrences; however, the generalizations they learn are only the simplest ones consistent with the data.

Read More about There is a simplicity bias when generalising from ambiguous data

Null Objects in Korean: Experimental Evidence for the Argument Ellipsis Analysis

Experimental evidence supports an analysis of Null Object constructions in Korean as instances of object ellipsis.

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s):

Chung-hye Han, Kyeong-min Kim, Keir Moulton

Dates:

Null object (NO) constructions in Korean and Japanese have received different accounts: as (a) argument ellipsis (Oku 1998, S. Kim 1999, Saito 2007, Sakamoto 2015), (b) VP-ellipsis after verb raising (Otani and Whitman 1991, Funakoshi 2016), or (c) instances of base-generated pro (Park 1997, Hoji 1998, 2003). We report results from two experiments supporting the argument ellipsis analysis for Korean. Experiment 1 builds on K.-M. Kim and Han’s (2016) finding of interspeaker variation in whether the pronoun ku can be bound by a quantifier. Results showed that a speaker’s acceptance of quantifier-bound ku positively correlates with acceptance of sloppy readings in NO sentences. We argue that an ellipsis account, in which the NO site contains internal structure hosting the pronoun, accounts for this correlation. Experiment 2, testing the recovery of adverbials in NO sentences, showed that only the object (not the adverb) can be recovered in the NO site, excluding the possibility of VP-ellipsis. Taken together, our findings suggest that NOs result from argument ellipsis in Korean.

Read More about Null Objects in Korean: Experimental Evidence for the Argument Ellipsis Analysis

The structure of Polish numerically-quantified expressions

What is the syntax of "five witches" in Polish, with genitive on "witches", accusative on "five", and third-singular-neuter agreement on a verb? Paulina Lyskawa gives a new answer that manages to preserve ordinary theories of case and agreement.

Linguistics

Contributor(s): Paulina Lyskawa
Dates:
Cross-linguistically, numerically-quantified expressions vary in terms of their internal syntactic structure (e.g. the category of the numeral, its position in the nominal projection) as well as interaction with the external syntax (e.g. occurring in the subject positions, determining agreement and concord). Here, I investigate Polish numerically-quantified expressions of the 5+ type, such as pięć czarownic ‘five witches’, focusing on three morphosyntactic properties: the genitive case on the quantified noun, the accusative case on the numeral, and the occurrence of 3sg neuter verbal agreement. I argue that all of these properties can be captured within existing theories of case and agreement, in terms of a null head that takes the quantified noun phrase as its complement, and a numeral phrase as its specifier. Genitive on the noun is structural, accusative on the numeral is licensed by a null preposition, and default agreement is a result of the case-discriminating nature of verbal agreement. This proposal has implications for the broader theory of agreement and case assignment in Slavic languages and beyond.

Read More about The structure of Polish numerically-quantified expressions

Headedness and the Lexicon: The Case of Verb-to-Noun Ratios

Is there a correlation between the relative size of a lexical class, such as verbs in relation to nouns, and whether members of that class precede or follow a dependent in phrases they head? This paper finds that there is.

Linguistics

Contributor(s): Maria Polinsky
Non-ARHU Contributor(s): Lilla Magyar
Dates:
This paper takes a well-known observation as its starting point: languages vary with respect to headedness, with the standard head-initial and head-final types well attested. Is there a connection between headedness and the size of a lexical class? Although this question seems quite straightforward, there are formidable methodological and theoretical challenges in addressing it. Building on initial results by several researchers, we refine our methodology and consider the proportion of nouns to simplex verbs (as opposed to light verb constructions) in a varied sample of 33 languages to evaluate the connection between headedness and the size of a lexical class. We demonstrate a robust correlation between this proportion and headedness. While the proportion of nouns in a lexicon is relatively stable, head-final/object-verb (OV)-type languages (e.g., Japanese or Hungarian) have a relatively small number of simplex verbs, whereas head-initial/verb-initial languages (e.g., Irish or Zapotec) have a considerably larger percentage of such verbs. The difference between the head-final and head-initial type is statistically significant. We then consider a subset of languages characterized as subject-verb-object (SVO) and show that this group is not uniform. Those SVO languages that have strong head-initial characteristics (as shown by the order of constituents in a set of phrases and word order alternations) are characterized by a relatively large proportion of lexical verbs. SVO languages that have strong head-final traits (e.g., Mandarin Chinese) pattern with head-final languages, and a small subset of SVO languages are genuinely in the middle (e.g., English, Russian). We offer a tentative explanation for this headedness asymmetry, couched in terms of informativity and parsing principles, and discuss additional evidence in support of our account.
All told, the smaller number of simplex verbs in head-final/OV-type languages is an adaptation to their particular pattern of headedness. The object-verb/verb-object (OV/VO) difference with respect to noun/verb ratios also reveals itself in SVO languages; some languages, Chinese and Latin among them, show a strongly OV ratio, whereas others, such as Romance or Bantu, are VO-like in their noun/verb ratios. The proportion of nouns to verbs thus emerges as a new linguistic characteristic that is correlated with headedness.

Enough time to get results? An ERP investigation of prediction with complex events

How quickly can verb-argument relations be computed to impact predictions of a subsequent argument? This paper examines the question by comparing two kinds of compound verbs in Mandarin, and neural responses to the following direct object.

Linguistics

Contributor(s): Ellen Lau
Non-ARHU Contributor(s):

Chia-Hsuan Liao (*20)

Dates:

How quickly can verb-argument relations be computed to impact predictions of a subsequent argument? We take advantage of the substantial differences in verb-argument structure provided by Mandarin, whose compound verbs encode complex event relations, such as resultatives (kid bit-broke lip: 'the kid bit his lip such that it broke') and coordinates (store owner hit-scolded employee 'the store owner hit and scolded an employee'). We tested sentences in which the object noun could be predicted on the basis of the preceding compound verb, and used N400 responses to the noun to index successful prediction. By varying the delay between verb and noun, we show that prediction is delayed in the resultative context (broken-BY-biting) relative to the coordinate one (hitting-AND-scolding). These results present a first step towards temporally dissociating the fine-grained subcomputations required to parse and interpret verb-argument relations.

Read More about Enough time to get results? An ERP investigation of prediction with complex events

Syntactic category constrains lexical access in speaking

When we choose which word to speak, do nouns and verbs compete when they express similar concepts? New evidence says no: syntactic category plays a key role in limiting lexical access.

Linguistics

Contributor(s): Colin Phillips
Non-ARHU Contributor(s):

Shota Momma (*16), Julia Buffinton, Bob Slevc

Dates:

We report two experiments that suggest that syntactic category plays a key role in limiting competition in lexical access in speaking. We introduce a novel sentence-picture interference (SPI) paradigm, and we show that nouns (e.g., running as a noun) do not compete with verbs (e.g., walking as a verb) and verbs do not compete with nouns in sentence production, regardless of their conceptual similarity. Based on this finding, we argue that lexical competition in production is limited by syntactic category. We also suggest that even complex words containing category-changing derivational morphology can be stored and accessed together with their final syntactic category information. We discuss the potential underlying mechanism and how it may enable us to speak relatively fluently.

Read More about Syntactic category constrains lexical access in speaking

Modeling the learning of the Person Case Constraint

Adam Liter and Naomi Feldman show that the Person Case Constraint can be learned on the basis of significantly less data, if the constraint is represented in terms of feature bundles.

Linguistics

Contributor(s): Adam Liter, Naomi Feldman
Dates:

Many domains of linguistic research posit feature bundles as an explanation for various phenomena. Such hypotheses are often evaluated on their simplicity (or parsimony). We take a complementary approach. Specifically, we evaluate different hypotheses about the representation of person features in syntax on the basis of their implications for learning the Person Case Constraint (PCC). The PCC refers to a phenomenon where certain combinations of clitics (pronominal bound morphemes) are disallowed with ditransitive verbs. We compare a simple theory of the PCC, where person features are represented as atomic units, to a feature-based theory of the PCC, where person features are represented as feature bundles. We use Bayesian modeling to compare these theories, using data based on realistic proportions of clitic combinations from child-directed speech. We find that both theories can learn the target grammar given enough data, but that the feature-based theory requires significantly less data, suggesting that developmental trajectories could provide insight into syntactic representations in this domain.
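The learnability argument above can be illustrated with a toy simulation. The sketch below is not the paper's actual model; it is a minimal Bayesian comparison assuming a uniform prior and a size-principle likelihood, with the "strong" PCC (only 3rd-person direct-object clitics allowed) as the target. The atomic theory treats each indirect-object/direct-object person combination as an unanalyzed unit, so any subset of combinations is a candidate grammar; the feature-based theory restricts candidate grammars to constraints stated over a [participant] feature. All names and numbers here are illustrative.

```python
import itertools

# Person values for indirect-object (IO) and direct-object (DO) clitics.
PERSONS = (1, 2, 3)
COMBOS = list(itertools.product(PERSONS, PERSONS))  # 9 IO-DO combinations

# Target grammar: the "strong" PCC, allowing only 3rd-person direct objects.
TARGET = frozenset((io, do) for (io, do) in COMBOS if do == 3)

# Atomic theory: any subset of combinations is a candidate grammar (2^9 = 512).
atomic_space = [frozenset(s)
                for r in range(len(COMBOS) + 1)
                for s in itertools.combinations(COMBOS, r)]

# Feature-based theory: a grammar is a [participant] requirement (or none)
# on each clitic slot, yielding only 3 x 3 = 9 candidate grammars.
def participant(p):
    return p in (1, 2)  # 1st/2nd person are [+participant]

feature_space = [
    frozenset((io, do) for (io, do) in COMBOS
              if (c_io is None or participant(io) == c_io)
              and (c_do is None or participant(do) == c_do))
    for c_io in (None, True, False)
    for c_do in (None, True, False)
]

def posterior_of_target(space, data):
    """Posterior on TARGET: uniform prior, size-principle likelihood."""
    def lik(g):
        if all(d in g for d in data):
            return (1.0 / len(g)) ** len(data)
        return 0.0
    return lik(TARGET) / sum(lik(g) for g in space)

# Three grammatical clitic combinations a learner might plausibly hear.
data = [(1, 3), (2, 3), (3, 3)]
p_atomic = posterior_of_target(atomic_space, data)
p_feature = posterior_of_target(feature_space, data)
```

With the same three observations, the feature-based learner concentrates most of its posterior on the target grammar while the atomic learner remains spread across the many subset grammars consistent with the data, mirroring the paper's finding that the feature-based theory needs significantly less data.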

Hope for syntactic bootstrapping

Some mental state verbs take a finite clause as their object, while others take an infinitive, and the two groups differ reliably in meaning. Remarkably, children can use this correlation to narrow down the meaning of an unfamiliar verb.

Linguistics

Contributor(s): Valentine Hacquard, Jeffrey Lidz
Non-ARHU Contributor(s):

Kaitlyn Harrigan (*15)

Dates:
Publisher: Language

We explore children’s use of syntactic distribution in the acquisition of attitude verbs, such as think, want, and hope. Because attitude verbs refer to concepts that are opaque to observation but have syntactic distributions predictive of semantic properties, we hypothesize that syntax may serve as an important cue to learning their meanings. Using a novel methodology, we replicate previous literature showing an asymmetry between acquisition of think and want, and we additionally demonstrate that interpretation of a less frequent attitude verb, hope, patterns with type of syntactic complement. This supports the view that children treat syntactic frame as informative about an attitude verb’s meaning.

Read More about Hope for syntactic bootstrapping

Morphology in Austronesian languages

Postdoc Ted Levin and Professor Maria Polinsky provide an overview of morphology in Austronesian languages.

Linguistics

Contributor(s): Maria Polinsky
Non-ARHU Contributor(s): Theodore Levin
Dates:
This is an overview of the major morphological properties of Austronesian languages. We present and analyze data that may bear on the commonly discussed lexical-category neutrality of Austronesian and suggest that Austronesian languages do differentiate between core lexical categories. We address the difference between roots and stems showing that Austronesian roots are more abstract than roots traditionally discussed in morphology. Austronesian derivation and inflexion rely on suffixation and prefixation; some infixation is also attested. Austronesian languages make extensive use of reduplication. In the verbal system, main morphological exponents mark voice distinctions as well as causatives and applicatives. In the nominal domain, the main morphological exponents include case markers, classifiers, and possession markers. Overall, verbal morphology is richer in Austronesian languages than nominal morphology. We also present a short overview of empirically and theoretically challenging issues in Austronesian morphology: the status of infixes and circumfixes, the difference between affixes and clitics, and the morphosyntactic characterization of voice morphology.

Read More about Morphology in Austronesian languages

Epistemic "might": A non-epistemic analysis

What are called epistemic uses of "might" in fact express a relation, not to information or knowledge, as is routinely assumed, but to relevant circumstances.

Linguistics

Non-ARHU Contributor(s):

Quinn Harr *19

Dates:

A speaker of (1) implies that she is uncertain whether (2), making this use of might “epistemic.” On the received view, the implication is semantic, but in this dissertation I argue that this implication is no more semantic than is the implication that a speaker of (2) believes John to be contagious.

(1) John might be contagious.

(2) John is contagious.

This follows from a new observation: unlike claims with explicitly epistemic locutions, those made with “epistemic” uses of might can be explained only with reference to non-epistemic facts. I conclude that they express a relation, not to relevant information, but instead to relevant circumstances, and that uncertainty is implied only because of how informed speakers contribute to conversations. This conclusion dissolves old puzzles about disagreements and reported beliefs involving propositions expressed with might, puzzles that have been hard for the received view to accommodate. The cost of these advantages is to explain why the circumstantial modality expressed by might is not inherently oriented towards the future, as has been claimed for other circumstantial modalities. But this claim turns out to be false. The correct characterization of the temporal differences reveals that the modality expressed by might relates to propositions whereas other modalities relate to events. Neither sort is epistemic.

Read More about Epistemic "might": A non-epistemic analysis