
Research

Research at our top-ranked department spans syntax, semantics, phonology, language acquisition, computational linguistics, psycholinguistics and neurolinguistics. 

Connections between our core competencies are strong, with theoretical, experimental and computational work typically pursued in tandem.

A network of collaboration at all levels sustains a research climate that is both vigorous and friendly. Here, new ideas develop in conversation, stimulated by the steady activity of our labs and research groups, frequent student meetings with faculty, regular talks by local and invited scholars, and collaborations with the broader University of Maryland language science community, the largest and most integrated language science research community in North America.


The ups and downs of child language: experimental studies on children's knowledge of entailment relations and polarity phenomena

A defense of the Continuity Assumption in language acquisition, based on experiments testing children's understanding of negation, disjunction, universal quantifiers, and negative polarity.

Linguistics

Non-ARHU Contributor(s): Andrea Gualmini
Downward Entailment is a semantic property common to many linguistic expressions across natural languages (Ladusaw, 1979). This dissertation takes downward entailment as a yardstick in assessing children’s semantic competence.

First, downward entailment is used as a case study for several alternative models of language acquisition, including those recently proposed by Tomasello (2000) and by Pullum and Scholz (2002). According to these researchers, children are initially conservative and tend to (re)produce linguistic expressions that they have experienced in the input. Even at later stages, when children form generalizations, those generalizations are directly tied to the input and rest on domain-general learning mechanisms. These models are contrasted with one based on the principles and parameters of Universal Grammar. In an experimental study using the Truth Value Judgment task (Crain and Thornton, 1998), the alternative models are put to the test by investigating a phenomenon that displays a mismatch between the data available to the child and the semantic competence the child acquires, namely the interaction between downward entailment and c-command. In particular, we report the results of an experiment investigating children’s interpretation of the disjunction operator or in sentences in which that operator is c-commanded by negation, such as Winnie the Pooh will not let Eeyore eat the cookie or the cake, and in sentences in which disjunction is only preceded by negation, as in The Karate Man will give the Pooh Bear he could not lift the honey or the donut.

Second, children’s knowledge of downward entailment is investigated in order to assess children’s knowledge of quantification. Beginning with Inhelder and Piaget (1964), children have been reported to have problems interpreting sentences containing the universal quantifier every. These findings have recently been interpreted as showing that children and adults assign different semantic representations to sentences with the universal quantifier every (Philip, 1995; Drozd and van Loosbroek, 1998). A common assumption of these linguistic accounts is that children’s non-adult interpretation of sentences containing every fails to distinguish between the restrictor and the nuclear scope of the quantifier. A Truth Value Judgment task was designed to evaluate this assumption. The findings, together with the results of previous research, show that children’s knowledge of quantification runs deeper than is anticipated either by recent linguistic accounts of children’s non-adult responses to universally quantified sentences or by input-driven models of language development.

Children’s adult-like knowledge of downward entailment and of the negative polarity item any stands in contrast with their non-adult interpretation of the positive polarity item some in negative sentences, e.g., The detective didn’t find some guys (see Musolino, 1998). To address this contrast, an experiment was conducted drawing upon the observation that negative statements are generally used to point out discrepancies between the facts and the listener’s expectations, and that this felicity condition was not satisfied in previous studies. The experimental findings show that children’s interpretation of indefinites in negative sentences is fully adult-like when the felicity conditions associated with negative statements are satisfied. The same picture emerges from the findings of a final experiment investigating children’s interpretation of sentences containing multiple scope-bearing elements, as in Every farmer didn’t clean some animal.

In sum, the experimental findings suggest that even in the domain of semantic competence, there is no reason to assume that child language differs from adult language in ways that would exceed the boundary conditions imposed by Universal Grammar, as maintained by the Continuity Assumption (Crain and Thornton, 1998; Pinker, 1984).

Japanese event nouns and their categories

A dissertation on event nouns in Japanese.

Linguistics

Non-ARHU Contributor(s): Masaaki Kamiya

Processing syntactic complexity: cross-linguistic differences and ERP evidence

An ERP study of processing complexity, cross-linguistically.

Linguistics

Non-ARHU Contributor(s): Ana Cristina de Souza Lima Gouvea

The perceptual representation of acoustic temporal structure

A dissertation on the representation of temporal structure in auditory perception.

Linguistics

Non-ARHU Contributor(s): Anthony B. Boemio

Signs are Single Segments: Phonological Representations and Temporal Sequencing in ASL and Other Sign Languages

What explains differences between the phonologies of spoken versus signed words?

Linguistics

Non-ARHU Contributor(s): Rachel Channon
A single-segment representation with dynamic features (Oneseg) explains differences between the phonologies of spoken words and signs better than current multiple-segment phonological representations of signs (Multiseg). A segment is defined as the largest phonological unit in which combinations of features are contrastive, but permutations and repetitions are not. Hayes (1993) distinguishes between static features (place, handshape), which do not reference motion, and dynamic features (direction, repetition), which do. Dynamic features are the only way that a single-segment representation can sequence motion.

Oneseg correctly predicts that the number of repetitions is not contrastive in signs, because repetition is the result of a dynamic feature [repeat]; Multiseg incorrectly predicts that the number of repetitions should be contrastive. About 50% of all spoken words repeat irregularly (unintended, hiphop); less than 1% repeat rhythmically (tutu, murmur). Non-compound signs never repeat irregularly; about 50% repeat rhythmically. Oneseg correctly predicts repetition in signs based on the probability of combinations including the feature [repeat]; Multiseg correctly predicts repetition in words based on combinations, permutations and repetition of segments. Oneseg also correctly predicts that signs never have more than two underlying places, whereas Multiseg predicts signs with any number of places. Some signs with two places allow the places to occur in either order; in others the order is fixed by constraints. Oneseg represents both without underlying sequence or redundancy, but Multiseg’s obligatory segmental sequence either overgenerates or is redundant.

Chapter 5 shows that inflected verbs and classifier predicates are not problems for Oneseg because they are predictably iconic. Predictable iconicity is the same across all sign languages, is produced by non-signers, and does not always obey the phonological rules of the language; lexically iconic elements have the reverse characteristics. Lexically iconic, but not predictably iconic, elements are part of the phonological representation. Chapter 6 proposes possible additional features and a feature hierarchy for Oneseg and shows that the resulting representations can be economically sparse by omitting redundant material. I also examine the historical assimilation processes in compounds and show that Oneseg explains them.

Syntax unchained

Chains are not syntactic primitives.

Linguistics

Non-ARHU Contributor(s): Hirohisa Kiguchi
This thesis is concerned with chains. Chains have been conventionally defined as follows: "(a1, ..., an) is a chain only if, for 1 ≤ i < n, ai and ai+1 are nondistinct, and ai c-commands ai+1" (cf. Chomsky 1986a, Rizzi 1990, Brody 1995 and many others). As Hornstein (1998) points out, although chains were originally not real grammatical objects but mere notation for tracking the history of movement, they were quickly promoted to a legitimate syntactic tool. This thesis questions chains: Are chains really necessary? What would we lose if we did not have chains as a primitive of the theory of Universal Grammar (=UG)? In particular, the thesis investigates what we would gain if UG were free of the notion of chain. The claim of the thesis is that there are good empirical reasons for eliminating chains.

Scope and Specificity in Child Language: A Cross-Linguistic Study on English and Chinese

Children's interpretation of singular indefinites in the context of universal quantifiers or negation, in Mandarin and in English.

Linguistics

Non-ARHU Contributor(s): Yi-ching Su

Thematic Relations between Nouns

A Small Clause analysis of several possessive constructions.

Linguistics

Non-ARHU Contributor(s): Juan Carlos Castillo
This dissertation explores some of the traditionally labeled possessive relations and proposes a basic syntactic structure that underlies them. The two nouns act as subject and predicate in a small clause, dominated by two functional projections where reference/agreement and contextual restrictions are checked.

Looking first at container-content relations, we propose that the container is always a predicate for the content. Because in our system selection is determined in the small clause and agreement is checked in an AgrP, selection and agreement need not be determined by the same noun. Selection also distinguishes between a container and a content reading. The evidence from extraction shows that container readings are more complex than content readings. We propose that the container reading adds a higher small clause whose predicate is the feature number. Number is thus a predicate, which type-lifts mass terms to count nouns, the way classifiers do in languages without number. Evidence from Spanish and Asturian shows a three-way distinction between absence of number (mass terms), singular and plural. We also propose that nouns are not divided into rigid classes, such as mass/count. Rather, any noun may be used as mass or count, depending on whether number is added to its syntactic derivation or not. An analysis of possessor raising to both nominative and dative in Spanish also supports the idea that nouns are not divided into rigid classes with respect to their ability to enter possessive relations. Relations such as part/whole and alienable and inalienable possession are all analyzed as small clauses in which the possessor is the subject and the possessed is the predicate.

Finally, we propose a universal principle: possessor raising can occur in languages that have a structural Case in a v-projection, in addition to the Case checked by the direct object. This predicts that causative verbs in languages with possessor raising should also allow Case checking of both the object and the subject of an embedded transitive clause. The prediction is borne out, giving rise to four types of languages, according to their Case system.

The Syntax of Gerunds and Infinitives: Subjects, Case and Control

A dissertation on the syntax of gerunds and infinitives.

Linguistics

Non-ARHU Contributor(s): Acrisio Magno Gomes Pires

Prolific Peripheries: A Radical View from the Left

Clauses comprise three syntactic domains, and movement within a domain is forbidden.

Linguistics

Non-ARHU Contributor(s): Kleanthes K. Grohmann