
Research

Research at our top-ranked department spans syntax, semantics, phonology, language acquisition, computational linguistics, psycholinguistics and neurolinguistics. 

Connections between our core competencies are strong, with theoretical, experimental and computational work typically pursued in tandem.

A network of collaboration at all levels sustains a research climate that is both vigorous and friendly. Here new ideas develop in conversation, stimulated by the steady activity of our labs and research groups, frequent student meetings with faculty, regular talks by local and invited scholars, and collaborations with the broader University of Maryland language science community, the largest and most integrated language science research community in North America.


The semantics and pragmatics of belief reports in preschoolers

Children under 4 respond in non-adult-like ways to uses of verbs like "think". Shevaun, Valentine and Jeff argue that this arises from pragmatic difficulty understanding the relevance of belief, rather than from conceptual or semantic immaturity.

Linguistics

Non-ARHU Contributor(s): Shevaun Lewis
Children under 4 years have been claimed to lack adult-like semantic representations of belief verbs like think. Based on two experiments involving a truth-value judgment task, we argue that 4-year-olds' apparently deviant interpretations arise from pragmatic difficulty understanding the relevance of belief, rather than from conceptual or semantic immaturity.

A single stage approach to learning phonological categories: Insights from Inuktitut

Much research presumes that we acquire phonetic categories before abstracting phonological categories. Ewan Dunbar argues that this two-step progression is unnecessary, with a Bayesian model for the acquisition of Inuktitut vowels.

Linguistics

Contributor(s): William Idsardi
Non-ARHU Contributor(s): Brian W. Dillon, Ewan Dunbar
We argue that there is an implicit view in psycholinguistics that phonological acquisition is a 'two-stage' process: phonetic categories are first acquired, and then subsequently mapped onto abstract phoneme categories. We present simulations that suggest two problems with this view: first, the learner might mistake the phoneme-level categories for phonetic-level categories and thus be unable to learn the relationships between phonetic-level categories; on the other hand, the learner might construct inaccurate phonetic-level representations that prevent it from finding regular relations among them. We suggest an alternative conception of the phonological acquisition problem that sidesteps this apparent inevitability, and present a Bayesian model that acquires phonemic categories in a single stage. Using acoustic data from Inuktitut, we show that this model reliably converges on a set of phoneme-level categories and phonetic-level relations among subcategories, without making use of a lexicon.
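
The paper's model is Bayesian and works directly from acoustic measurements. As a rough sketch of the single-stage idea, and emphatically not the authors' actual model, one can cluster toy vowel tokens with a Dirichlet-process-style mixture that infers the number of categories from the data, rather than first fixing phonetic categories and then mapping them to phonemes. The formant values and the use of scikit-learn below are illustrative assumptions.

```python
# Illustrative sketch only: cluster toy vowel tokens (F1/F2 formants) into
# categories in one pass. Not the paper's actual model.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)

# Hypothetical acoustic tokens for a three-vowel system like Inuktitut's
# /i a u/ (assumed formant means in Hz, for illustration).
means = {"i": (300, 2300), "a": (700, 1300), "u": (320, 900)}
tokens = np.vstack([
    rng.normal(loc=m, scale=(40, 120), size=(200, 2)) for m in means.values()
])

# A Dirichlet-process-style mixture lets the learner infer how many
# categories the data support, instead of fixing the inventory in advance.
model = BayesianGaussianMixture(
    n_components=10,                       # upper bound on categories
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(tokens)

# Components with non-negligible weight are the inferred vowel categories.
inferred = [k for k, w in enumerate(model.weights_) if w > 0.05]
print(f"Inferred {len(inferred)} vowel categories")
```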

On Headless XP Movement/Ellipsis

Kenshi Funakoshi proposes an economy condition on Copy and two types of head movement, accounting for generalizations about headless XPs.

Linguistics

Non-ARHU Contributor(s): Kenshi Funakoshi
I make two proposals in this article: (a) an economy condition on the operation Copy, which states that Copy should apply to as small an element as possible, and (b) the “two types of head movement” hypothesis, which states that Universal Grammar allows head movement via substitution as well as head movement via adjunction. I argue that with these proposals, we can not only explain two generalizations about what I call headless XPs, but also attribute crosslinguistic variation in the applicability of these generalizations to parameters that are responsible for the availability of multiple specifiers.

Read More about On Headless XP Movement/Ellipsis

Null Complement Anaphors as definite descriptions

"Ron won" is less like "Ron won it" than it is like "Ron won the contest."

Linguistics

Contributor(s): Alexander Williams
This paper develops the observation that, for many predicates, Null Complement Anaphora (NCA) is like anaphora with a descriptively empty definite description (Condoravdi & Gawron 1996, Gauker 2012). I consider how to distinguish this sort of NCA from pronouns theoretically, and then observe an unnoticed exception to the pattern. For verbs like notice, NCA is neither like a definite description nor like a pronoun, raising a new puzzle of how to represent it.

On restructuring infinitives in Japanese: Adjunction, clausal architecture, and phases

Postdoc Masahiko Takahashi investigates three types of restructuring infinitives in Japanese.

Linguistics

Non-ARHU Contributor(s): Masahiko Takahashi
This paper investigates the syntax of Japanese restructuring verbs and makes two major claims: (i) there are (at least) three types of restructuring infinitives in Japanese, which is consistent with Wurmbrand's (2001) approach to restructuring infinitives and (ii) there is a general ban on adjunction to complements of lexical restructuring verbs, which is best explained by an interaction of spell-out domains and Case-valuation. It is also shown that this ban regulates adverb insertion, adjective insertion, and quantifier raising.

Read More about On restructuring infinitives in Japanese: Adjunction, clausal architecture, and phases

Conservativity and Learnability of Determiners

Tim Hunter and Jeff Lidz find evidence that 4- and 5-year-olds expect determiner meanings to be conservative.

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s): Tim Hunter
A striking cross-linguistic generalization about the semantics of determiners is that they never express non-conservative relations. To account for this one might hypothesize that the mechanisms underlying human language acquisition are unsuited to non-conservative determiner meanings. We present experimental evidence that 4- and 5-year-olds fail to learn a novel non-conservative determiner but succeed in learning a comparable conservative determiner, consistent with the learnability hypothesis.
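
For readers outside semantics, conservativity has a simple formal statement: a determiner meaning D, viewed as a relation between a restrictor set A and a scope set B, is conservative when only A and A ∩ B matter to its truth value. A standard formulation from the generalized-quantifier literature (our paraphrase, not taken from the paper) is:

```latex
% Conservativity (standard generalized-quantifier definition):
% D(A)(B) holds iff D(A)(A and B) holds.
\[
\mathrm{CONS}(D) \;\iff\; \forall A\,\forall B\,\bigl[\, D(A)(B) \leftrightarrow D(A)(A \cap B) \,\bigr]
\]
% "Every" is conservative: "every dog barks" iff "every dog is a dog that barks".
% A hypothetical non-conservative determiner "equi", with
%   equi(A)(B) iff |A| = |B|,
% fails the test; no attested natural-language determiner expresses such a relation.
\]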

Read More about Conservativity and Learnability of Determiners

Embedding epistemic modals in English: A corpus-based study

A corpus study on the distribution of epistemic modals, targeted at the question of whether such modals do or do not contribute to the content of their sentences.

Linguistics

Contributor(s): Valentine Hacquard
Non-ARHU Contributor(s): Alexis Wellwood
The question of whether epistemic modals contribute to the truth conditions of the sentences they appear in is a matter of active debate in the literature. Fueling this debate is the lack of consensus about the extent to which epistemics can appear in the scope of other operators. This corpus study investigates the distribution of epistemics in naturalistic data. Our results indicate that they do embed, supporting the view that they contribute semantic content. However, their distribution is limited, compared to that of other modals. This limited distribution seems to call for a nuanced account: while epistemics are semantically contentful, they may require special licensing conditions.
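
As a toy illustration of what such a corpus search involves (the actual study used annotated naturalistic corpora, not raw string matching), one might count epistemic modals under negation or in conditional antecedents. The word lists, patterns, and sample sentences below are simplifying assumptions.

```python
# Toy sketch of counting embedded epistemic modals; real corpus work would
# use parsed, annotated data rather than regular expressions.
import re
from collections import Counter

EPISTEMICS = r"(?:might|must|may)"

# Two embedding environments of interest: negation and conditional antecedents.
PATTERNS = {
    "under negation": re.compile(rf"(?:\bnot|n't)\s+{EPISTEMICS}\b", re.I),
    "in antecedent": re.compile(rf"\bif\b[^,.]*\b{EPISTEMICS}\b", re.I),
}

def embedded_counts(sentences):
    """Count sentences with an epistemic modal in each embedding environment."""
    counts = Counter()
    for s in sentences:
        for env, pat in PATTERNS.items():
            if pat.search(s):
                counts[env] += 1
    return counts

corpus = [
    "If he might come, we should wait.",
    "She must be home by now.",
    "He could not have known.",  # negation, but no epistemic under it
]
# Few hits under negation would mirror the paper's point: epistemics embed,
# but their distribution is limited compared to other modals.
print(embedded_counts(corpus))
```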

Read More about Embedding epistemic modals in English: A corpus-based study

Young Children's Understanding of "more" and Discrimination of Number and Surface Area

How do three-year-olds understand "more"? This study suggests they use the Approximate Number System to verify claims with "more" and a count noun, and an Approximate Area System with mass nouns.

Linguistics

Non-ARHU Contributor(s): Darko Odic, Tim Hunter, Justin Halberda
The psychology supporting the use of quantifier words (e.g., "some," "most," "more") is of interest to both scientists studying quantity representation (e.g., number, area) and to scientists and linguists studying the syntax and semantics of these terms. Understanding quantifiers requires both a mastery of the linguistic representations and a connection with cognitive representations of quantity. Some words (e.g., "many") refer to only a single dimension, whereas others, like the comparative "more," refer to comparison by numeric ("more dots") or nonnumeric dimensions ("more goo"). In the present work, we ask 2 questions. First, when do children begin to understand the word "more" as used to compare nonnumeric substances and collections of discrete objects? Second, what is the underlying psychophysical character of the cognitive representations children utilize to verify such sentences? We find that children can understand and verify sentences including "more goo" and "more dots" at around 3.3 years—younger than some previous studies have suggested—and that children employ the Approximate Number System and an Approximate Area System in verification. These systems share a common underlying format (i.e., Gaussian representations with scalar variability). The similarity in the age of onset we find for understanding "more" in number and area contexts, along with the similar psychophysical character we demonstrate for these underlying cognitive representations, suggests that children may learn "more" as a domain-neutral comparative term.
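
The phrase "Gaussian representations with scalar variability" has a standard quantitative reading: the noise in the mental representation of a quantity n scales with n, so discriminability of two quantities depends on their ratio via a Weber fraction. A minimal sketch of that standard psychophysical model follows; the Weber fraction value is an assumption for illustration, not a figure from the paper.

```python
# Standard ANS psychophysics sketch: each quantity n is represented as a
# Gaussian with mean n and standard deviation w*n (scalar variability).
# P(judging n1 > n2 correctly) follows from the difference of two Gaussians.
import math

def p_correct_more(n1: float, n2: float, w: float = 0.2) -> float:
    """Probability of correctly judging n1 > n2 (requires n1 > n2).

    w is the Weber fraction; 0.2 is an illustrative value, roughly in the
    range reported for preschoolers in the ANS literature.
    """
    assert n1 > n2
    sigma = w * math.sqrt(n1**2 + n2**2)   # sd of the difference distribution
    z = (n1 - n2) / sigma
    # Phi(z), the standard normal CDF, via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Ratio, not absolute difference, drives accuracy:
# 10 vs 5 is exactly as easy as 100 vs 50.
print(p_correct_more(10, 5), p_correct_more(100, 50))
```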

Read More about Young Children's Understanding of "more" and Discrimination of Number and Surface Area

Derivational order in syntax: Evidence and architectural consequences

A précis of the evidence for left‐to‐right derivations in syntax, and how this relates to the nature of real‐time mechanisms for building linguistic structure.

Linguistics

Contributor(s): Colin Phillips
Non-ARHU Contributor(s): Shevaun Lewis
Publisher: Elsevier
Standard generative grammars describe language in terms that appear distant from considerations of everyday, real-time language processes. To some this is a critical flaw, while to others this is a clear virtue. One type of generative grammar defines a well-formed sentence as a static, structured representation that simultaneously satisfies all relevant constraints of the language, with no regard to how the representation is assembled (e.g., Sag, Wasow, & Bender, 2003). Another type of generative grammar defines a well-formed sentence as a derivation, or sequence of representations, that describes how the sentence is gradually assembled, often including various transformations that move words or phrases from one position to another in a structure. In the most popular current version of the derivational approach, derivations proceed 'upwards', starting from the most deeply embedded terminal elements in the sentence, which are often towards the right of a sentence (e.g., Chomsky, 1995; Carnie, 2006). Such derivations tend to proceed in a right-to-left order, which is probably the opposite of the order in which sentences are assembled in everyday tasks such as speaking and understanding. Since these theories make no claim to being accounts of such everyday processes, the discrepancy causes little concern among the theories' creators. Generative grammars are typically framed as theories of speakers' task-independent knowledge of their language, and these are understood to be distinct from theories of how specific communicative tasks might put that knowledge to use.

Set against this background are a number of recent proposals that various linguistic phenomena can be better understood in terms of derivations that incrementally assemble structures in a (roughly) left-to-right order. One can evaluate these proposals based simply on how well they capture the acceptability judgments that they aim to explain, i.e., standard conditions of 'descriptive adequacy'. But it is hard to avoid the question of whether it is mere coincidence that left-to-right derivations track the order in which sentences are spoken and understood. It is also natural to ask how left-to-right derivations impact the psychological commitments of grammatical theories. Are they procedural descriptions of how speakers put together sentences in real time (either in comprehension or in production)? Do they amount to a retreat from linguists' traditional agnosticism about 'performance mechanisms'? These are questions about what a grammatical theory is a theory of, and they are the proverbial elephant in the room in discussions of left-to-right derivations in syntax, although the issues have not been explored in much detail. Here we summarize the current state of some of the evidence for left-to-right derivations in syntax, and how this relates to a number of findings by our group and others on the nature of real-time structure building mechanisms. Some of these questions have been aired in previous work (e.g., Phillips 1996, 2004), but we have come to believe that the slogan from that earlier work ("the parser is the grammar") is misleading in a number of respects, and we offer an updated position here.
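
To make the contrast between derivation orders concrete, here is a toy sketch (our own illustration, not drawn from the paper) for the string "the dog barked": a bottom-up derivation combines the most deeply embedded elements first, while a left-to-right derivation extends the structure word by word in the order of speech.

```python
# Toy illustration of derivation order; node labels and structures are
# deliberately simplified and not a claim about any particular analysis.

def bottom_up():
    """Build 'upwards' from the most deeply embedded elements."""
    steps = []
    vp = ("VP", "barked")                # rightmost material built first
    steps.append(vp)
    np = ("NP", "the", "dog")            # built independently of VP
    steps.append(np)
    steps.append(("S", np, vp))          # merged last, at the root
    return steps

def left_to_right():
    """Extend the structure word by word, in the order of speech."""
    steps = []
    steps.append(("S", ("NP", "the")))                           # 'the' opens an NP
    steps.append(("S", ("NP", "the", "dog")))                    # 'dog' completes it
    steps.append(("S", ("NP", "the", "dog"), ("VP", "barked")))  # 'barked' adds VP
    return steps

for label, deriv in [("bottom-up:", bottom_up()), ("left-to-right:", left_to_right())]:
    print(label)
    for step in deriv:
        print("  ", step)
```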


Input and Intake in Language Acquisition

Acquiring a grammar involves representing the environment and making statistical inferences within a space of linguistic hypotheses. Annie illustrates with experimental, computational and corpus studies of children acquiring Tsez, Norwegian and English.

Linguistics

Non-ARHU Contributor(s): Ann C. Gagliardi
This dissertation presents an approach for a productive way forward in the study of language acquisition, sealing the rift between claims of an innate linguistic hypothesis space and powerful domain-general statistical inference. This approach breaks language acquisition into its component parts, distinguishing the input in the environment from the intake encoded by the learner, and looking at how a statistical inference mechanism, coupled with a well-defined linguistic hypothesis space, could lead a learner to infer the grammar of their native language. This work draws on experiments, corpus analyses and computational models of Tsez, Norwegian and English children acquiring word meanings, word classes and syntax to highlight the need for an appropriate encoding of the linguistic input in order to solve any given problem in language acquisition.
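
As a schematic of the inference setup described here (a toy sketch, not Gagliardi's model): the learner's intake is an encoding of the input, and a Bayesian update over a predefined hypothesis space selects the grammar that best explains that intake. The hypotheses, cue names, and likelihoods below are invented for illustration.

```python
# Toy input/intake Bayesian sketch. Hypotheses, likelihoods, and the
# intake encoding are invented for illustration only.
from math import prod

# Hypothesis space: two toy "grammars" for noun classification, each
# assigning a probability to observing a given cue on a relevant noun.
HYPOTHESES = {
    "class-by-semantics": {"animate_cue": 0.9, "suffix_cue": 0.3},
    "class-by-morphology": {"animate_cue": 0.3, "suffix_cue": 0.9},
}
PRIOR = {"class-by-semantics": 0.5, "class-by-morphology": 0.5}

def encode_intake(input_tokens, salience):
    """Intake = the subset of input cues the learner actually encodes.
    salience maps cue -> weight; here a hard threshold for simplicity."""
    return [t for t in input_tokens if salience.get(t, 0) > 0.5]

def posterior(intake):
    """Bayes: P(h | intake) is proportional to P(h) * prod P(cue | h)."""
    scores = {
        h: PRIOR[h] * prod(likes[cue] for cue in intake)
        for h, likes in HYPOTHESES.items()
    }
    total = sum(scores.values())
    return {h: s / total for h, s in scores.items()}

# Same input, different intake: if suffix cues are not yet encoded, the
# learner's inference shifts toward the semantic hypothesis.
input_tokens = ["animate_cue", "suffix_cue", "suffix_cue"]
print(posterior(encode_intake(input_tokens, {"animate_cue": 0.9, "suffix_cue": 0.2})))
print(posterior(encode_intake(input_tokens, {"animate_cue": 0.9, "suffix_cue": 0.9})))
```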