
Research

Research at our top-ranked department spans syntax, semantics, phonology, language acquisition, computational linguistics, psycholinguistics and neurolinguistics. 

Connections between our core competencies are strong, with theoretical, experimental and computational work typically pursued in tandem.

A network of collaboration at all levels sustains a research climate that is both vigorous and friendly. Here new ideas develop in conversation, stimulated by the steady activity of our labs and research groups, frequent student meetings with faculty, regular talks by local and invited scholars, and collaborations with the broader University of Maryland language science community, the largest and most integrated language science research community in North America.


On restructuring infinitives in Japanese: Adjunction, clausal architecture, and phases

Postdoc Masahiko Takahashi investigates the variety of restructuring verbs in Japanese.

Linguistics

Non-ARHU Contributor(s): Masahiko Takahashi
This paper investigates the syntax of Japanese restructuring verbs and makes two major claims: (i) there are (at least) three types of restructuring infinitives in Japanese, which is consistent with Wurmbrand's (2001) approach to restructuring infinitives and (ii) there is a general ban on adjunction to complements of lexical restructuring verbs, which is best explained by an interaction of spell-out domains and Case-valuation. It is also shown that this ban regulates adverb insertion, adjective insertion, and quantifier raising.


Conservativity and Learnability of Determiners

Tim Hunter and Jeff Lidz find evidence that 4- to 5-year-olds expect determiner meanings to be conservative.

Linguistics

Contributor(s): Jeffrey Lidz
Non-ARHU Contributor(s): Tim Hunter
A striking cross-linguistic generalization about the semantics of determiners is that they never express non-conservative relations. To account for this one might hypothesize that the mechanisms underlying human language acquisition are unsuited to non-conservative determiner meanings. We present experimental evidence that 4- and 5-year-olds fail to learn a novel non-conservative determiner but succeed in learning a comparable conservative determiner, consistent with the learnability hypothesis.
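
Conservativity has a simple set-theoretic statement: a determiner denotation D, relating a restrictor set A to a scope set B, is conservative just in case D(A, B) holds if and only if D(A, A ∩ B) does ("every dog barks" is equivalent to "every dog is a dog that barks"). The sketch below illustrates that property only, not the paper's stimuli or novel determiners; "every" and a determiner-like "only" are the usual textbook contrast.

    from itertools import combinations

    def every(A, B):
        # 'every A is B': A is a subset of B -- a conservative relation
        return A <= B

    def only(A, B):
        # determiner-like 'only': every B is A -- the standard example of a
        # non-conservative relation when treated as a determiner
        return B <= A

    def is_conservative(det, universe):
        # brute-force check of D(A, B) == D(A, A & B) over all subsets of a small universe
        subsets = [set(c) for r in range(len(universe) + 1)
                   for c in combinations(universe, r)]
        return all(det(A, B) == det(A, A & B) for A in subsets for B in subsets)

    universe = {1, 2, 3}
    print(is_conservative(every, universe))  # True
    print(is_conservative(only, universe))   # False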


Embedding epistemic modals in English: A corpus-based study

A corpus study on the distribution of epistemic modals, targeted at the question of whether such modals do or do not contribute to the content of their sentences.

Linguistics

Contributor(s): Valentine Hacquard
Non-ARHU Contributor(s): Alexis Wellwood
The question of whether epistemic modals contribute to the truth conditions of the sentences they appear in is a matter of active debate in the literature. Fueling this debate is the lack of consensus about the extent to which epistemics can appear in the scope of other operators. This corpus study investigates the distribution of epistemics in naturalistic data. Our results indicate that they do embed, supporting the view that they contribute semantic content. However, their distribution is limited, compared to that of other modals. This limited distribution seems to call for a nuanced account: while epistemics are semantically contentful, they may require special licensing conditions.
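
As a rough illustration of what such a corpus tally looks like (the data format and labels below are hypothetical, not the paper's corpus or annotation scheme), the question is how often epistemic uses of modals turn up unembedded versus in the scope of another operator:

    from collections import Counter

    # each record: (modal, flavor, embedding environment or None if unembedded)
    annotated_hits = [
        ("might", "epistemic", None),
        ("must", "epistemic", "negation"),
        ("might", "epistemic", "conditional-antecedent"),
        ("must", "deontic", "negation"),
        ("might", "epistemic", None),
    ]

    epistemic = [hit for hit in annotated_hits if hit[1] == "epistemic"]
    environments = Counter(env or "unembedded" for _, _, env in epistemic)

    total = len(epistemic)
    for env, n in environments.most_common():
        print(f"{env}: {n}/{total} ({100 * n / total:.0f}%)")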


Young Children's Understanding of "more" and Discrimination of Number and Surface Area

How do three-year-olds understand "more"? This study suggests they use the Approximate Number System in verifying claims with "more" and a count noun, and an Approximate Area System with mass nouns.

Linguistics

Non-ARHU Contributor(s): Darko Odic, Tim Hunter, Justin Halberda
The psychology supporting the use of quantifier words (e.g., “some,” “most,” “more”) is of interest to both scientists studying quantity representation (e.g., number, area) and to scientists and linguists studying the syntax and semantics of these terms. Understanding quantifiers requires both a mastery of the linguistic representations and a connection with cognitive representations of quantity. Some words (e.g., “many”) refer to only a single dimension, whereas others, like the comparative “more,” refer to comparison by numeric (“more dots”) or nonnumeric dimensions (“more goo”). In the present work, we ask two questions. First, when do children begin to understand the word “more” as used to compare nonnumeric substances and collections of discrete objects? Second, what is the underlying psychophysical character of the cognitive representations children utilize to verify such sentences? We find that children can understand and verify sentences including “more goo” and “more dots” at around 3.3 years (younger than some previous studies have suggested) and that children employ the Approximate Number System and an Approximate Area System in verification. These systems share a common underlying format (i.e., Gaussian representations with scalar variability). The similarity in the age of onset we find for understanding “more” in number and area contexts, along with the similar psychophysical character we demonstrate for these underlying cognitive representations, suggests that children may learn “more” as a domain-neutral comparative term.
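
The verification model at issue can be sketched in a few lines (illustrative Weber fraction and trial counts, not the paper's fitted values): each numerosity is encoded as a Gaussian whose standard deviation grows with its mean (scalar variability), "more dots" is judged true when the noisy estimate of one set exceeds that of the other, and accuracy therefore climbs with the ratio between the sets.

    import random

    def ans_estimate(n, weber_fraction=0.25):
        # noisy magnitude estimate: mean n, SD proportional to n (scalar variability)
        return random.gauss(n, weber_fraction * n)

    def proportion_correct(n_larger, n_smaller, trials=10_000, w=0.25):
        correct = sum(ans_estimate(n_larger, w) > ans_estimate(n_smaller, w)
                      for _ in range(trials))
        return correct / trials

    # accuracy rises with the ratio between the two sets, the signature of the ANS
    for pair in [(10, 9), (12, 9), (15, 9), (18, 9)]:
        print(pair, round(proportion_correct(*pair), 2))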


Input and Intake in Language Acquisition

Acquiring a grammar involves representing the environment and making statistical inferences within a space of linguistic hypotheses. Annie illustrates with experimental, computational and corpus studies of children acquiring Tsez, Norwegian and English.

Linguistics

Non-ARHU Contributor(s): Ann C. Gagliardi
This dissertation presents an approach for a productive way forward in the study of language acquisition, sealing the rift between claims of an innate linguistic hypothesis space and powerful domain-general statistical inference. This approach breaks language acquisition into its component parts, distinguishing the input in the environment from the intake encoded by the learner, and looking at how a statistical inference mechanism, coupled with a well-defined linguistic hypothesis space, could lead a learner to infer the grammar of their native language. This work draws on experimental work, corpus analyses and computational models of Tsez, Norwegian and English children acquiring word meanings, word classes and syntax to highlight the need for an appropriate encoding of the linguistic input in order to solve any given problem in language acquisition.
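
The architecture argued for can be caricatured in a few lines (the hypotheses, probabilities, and encoding filter below are invented for illustration, not drawn from the dissertation): statistical inference operates over the intake, the portion of the input the learner actually encodes, within a constrained hypothesis space.

    def posterior(hypotheses, priors, likelihood, intake):
        # Bayesian update over a fixed hypothesis space, given the encoded data
        unnormalized = {h: priors[h] for h in hypotheses}
        for datum in intake:
            for h in hypotheses:
                unnormalized[h] *= likelihood(datum, h)
        z = sum(unnormalized.values())
        return {h: p / z for h, p in unnormalized.items()}

    hypotheses = ["noun-class-A", "noun-class-B"]
    priors = {"noun-class-A": 0.5, "noun-class-B": 0.5}

    def likelihood(marker, h):
        # probability of observing an agreement marker under each hypothesized class
        table = {"noun-class-A": {"marker-a": 0.9, "marker-b": 0.1},
                 "noun-class-B": {"marker-a": 0.2, "marker-b": 0.8}}
        return table[h][marker]

    input_data = ["marker-a", "marker-b", "marker-a", "marker-a"]
    intake = [d for d in input_data if d != "marker-b"]  # a cue the learner fails to encode

    print(posterior(hypotheses, priors, likelihood, intake))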

Derivational order in syntax: Evidence and architectural consequences

A précis of the evidence for left‐to‐right derivations in syntax, and how this relates to the nature of real‐time mechanisms for building linguistic structure.

Linguistics

Contributor(s): Colin Phillips
Non-ARHU Contributor(s): Shevaun Lewis
Publisher: Elsevier
Standard generative grammars describe language in terms that appear distant from considerations of everyday, real-time language processes. To some this is a critical flaw, while to others this is a clear virtue. One type of generative grammar defines a well-formed sentence as a static, structured representation that simultaneously satisfies all relevant constraints of the language, with no regard to how the representation is assembled (e.g., Sag, Wasow, & Bender, 2003). Another type of generative grammar defines a well-formed sentence as a derivation, or sequence of representations, that describes how the sentence is gradually assembled, often including various transformations that move words or phrases from one position to another in a structure. In the most popular current version of the derivational approach, derivations proceed 'upwards', starting from the most deeply embedded terminal elements in the sentence, which are often towards the right of a sentence (e.g., Chomsky, 1995; Carnie, 2006). Such derivations tend to proceed in a right-to-left order, which is probably the opposite of the order in which sentences are assembled in everyday tasks such as speaking and understanding. Since these theories make no claim to being accounts of such everyday processes, the discrepancy causes little concern among the theories' creators. Generative grammars are typically framed as theories of speakers' task-independent knowledge of their language, and these are understood to be distinct from theories of how specific communicative tasks might put that knowledge to use.

Set against this background are a number of recent proposals that various linguistic phenomena can be better understood in terms of derivations that incrementally assemble structures in a (roughly) left-to-right order. One can evaluate these proposals based simply on how well they capture the acceptability judgments that they aim to explain, i.e., standard conditions of 'descriptive adequacy'. But it is hard to avoid the question of whether it is mere coincidence that left-to-right derivations track the order in which sentences are spoken and understood. It is also natural to ask how left-to-right derivations impact the psychological commitments of grammatical theories. Are they procedural descriptions of how speakers put together sentences in real time (either in comprehension or in production)? Do they amount to a retreat from linguists' traditional agnosticism about 'performance mechanisms'? These are questions about what a grammatical theory is a theory of, and they are the proverbial elephant in the room in discussions of left-to-right derivations in syntax, although the issues have not been explored in much detail. Here we summarize the current state of some of the evidence for left-to-right derivations in syntax, and how this relates to a number of findings by our group and others on the nature of real-time structure building mechanisms. Some of these questions have been aired in previous work (e.g., Phillips 1996, 2004), but we have come to believe that the slogan from that earlier work ("the parser is the grammar") is misleading in a number of respects, and we offer an updated position here.
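
To make the contrast concrete (a toy illustration of ours, not the chapter's formalism), the two derivation orders yield very different sequences of intermediate structures for the same string:

    words = ["the", "dog", "chased", "the", "cat"]

    # bottom-up, roughly right-to-left: the most deeply embedded material is built first
    bottom_up_steps = [
        "[the cat]",
        "[chased [the cat]]",
        "[the dog]",
        "[[the dog] [chased [the cat]]]",
    ]

    # left-to-right: each step extends the structure built so far with the next word
    left_to_right_steps = []
    structure = ""
    for word in words:
        structure = word if not structure else f"[{structure} {word}]"
        left_to_right_steps.append(structure)

    print("bottom-up:     ", bottom_up_steps)
    print("left-to-right: ", left_to_right_steps)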


Without Specifiers: Phrase Structure and Events

Terje Lohndal argues both that verbs have no arguments, and that there is no distinction between complements and specifiers.

Linguistics

Non-ARHU Contributor(s): Terje Lohndal
This dissertation attempts to unify two reductionist hypotheses: that there is no relational difference between specifiers and complements, and that verbs do not have thematic arguments. I argue that these two hypotheses actually bear on each other and that we get a better theory if we pursue both of them. The thesis is centered around the following hypothesis: Each application of Spell-Out corresponds to a conjunct at logical form. In order to create such a system, it is necessary to provide a syntax that is designed such that each Spell-Out domain is mapped into a conjunct. This is done by eliminating the relational difference between specifiers and complements. The conjuncts are then conjoined into Neo-Davidsonian representations that constitute logical forms. The theory is argued to provide a transparent mapping from syntactic structures to logical forms, such that the syntax gives you a logical form where the verb does not have any thematic arguments. In essence, the thesis is therefore an investigation into the structure of verbs. This theory of Spell-Out raises a number of questions and it makes strong predictions about the structure of possible derivations. The thesis discusses a number of these: the nature of linearization and movement, left-branch extractions, serial verb constructions, among others. It is shown how the present theory can capture these phenomena, and sometimes in better ways than previous analyses. The thesis closes by discussing some more foundational issues related to transparency, the syntax-semantics interface, and the nature of basic semantic composition operations.
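
For illustration (the sentence and its analysis below are a standard textbook example, not taken from the dissertation), a Neo-Davidsonian logical form severs the verb from its arguments: the verb contributes only a predicate of events, and each participant enters through its own conjunct, the kind of conjunct that each Spell-Out domain is claimed to map onto.

    % "Brutus stabbed Caesar" as a conjunction of event predicates
    \exists e\, [\, \mathrm{stabbing}(e) \;\wedge\; \mathrm{Agent}(e, \mathrm{Brutus}) \;\wedge\; \mathrm{Theme}(e, \mathrm{Caesar}) \,]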

A test of the relation between working-memory capacity and syntactic island effects

Syntactic island effects are more likely to be due to grammatical constraints or grounded grammaticized constraints than to limited processing resources.

Linguistics

Contributor(s): Colin Phillips
Non-ARHU Contributor(s): Jon Sprouse, Matt Wagers
The source of syntactic island effects has been a topic of considerable debate within linguistics and psycholinguistics. Explanations fall into three basic categories: grammatical theories, which posit specific grammatical constraints that exclude extraction from islands; grounded theories, which posit grammaticized constraints that have arisen to adapt to constraints on learning or parsing; and reductionist theories, which analyze island effects as emergent consequences of non-grammatical constraints on the sentence parser, such as limited processing resources. In this article we present two studies designed to test a fundamental prediction of one of the most prominent reductionist theories: that the strength of island effects should vary across speakers as a function of individual differences in processing resources. We tested over three hundred native speakers of English on four different island-effect types (whether, complex NP, subject, and adjunct islands) using two different acceptability rating tasks (seven-point scale and magnitude estimation) and two different measures of working-memory capacity (serial recall and n-back). We find no evidence of a relationship between working-memory capacity and island effects using a variety of statistical analysis techniques, including resampling simulations. These results suggest that island effects are more likely to be due to grammatical constraints or grounded grammaticized constraints than to limited processing resources.
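
The prediction under test lends itself to a simple resampling sketch (the numbers below are simulated, not the paper's data): if island effects reflect limited processing resources, each participant's island effect size, a difference-in-differences over the factorial acceptability design, should correlate with that participant's working-memory score.

    import random
    from statistics import mean

    def pearson_r(xs, ys):
        mx, my = mean(xs), mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        var_x = sum((x - mx) ** 2 for x in xs)
        var_y = sum((y - my) ** 2 for y in ys)
        return cov / (var_x * var_y) ** 0.5

    # one simulated record per participant: working-memory score and island effect
    # size (a difference-in-differences, or DD, score); here the effect is present
    # but unrelated to working memory
    wm_scores = [random.gauss(0, 1) for _ in range(300)]
    dd_scores = [random.gauss(1.0, 0.5) for _ in range(300)]

    observed_r = pearson_r(wm_scores, dd_scores)

    # bootstrap resampling of participants to gauge the variability of the correlation
    resampled = []
    for _ in range(1000):
        idx = [random.randrange(len(wm_scores)) for _ in range(len(wm_scores))]
        resampled.append(pearson_r([wm_scores[i] for i in idx],
                                   [dd_scores[i] for i in idx]))

    resampled.sort()
    print("observed r:", round(observed_r, 3))
    print("95% bootstrap interval:", round(resampled[25], 3), round(resampled[975], 3))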

A Dilemma with Accounts of Right-node Raising

No current analysis of Right Node Raising is correct.

Linguistics

Non-ARHU Contributor(s): Bradley Larson
There is a dilemma in current studies of right-node raising (RNR): The main approaches to the construction make fundamentally contradictory predictions that account for overlapping sets of data points. In this paper I argue that no single current analysis can account for the range of data and argue against the possibility that the analyses work in concert to account for the data. That is, given that current analyses each account for some but not the entirety of the documented data, there are two logical possibilities: 1) None of the analyses are correct. 2) More than one analysis is correct in its limited purview and duties are shared such that all the data is accounted for. I argue for the former. Under the second option introduced above, RNR is derived either by means of one particular operation or a different one. That is, the term “right-node raising” is better seen as a surface-level description for a family of derivations: some stemming from an application of the first operation, the others via the second (as argued by Barros and Vicente (2010)). If this were the case it would be a sharp departure from the assumptions of most work in RNR and require critical investigation. When investigated further, there turns out to be no motivation to analyze RNR as being derived in two entirely separate ways. This being the case, the RNR dilemma remains.

Head Movement in the Bangla DP

A new analysis of the DP in Bangla, with special attention to its numeral classifiers.

Linguistics

Non-ARHU Contributor(s): Dustin Chacón
Bengali/Bangla is unusual among South Asian languages in that it uses numerical classifiers. In this paper, I propose a new analysis of the DP structure in Bangla motivated by data previously unaccounted for and typological concerns. Specifically, I propose that Bangla has DP-internal NP movement to Spec,DP to mark definiteness, that the numeral and classifier form separate heads in the syntax, and that there is noun to classifier movement when there is no overt classifier. I propose a feature for each of these phenomena, and attempt to explain the ungrammatical examples using principled reasons derived from this structure. Also, I give an analysis for the quantificationally approximate construction, in which the classifier appears on the left of the numeral. I claim that the model presented in this paper can account for these constructions, and that the differences found between “classifier-compatible” nouns and “classifier-less” nouns with regard to the quantificationally approximate structures follows naturally from my analysis.