
Research

Research at our top-ranked department spans syntax, semantics, phonology, language acquisition, computational linguistics, psycholinguistics and neurolinguistics. 

Connections between our core competencies are strong, with theoretical, experimental and computational work typically pursued in tandem.

A network of collaboration at all levels sustains a research climate that is both vigorous and friendly. Here new ideas develop in conversation, stimulated by the steady activity of our labs and research groups, frequent student meetings with faculty, regular talks by local and invited scholars, and collaborations with the broader University of Maryland language science community, the largest and most integrated language science research community in North America.


Derivational order in syntax: Evidence and architectural consequences

A précis of the evidence for left‐to‐right derivations in syntax, and how this relates to the nature of real‐time mechanisms for building linguistic structure.

Linguistics

Contributor(s): Colin Phillips
Non-ARHU Contributor(s): Shevaun Lewis
Publisher: Elsevier
Standard generative grammars describe language in terms that appear distant from considerations of everyday, real-time language processes. To some this is a critical flaw, while to others this is a clear virtue. One type of generative grammar defines a well-formed sentence as a static, structured representation that simultaneously satisfies all relevant constraints of the language, with no regard to how the representation is assembled (e.g., Sag, Wasow, & Bender, 2003). Another type of generative grammar defines a well-formed sentence as a derivation, or sequence of representations, that describes how the sentence is gradually assembled, often including various transformations that move words or phrases from one position to another in a structure. In the most popular current version of the derivational approach, derivations proceed 'upwards', starting from the most deeply embedded terminal elements in the sentence, which are often towards the right of a sentence (e.g., Chomsky, 1995; Carnie, 2006). Such derivations tend to proceed in a right-to-left order, which is probably the opposite of the order in which sentences are assembled in everyday tasks such as speaking and understanding. Since these theories make no claim to being accounts of such everyday processes, the discrepancy causes little concern among the theories' creators. Generative grammars are typically framed as theories of speakers' task-independent knowledge of their language, and these are understood to be distinct from theories of how specific communicative tasks might put that knowledge to use.

Set against this background are a number of recent proposals that various linguistic phenomena can be better understood in terms of derivations that incrementally assemble structures in a (roughly) left-to-right order. One can evaluate these proposals based simply on how well they capture the acceptability judgments that they aim to explain, i.e., standard conditions of 'descriptive adequacy'. But it is hard to avoid the question of whether it is mere coincidence that left-to-right derivations track the order in which sentences are spoken and understood. It is also natural to ask how left-to-right derivations impact the psychological commitments of grammatical theories. Are they procedural descriptions of how speakers put together sentences in real time (either in comprehension or in production)? Do they amount to a retreat from linguists' traditional agnosticism about 'performance mechanisms'? These are questions about what a grammatical theory is a theory of, and they are the proverbial elephant in the room in discussions of left-to-right derivations in syntax, although the issues have not been explored in much detail. Here we summarize the current state of some of the evidence for left-to-right derivations in syntax, and how this relates to a number of findings by our group and others on the nature of real-time structure building mechanisms. Some of these questions have been aired in previous work (e.g., Phillips 1996, 2004), but we have come to believe that the slogan from that earlier work ("the parser is the grammar") is misleading in a number of respects, and we offer an updated position here.
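To make the contrast concrete, here is a toy illustration of the two derivational orders (our schematic, not an example from the paper) for the sentence "The dog chased cats":

    Bottom-up (roughly right-to-left):
      1. Merge(chased, cats)             -> [VP chased cats]
      2. Merge(the, dog)                 -> [DP the dog]
      3. Merge([DP the dog], [VP ...])   -> [S [DP the dog] [VP chased cats]]
      The most deeply embedded material (here, the object) is assembled first.

    Left-to-right (incremental):
      1. [DP the dog]
      2. [S [DP the dog] [VP chased ...]]
      3. [S [DP the dog] [VP chased cats]]
      Structure grows in the order in which the words are spoken or heard.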


Input and Intake in Language Acquisition

Acquiring a grammar involves representing the environment and making statistical inferences within a space of linguistic hypotheses. Annie illustrates with experimental, computational and corpus studies of children acquiring Tsez, Norwegian and English.

Linguistics

Non-ARHU Contributor(s): Ann C. Gagliardi
This dissertation presents an approach for a productive way forward in the study of language acquisition, bridging the rift between claims of an innate linguistic hypothesis space and powerful domain-general statistical inference. The approach breaks language acquisition into its component parts, distinguishing the input in the environment from the intake encoded by the learner, and looking at how a statistical inference mechanism, coupled with a well-defined linguistic hypothesis space, could lead a learner to infer the grammar of their native language. This work draws on experiments, corpus analyses and computational models of Tsez, Norwegian and English children acquiring word meanings, word classes and syntax to highlight the need for an appropriate encoding of the linguistic input in order to solve any given problem in language acquisition.
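As a minimal sketch of this logic (all names and numbers below are hypothetical illustrations we supply, not the dissertation's models), one can separate input from intake and then run Bayesian inference over a toy two-grammar hypothesis space:

    # Toy sketch: input vs. intake, plus Bayesian inference over grammars.
    from math import prod

    # Input: utterances available in the environment (some are noise).
    raw_input_ = ["noun-pl", "noun-sg", "noun-pl", "noise", "noun-pl"]

    # Intake: the learner's encoding of that input; here it merely filters
    # noise, but a richer encoder could re-represent or mis-parse forms.
    intake = [u for u in raw_input_ if u != "noise"]

    # Hypothesis space: two toy grammars, each assigning likelihoods.
    grammars = {
        "G1: plural marking": {"noun-pl": 0.7, "noun-sg": 0.3},
        "G2: no plural marking": {"noun-pl": 0.2, "noun-sg": 0.8},
    }
    prior = {g: 0.5 for g in grammars}

    # Bayesian update: posterior proportional to prior times likelihood.
    unnorm = {g: prior[g] * prod(like[u] for u in intake)
              for g, like in grammars.items()}
    total = sum(unnorm.values())
    posterior = {g: p / total for g, p in unnorm.items()}
    print(posterior)  # mostly plural-marked intake favors G1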

Without Specifiers: Phrase Structure and Events

Terje Lohndal argues both that verbs have no arguments, and that there is no distinction between complements and specifiers.

Linguistics

Non-ARHU Contributor(s): Terje Lohndal
This dissertation attempts to unify two reductionist hypotheses: that there is no relational difference between specifiers and complements, and that verbs do not have thematic arguments. I argue that these two hypotheses actually bear on each other and that we get a better theory if we pursue both of them. The thesis is centered around the following hypothesis: Each application of Spell-Out corresponds to a conjunct at logical form. In order to create such a system, it is necessary to provide a syntax that is designed such that each Spell-Out domain is mapped into a conjunct. This is done by eliminating the relational difference between specifiers and complements. The conjuncts are then conjoined into Neo-Davidsonian representations that constitute logical forms. The theory is argued to provide a transparent mapping from syntactic structures to logical forms, such that the syntax gives you a logical form where the verb does not have any thematic arguments. In essence, the thesis is therefore an investigation into the structure of verbs. This theory of Spell-Out raises a number of questions and it makes strong predictions about the structure of possible derivations. The thesis discusses a number of these: the nature of linearization and movement, left-branch extractions, serial verb constructions, among others. It is shown how the present theory can capture these phenomena, and sometimes in better ways than previous analyses. The thesis closes by discussing some more foundational issues related to transparency, the syntax-semantics interface, and the nature of basic semantic composition operations.
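For concreteness, here is the kind of Neo-Davidsonian logical form at issue, with a standard textbook example we supply: for "Brutus stabbed Caesar", each Spell-Out domain contributes one conjunct, and the verb itself contributes only an event predicate, with no thematic arguments:

    \exists e\, [\, \mathrm{stab}(e) \wedge \mathrm{Agent}(e, \mathrm{Brutus}) \wedge \mathrm{Theme}(e, \mathrm{Caesar}) \,]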

A test of the relation between working-memory capacity and syntactic island effects

Syntactic island effects are more likely to be due to grammatical constraints or grounded grammaticized constraints than to limited processing resources.

Linguistics

Contributor(s): Colin Phillips
Non-ARHU Contributor(s): Jon Sprouse, Matt Wagers
The source of syntactic island effects has been a topic of considerable debate within linguistics and psycholinguistics. Explanations fall into three basic categories: grammatical theories, which posit specific grammatical constraints that exclude extraction from islands; grounded theories, which posit grammaticized constraints that have arisen to adapt to constraints on learning or parsing; and reductionist theories, which analyze island effects as emergent consequences of non-grammatical constraints on the sentence parser, such as limited processing resources. In this article we present two studies designed to test a fundamental prediction of one of the most prominent reductionist theories: that the strength of island effects should vary across speakers as a function of individual differences in processing resources. We tested over three hundred native speakers of English on four different island-effect types (whether, complex NP, subject, and adjunct islands) using two different acceptability rating tasks (seven-point scale and magnitude estimation) and two different measures of working-memory capacity (serial recall and n-back). We find no evidence of a relationship between working-memory capacity and island effects using a variety of statistical analysis techniques, including resampling simulations. These results suggest that island effects are more likely to be due to grammatical constraints or grounded grammaticized constraints than to limited processing resources.
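Here is a minimal sketch of a resampling test in this spirit (simulated data and hypothetical measures, not the authors' code or results):

    # Permutation test: is the correlation between working-memory scores
    # and island-effect sizes larger than expected under the null?
    import random

    random.seed(0)
    n = 300  # roughly the scale of the reported sample

    # Hypothetical per-participant measures (simulated, no built-in relation).
    wm = [random.gauss(0.0, 1.0) for _ in range(n)]
    effect = [random.gauss(1.0, 0.5) for _ in range(n)]

    def pearson_r(x, y):
        mx, my = sum(x) / len(x), sum(y) / len(y)
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sum((a - mx) ** 2 for a in x) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (sx * sy)

    observed = pearson_r(wm, effect)

    # Shuffle WM scores to break the pairing and build a null distribution.
    null_rs = []
    for _ in range(10_000):
        shuffled = wm[:]
        random.shuffle(shuffled)
        null_rs.append(pearson_r(shuffled, effect))

    p = sum(abs(r) >= abs(observed) for r in null_rs) / len(null_rs)
    print(f"r = {observed:.3f}, permutation p = {p:.3f}")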

A Dilemma with Accounts of Right-node Raising

No current analysis of Right Node Raising is correct.

Linguistics

Non-ARHU Contributor(s): Bradley Larson
There is a dilemma in current studies of right-node raising (RNR; e.g., "Josh cooked, and Jamie ate, the beans"): the main approaches to the construction make fundamentally contradictory predictions that account for overlapping sets of data points. In this paper I argue that no single current analysis can account for the range of data, and I argue against the possibility that the analyses work in concert to account for the data. That is, given that current analyses each account for some but not all of the documented data, there are two logical possibilities: 1) none of the analyses is correct; 2) more than one analysis is correct within its limited purview, and duties are shared such that all the data are accounted for. I argue for the former. Under the second option, a given instance of RNR would be derived by means of either one particular operation or a different one. That is, the term "right-node raising" would be better seen as a surface-level description of a family of derivations: some stemming from an application of the first operation, the others arising via the second (as argued by Barros and Vicente (2010)). If this were the case, it would be a sharp departure from the assumptions of most work on RNR and would require critical investigation. When investigated further, there turns out to be no motivation for analyzing RNR as being derived in two entirely separate ways. This being the case, the RNR dilemma remains.

Head Movement in the Bangla DP

A new analysis of the DP in Bangla, with special attention to its numeral classifiers.

Linguistics

Non-ARHU Contributor(s): Dustin Chacón
Bengali/Bangla is unusual among South Asian languages in that it uses numeral classifiers. In this paper, I propose a new analysis of the DP structure in Bangla motivated by previously unaccounted-for data and by typological concerns. Specifically, I propose that Bangla has DP-internal NP movement to Spec,DP to mark definiteness, that the numeral and classifier form separate heads in the syntax, and that there is noun-to-classifier movement when there is no overt classifier. I propose a feature for each of these phenomena, and attempt to explain the ungrammatical examples using principled reasons derived from this structure. I also give an analysis of the quantificationally approximate construction, in which the classifier appears to the left of the numeral. I claim that the model presented in this paper can account for these constructions, and that the differences found between "classifier-compatible" nouns and "classifier-less" nouns with regard to the quantificationally approximate structures follow naturally from my analysis.
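Schematically (our rendering of the proposal, with approximate glosses), the definite order follows from NP movement to Spec,DP:

    Indefinite: du-ʈo boi 'two books'
      [DP D [NumP du [ClP ʈo [NP boi ]]]]
    Definite: boi du-ʈo 'the two books'
      [DP [NP boi]_i D [NumP du [ClP ʈo t_i ]]]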

Height-Relative Determination of (Non-Root) Modal Flavor: Evidence from Hindi

The Hindi future marker "gaa" has a variety of interpretations. Dave Kush shows how these correspond to different syntactic contexts.

Linguistics

Non-ARHU Contributor(s): Dave Kush
In this paper I pursue the idea that a modal's flavor is determined by its attachment height. The various interpretations of the Hindi future marker gaa, which is taken to be a modal, are discussed. The idea put forth is that modal flavor is indirectly constrained by the semantic type of the modal's prejacent instead of being solely determined via contextual assignment. Modal Bases are re-envisioned as comprising different types of alternatives (worlds, world-time pairs, etc.), rather than just sets of worlds determined by different accessibility relations. The correlation between modal flavor and attachment height falls out as a consequence of the different types of alternatives Modal Bases make available for semantic computation.
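One way to render the proposal schematically (our simplified sketch, not the paper's formalism) is to let the type of the prejacent select the kind of alternatives the modal quantifies over:

    \text{High attachment: } [\![\text{gaa}]\!](p) = \forall w' \in MB(w) : p(w')
    \text{Low attachment: }  [\![\text{gaa}]\!](P) = \forall \langle w', t' \rangle \in MB(w, t) : P(w')(t')

A high-attached gaa takes a fully tensed proposition and quantifies over worlds; a lower-attached gaa takes a temporally unsaturated prejacent and quantifies over world-time pairs.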

Seeing what you mean, mostly

How is perception of numerosity related to the meaning of words like "most"? Paul Pietroski, Jeff Lidz and collaborators explore the question experimentally.

Linguistics

Non-ARHU Contributor(s): Tim Hunter, Darko Odic, Justin Halberda
Publisher: Emerald
Idealizing, a speaker endorses or rejects a (declarative) sentence S in a situation s based on how she understands S and represents s. But relatively little is known about how speakers represent situations. Linguists can construct and test initial models of semantic competence, by supposing that sentences have representation-neutral truth conditions, which speakers represent somehow; cf. Marr's (1982) Level One description of a function computed, as opposed to a Level Two description of an algorithm that computes outputs given inputs. But this leaves interesting questions unsettled. One would like to find cases in which S can be held fixed, while modifying s in ways that have predictable effects on the nonlinguistic cognitive systems recruited to evaluate S. Extant work in perceptual psychology offers opportunities for eliciting judgments from speakers in highly controlled settings where something is known about the cognitive systems that speakers recruit when endorsing or rejecting a target sentence. In such settings, behavioral data can reveal aspects of how the human language system interfaces with other systems of cognition that are presumably shared with other species. As an illustration, we focus on the quantificational word “most” and how perception of numerosity is related to the meaning of “Most of the dots are blue,” in the hope that studies of other perceptual systems may provide analogous opportunities for investigating how words are related to prelinguistic representations.
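For example, two truth-conditionally equivalent specifications of the target sentence's condition, both standard in this literature, can correspond to different verification procedures:

    |\mathrm{Dot} \cap \mathrm{Blue}| > |\mathrm{Dot} \setminus \mathrm{Blue}|
    \quad\Longleftrightarrow\quad
    |\mathrm{Dot} \cap \mathrm{Blue}| > |\mathrm{Dot}| - |\mathrm{Dot} \cap \mathrm{Blue}|

The left side compares the blue dots directly with the non-blue dots; the right side subtracts the blue dots from the total. Controlled perceptual tasks can help reveal which procedure speakers recruit when evaluating "most".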

The adicities of thematic separation

A syntax suited to a neo-Davidsonian semantics, where each dependent is interpreted as a conjunct.

Linguistics

Non-ARHU Contributor(s): Terje Lohndal
Dates:
This paper discusses whether verbs have thematic arguments or only an event variable. It presents some evidence in favor of the Neo-Davidsonian position that verbs have only an event variable. Based on this evidence, the paper develops a transparent mapping hypothesis from syntax to logical form, where each Spell-Out domain corresponds to a conjunct at logical form. The paper closes by discussing the nature of compositionality for a Conjunctivist semantics.
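In its simplest form (our rendering), the core Conjunctivist composition principle treats combination as predicate conjunction over a shared event variable, so each dependent contributes a conjunct:

    [\![\alpha\ \beta]\!] = \lambda e\,.\; [\![\alpha]\!](e) \wedge [\![\beta]\!](e)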

Interrogatives, Instructions, and I-languages: An I-Semantics for Questions

An internalist semantics for interrogative clauses, from Terje Lohndal and Paul Pietroski.

Linguistics

Contributor(s): Paul Pietroski
Non-ARHU Contributor(s): Terje Lohndal
It is often said that the meaning of an interrogative sentence is a set of answers. This raises questions about how the meaning of an interrogative is compositionally determined, especially if one adopts an I-language perspective. By contrast, we argue that I-languages generate semantic instructions (SEMs) for how to assemble concepts of a special sort and then prepare these concepts for various uses - e.g., in declaring, querying, or assembling concepts of still further complexity. We connect this abstract conception of meaning to a specific (minimalist) conception of complementizer phrase edges, with special attention to wh-questions and their relative clause counterparts. The proposed syntax and semantics illustrates a more general conception of edges and their relation to the so-called duality of semantics.