William Idsardi

Professor, Linguistics
Member, Maryland Language Science Center
Program in Neuroscience and Cognitive Science
CLaME: Max Planck • NYU Center for Language, Music and Emotion
(301) 405-8376
idsardi@umd.edu
1401 A Marie Mount Hall
Research Expertise
Neurolinguistics
Phonology
Psycholinguistics
Publications
Underspecification in time
Abstracting away from linear order in phonology.
Contributor(s): William Idsardi
Substance-free phonology, or SFP (Reiss 2017), has renewed interest in the question of abstraction in phonology. Perhaps the most common form of abstraction through the absence of substance is underspecification, where some aspects of speech lack representation in memorized representations, within the phonology, or in the phonetic implementation (Archangeli 1988, Keating 1988, Lahiri and Reetz 2010, among many others). The fundamental basis for phonology is argued to be a mental model of speech events in time, following Raimy (2000) and Papillon (2020). Each event can have properties (one-place predicates that are true of the event), which include the usual phonological features, and also structural entities for extended events like moras and syllables. Features can be bound together in an event, yielding segment-like properties. Pairs of events can be ordered in time by the temporal-logic precedence relation, represented by ‘<’. Events, features and precedence form a directed multigraph structure, with edges in the graph interpreted as “maybe next”. Some infant bimodal speech perception results are examined using this framework, arguing for underspecification in time in the developing phonological representations.
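The event-graph formalism lends itself to a small illustration. The sketch below is a minimal rendering under my own assumptions, not the paper's implementation: EventGraph, add_event and add_precedence are invented names, and the feature sets are toy values. Events carry one-place predicates, and precedence edges read as “maybe next” can leave temporal order underspecified or loop back.

```python
# Minimal sketch of a speech-event multigraph (illustrative names and
# features; not the paper's implementation).
from collections import defaultdict

class EventGraph:
    def __init__(self):
        self.features = {}             # event id -> set of one-place predicates
        self.succ = defaultdict(list)  # event id -> successors; a list,
                                       # not a set, since this is a multigraph

    def add_event(self, eid, feats):
        self.features[eid] = set(feats)

    def add_precedence(self, a, b):
        # a < b: b is a possible next event after a ("maybe next")
        self.succ[a].append(b)

# A toy form: the vowel event is featurally underspecified (no place
# feature), and a back edge from the final event makes repetition
# possible, as in Raimy-style looped precedence structures.
g = EventGraph()
g.add_event("e1", {"labial", "stop", "voiced"})
g.add_event("e2", {"vocalic"})              # underspecified vowel
g.add_event("e3", {"dorsal", "stop"})
g.add_precedence("e1", "e2")
g.add_precedence("e2", "e3")
g.add_precedence("e3", "e2")                # loop: "maybe next" back to the vowel
```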
Comment on “Nonadjacent dependency processing in monkeys, apes, and humans”
Auditory pattern recognition in nonhuman animals shares important characteristics with human phonology, but not human syntax.
Contributor(s): William Idsardi
Non-ARHU Contributor(s): Jonathan Rawski (Stony Brook), Jeffrey Heinz (Stony Brook)
We comment on the technical interpretation of the study of Watson et al. and caution against their conclusion that the behavioral evidence in their experiments points to nonhuman animals’ ability to learn syntactic dependencies, because their results are also consistent with the learning of phonological dependencies in human languages.
Social inference may guide early lexical learning
Assessment of knowledgeability and group membership influences infant word learning.
Contributor(s): Naomi Feldman, William Idsardi
Non-ARHU Contributor(s): Alayo Tripp *19
We incorporate social reasoning about groups of informants into a model of word learning, and show that the model accounts for infant looking behavior in tasks of both word learning and recognition. Simulation 1 models an experiment where 16-month-old infants saw familiar objects labeled either correctly or incorrectly, by either adults or audio talkers. Simulation 2 reinterprets puzzling data from the Switch task, an audiovisual habituation procedure wherein infants are tested on familiarized associations between novel objects and labels. Eight-month-olds outperform 14-month-olds on the Switch task when required to distinguish labels that are minimal pairs (e.g., “buk” and “puk”), but 14-month-olds' performance is improved by habituation stimuli featuring multiple talkers. Our modeling results support the hypothesis that beliefs about knowledgeability and group membership guide infant looking behavior in both tasks. These results show that social and linguistic development interact in non-trivial ways, and that social categorization findings in developmental psychology could have substantial implications for understanding linguistic development in realistic settings where talkers vary according to observable features correlated with social groupings, including linguistic, ethnic, and gendered groups.
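As a rough illustration of the social-weighting idea, here is a toy Bayesian sketch under my own assumptions, not the authors' actual model: a learner tracks how likely each informant is to be knowledgeable and discounts that informant's labels accordingly. The function name and all probabilities are invented for illustration.

```python
# Toy sketch: belief updating about an informant's knowledgeability
# (illustrative values; not the published model).

def update_knowledgeability(prior, label_correct,
                            p_correct_if_knowledgeable=0.95,
                            p_correct_if_not=0.5):
    """Posterior P(knowledgeable | one observed labeling event)."""
    like_k = p_correct_if_knowledgeable if label_correct else 1 - p_correct_if_knowledgeable
    like_n = p_correct_if_not if label_correct else 1 - p_correct_if_not
    return prior * like_k / (prior * like_k + (1 - prior) * like_n)

# An informant who mislabels familiar objects quickly loses credibility,
# so their labels for novel objects carry less weight in word learning.
belief = 0.8                        # prior: adult talkers assumed knowledgeable
for correct in [False, False, True]:
    belief = update_knowledgeability(belief, correct)
    print(round(belief, 3))
```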
Computational phonology today
Bill Idsardi and Jeff Heinz highlight important aspects of today's computational phonology.
Contributor(s): William Idsardi
Phonemes: Lexical access and beyond
A defense of the central role of phonemes in phonology, contrary to the current mainstream.
Contributor(s): William Idsardi
Non-ARHU Contributor(s): Nina Kazanina, Jeffrey S. Bowers
Categorical effects in fricative perception are reflected in cortical source information
Phonetic discrimination is affected by phonological category more for consonants than for vowels. But what about fricatives in particular? Sol Lago and collaborators provide evidence from event-related fields (ERFs) recorded with MEG.
Contributor(s): William Idsardi
Non-ARHU Contributor(s): Sol Lago, Mathias Scharinger, Yakov Kronrod
What Complexity Differences Reveal About Domains in Language
Do humans learn phonology differently than they do syntax? Yes, argue Bill Idsardi and Jeff Heinz: this is the best explanation for why all phonological patterns, but not all syntactic patterns, belong to the regular region of the Chomsky Hierarchy.
Contributor(s): William Idsardi
Non-ARHU Contributor(s): Jeffrey Heinz
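The regular/non-regular contrast the argument turns on can be made concrete with a small example (the pattern choice is mine, not one from the paper): sibilant harmony can be checked with finite memory, whereas a syntax-like center-embedded dependency such as aⁿbⁿ requires an unbounded counter, which no finite-state device provides.

```python
# Illustrative only: a finite-state check for sibilant harmony.
SIBILANT_CLASS = {"s": 0, "sh": 1}   # 0 = [+anterior], 1 = [-anterior]

def harmonic(word):
    """True iff all sibilants in the word agree in anteriority."""
    seen = None                      # the only memory: one sibilant class
    for seg in word:
        if seg in SIBILANT_CLASS:
            if seen is not None and SIBILANT_CLASS[seg] != seen:
                return False
            seen = SIBILANT_CLASS[seg]
    return True

print(harmonic(["s", "o", "k", "s"]))    # True: s...s agree
print(harmonic(["s", "o", "k", "sh"]))   # False: s...sh disagree

# By contrast, verifying a nested dependency a^n b^n requires counting
# without bound, placing it outside the regular region.
```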
A single stage approach to learning phonological categories: Insights from Inuktitut
Much research presumes that we acquire phonetic categories before abstracting phonological categories. Ewan Dunbar argues that this two-step progression is unnecessary, with a Bayesian model for the acquisition of Inuktitut vowels.
Contributor(s): William Idsardi
Non-ARHU Contributor(s): Brian W. Dillon, Ewan Dunbar
Sentence and Word Complexity
Do we learn different kinds of linguistic structure differently?
Contributor(s): William Idsardi
Non-ARHU Contributor(s): Jeffrey Heinz
A Comprehensive Three-dimensional Cortical Map of Vowel Space
Postdoc Mathias Scharinger and collaborators use the magnetic N1 (M100) to map the entire vowel space of Turkish onto cortical locations in the brain. They find two distinct tonotopic maps, one for front vowels and one for back.
Contributor(s): William Idsardi
Non-ARHU Contributor(s): Mathias Scharinger, Samantha Poe
You had me at "Hello": Rapid extraction of dialect information from spoken words
MEG studies show that we detect acoustic features of dialect speaker-independently, pre-attentively and categorically, within 100 milliseconds.
Contributor(s): William Idsardi
Non-ARHU Contributor(s): Mathias Scharinger, Philip Monahan