William Idsardi


Research Expertise

Neurolinguistics
Phonology
Psycholinguistics

Publications

Underspecification in time

Abstracting away from linear order in phonology.

Linguistics

Contributor(s): William Idsardi

Substance-free phonology or SFP (Reiss 2017) has renewed interest in the question of abstraction in phonology. Perhaps the most common form of abstraction through the absence of substance is underspecification, where some aspects of speech are left unrepresented in memorized forms, within the phonology, or in the phonetic implementation (Archangeli 1988, Keating 1988, Lahiri and Reetz 2010, among many others). The fundamental basis for phonology is argued to be a mental model of speech events in time, following Raimy (2000) and Papillon (2020). Each event can have properties (one-place predicates that are true of the event), which include the usual phonological features, and also structural entities for extended events such as moras and syllables. Features can be bound together in an event, yielding segment-like properties. Pairs of events can be ordered in time by the temporal-logic precedence relation, represented by ‘<’. Events, features and precedence form a directed multigraph structure, with edges in the graph interpreted as “maybe next”. Some infant bimodal speech perception results are examined using this framework, arguing for underspecification in time in the developing phonological representations.
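
To make the representation concrete, here is a minimal sketch (not the paper's implementation) of the event-based encoding described above: events carry one-place feature predicates, and precedence ('<') edges form a directed multigraph whose edges are read as "maybe next". All class, event, and feature names are illustrative assumptions.

```python
from collections import defaultdict

class SpeechEventGraph:
    def __init__(self):
        self.features = {}               # event id -> set of one-place feature predicates
        self.next = defaultdict(list)    # event id -> possible next events (multigraph)

    def add_event(self, eid, feats):
        self.features[eid] = set(feats)

    def add_precedence(self, earlier, later):
        self.next[earlier].append(later)  # repeated pairs and multiple successors are allowed

    def successors(self, eid):
        return self.next[eid]

# Toy fragment: the relative order of two events is left underspecified,
# since each is a possible "next" of the other.
g = SpeechEventGraph()
g.add_event("e1", {"+consonantal", "+labial"})
g.add_event("e2", {"+syllabic", "+back"})
g.add_precedence("e1", "e2")
g.add_precedence("e2", "e1")
print(g.successors("e1"), g.successors("e2"))   # ['e2'] ['e1']
```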


Comment on “Nonadjacent dependency processing in monkeys, apes, and humans”

Auditory pattern recognition in nonhuman animals shares important characteristics with human phonology, but not human syntax.

Linguistics

Contributor(s): William Idsardi
Non-ARHU Contributor(s): Jonathan Rawski (Stony Brook), Jeffrey Heinz (Stony Brook)

We comment on the technical interpretation of the study of Watson et al. and caution against their conclusion that the behavioral evidence in their experiments points to nonhuman animals’ ability to learn syntactic dependencies, because their results are also consistent with the learning of phonological dependencies in human languages.


Social inference may guide early lexical learning

Assessment of knowledgeability and group membership influences infant word learning.

Linguistics

Contributor(s): Naomi Feldman, William Idsardi
Non-ARHU Contributor(s): Alayo Tripp *19

We incorporate social reasoning about groups of informants into a model of word learning, and show that the model accounts for infant looking behavior in tasks of both word learning and recognition. Simulation 1 models an experiment where 16-month-old infants saw familiar objects labeled either correctly or incorrectly, by either adults or audio talkers. Simulation 2 reinterprets puzzling data from the Switch task, an audiovisual habituation procedure wherein infants are tested on familiarized associations between novel objects and labels. Eight-month-olds outperform 14-month-olds on the Switch task when required to distinguish labels that are minimal pairs (e.g., “buk” and “puk”), but 14-month-olds' performance is improved by habituation stimuli featuring multiple talkers. Our modeling results support the hypothesis that beliefs about knowledgeability and group membership guide infant looking behavior in both tasks. These results show that social and linguistic development interact in non-trivial ways, and that social categorization findings in developmental psychology could have substantial implications for understanding linguistic development in realistic settings where talkers vary according to observable features correlated with social groupings, including linguistic, ethnic, and gendered groups.
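
The core intuition can be illustrated with a minimal, hypothetical sketch (not the authors' model): the learner estimates each informant's knowledgeability from trials with familiar objects and weights that informant's labels for novel objects accordingly. All names and numbers below are illustrative assumptions.

```python
from collections import defaultdict

class SocialWordLearner:
    def __init__(self):
        self.counts = defaultdict(lambda: [1.0, 1.0])   # informant -> [correct + 1, incorrect + 1]
        self.association = defaultdict(float)            # (word, object) -> accumulated evidence

    def observe_familiar(self, informant, labeled_correctly):
        # familiar objects let the learner assess the informant's knowledgeability
        self.counts[informant][0 if labeled_correctly else 1] += 1.0

    def reliability(self, informant):
        correct, incorrect = self.counts[informant]
        return correct / (correct + incorrect)

    def observe_novel_label(self, informant, word, obj):
        # labels from more knowledgeable informants carry more weight
        self.association[(word, obj)] += self.reliability(informant)

learner = SocialWordLearner()
learner.observe_familiar("adult talker", labeled_correctly=True)
learner.observe_familiar("audio talker", labeled_correctly=False)
learner.observe_novel_label("adult talker", "buk", "novel object")
learner.observe_novel_label("audio talker", "puk", "novel object")
print(dict(learner.association))
```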


Computational phonology today

Bill Idsardi and Jeff Heinz highlight important aspects of today's computational phonology.

Linguistics

Contributor(s): William Idsardi
Broadly speaking, computational phonology encompasses a variety of techniques and goals (see Daland 2014 for a survey). In this introduction we would like to highlight three aspects of current work in computational phonology: data science and model comparison, modelling phonological phenomena using computational simulations, and characterising the computational nature of phonological patterning with theorems and proofs. Papers in this thematic issue illustrate all three of these trends, and sometimes more than one of them. The way we group them in this introduction is meant to highlight the similarities between them, and not to diminish the importance of their other contributions. As we discuss these areas, we also highlight important conceptual issues that we believe are often overlooked.


Phonemes: Lexical access and beyond

A defense of the central role of phonemes in phonology, contrary to the current mainstream.

Linguistics

Contributor(s): William Idsardi
Non-ARHU Contributor(s): Nina Kazanina, Jeffrey S. Bowers
Phonemes play a central role in traditional theories as units of speech perception and access codes to lexical representations. Phonemes have two essential properties: they are ‘segment-sized’ (the size of a consonant or vowel) and abstract (a single phoneme may have different acoustic realisations). Nevertheless, there is a long history of challenging the phoneme hypothesis, with some theorists arguing for differently sized phonological units (e.g. features or syllables) and others rejecting abstract codes in favour of representations that encode detailed acoustic properties of the stimulus. The phoneme hypothesis is the minority view today. We defend the phoneme hypothesis in two complementary ways. First, we show that rejection of phonemes is based on a flawed interpretation of empirical findings. For example, it is commonly argued that the failure to find acoustic invariances for phonemes rules out phonemes. However, the lack of invariance is only a problem on the assumption that speech perception is a bottom-up process. If learned sublexical codes are modified by top-down constraints (which they are), then this argument loses all force. Second, we provide strong positive evidence for phonemes on the basis of linguistic data. Almost all findings that are taken (incorrectly) as evidence against phonemes are based on psycholinguistic studies of single words. However, phonemes were first introduced in linguistics, and the best evidence for phonemes comes from linguistic analyses of complex word forms and sentences. In short, the rejection of phonemes is based on a false analysis and a too-narrow consideration of the relevant data.


Categorical effects in fricative perception are reflected in cortical source information

Phonetic discrimination is affected by phonological category more for consonants than it is for vowels. But what about fricatives in particular? Sol Lago and collaborators provide evidence from event-related fields (ERFs) recorded with MEG.

Linguistics

Contributor(s): William Idsardi
Non-ARHU Contributor(s): Sol Lago, Mathias Scharinger, Yakov Kronrod
Previous research in speech perception has shown that category information affects the discrimination of consonants to a greater extent than vowels. However, there has been little electrophysiological work on the perception of fricative sounds, which are informative for this contrast as they share properties with both consonants and vowels. In the current study we address the relative contribution of phonological and acoustic information to the perception of sibilant fricatives using event-related fields (ERFs) and dipole modeling with magnetoencephalography (MEG). We show that the field strength of neural responses peaking approximately 200 ms after sound onset co-varies with acoustic factors, while the cortical localization of earlier M100 responses suggests a stronger influence of phonological categories. We propose that neural equivalents of categorical perception for fricative sounds are best seen using localization measures, and that spectral cues are spatially coded in human cortex.


What Complexity Differences Reveal About Domains in Language

Do humans learn phonology differently than they do syntax? Yes, argue Bill Idsardi and Jeff Heinz, as this is the best explanation for why all phonological patterns, but not all syntactic patterns, belong to the regular region of the Chomsky Hierarchy.

Linguistics

Contributor(s): William Idsardi
Non-ARHU Contributor(s): Jeffrey Heinz
An important distinction between phonology and syntax has been overlooked. All phonological patterns belong to the regular region of the Chomsky Hierarchy, but not all syntactic patterns do. We argue that the hypothesis that humans employ distinct learning mechanisms for phonology and syntax currently offers the best explanation for this difference.
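
A toy contrast (not from the paper) illustrates the complexity difference: a harmony-like phonological pattern can be checked with a fixed, finite amount of memory, as a finite-state automaton would, whereas a nested dependency of the form aⁿbⁿ requires unbounded counting and so falls outside the regular region. The alphabet and pattern choices below are illustrative assumptions.

```python
def obeys_vowel_harmony(word, front="ie", back="ou"):
    """Regular: one bit of state (front vs. back) suffices, as in a finite-state automaton."""
    klass = None
    for ch in word:
        if ch in front:
            if klass == "back":
                return False
            klass = "front"
        elif ch in back:
            if klass == "front":
                return False
            klass = "back"
    return True

def is_anbn(s):
    """Not regular: recognizing a^n b^n requires unbounded counting."""
    n = len(s) // 2
    return s == "a" * n + "b" * n

print(obeys_vowel_harmony("kitep"), obeys_vowel_harmony("kotep"))  # True False
print(is_anbn("aabb"), is_anbn("aab"))                             # True False
```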


A single stage approach to learning phonological categories: Insights from Inuktitut

Much research presumes that we acquire phonetic categories before abstracting phonological categories. Ewan Dunbar argues that this two-step progression is unnecessary, with a Bayesian model for the acquisition of Inuktitut vowels.

Linguistics

Contributor(s): William Idsardi
Non-ARHU Contributor(s): Brian W. Dillon, Ewan Dunbar
We argue that there is an implicit view in psycholinguistics that phonological acquisition is a 'two-stage' process: phonetic categories are first acquired, and then subsequently mapped onto abstract phoneme categories. We present simulations that suggest two problems with this view: first, the learner might mistake the phoneme-level categories for phonetic-level categories and thus be unable to learn the relationships between phonetic-level categories; second, the learner might construct inaccurate phonetic-level representations that prevent it from finding regular relations among them. We suggest an alternative conception of the phonological acquisition problem that sidesteps this dilemma, and present a Bayesian model that acquires phonemic categories in a single stage. Using acoustic data from Inuktitut, we show that this model reliably converges on a set of phoneme-level categories and phonetic-level relations among subcategories, without making use of a lexicon.
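
For illustration, here is a hedged sketch of the kind of two-level generative model a single-stage learner can invert: each phoneme category has one or more phonetic subcategories (contextual variants), and each subcategory generates acoustic tokens. The categories and parameter values below are invented for illustration and are not the fitted Inuktitut model.

```python
import random

# phoneme -> list of (subcategory mean, sd) along a single "F2-like" acoustic dimension
PHONEMES = {
    "i": [(2300.0, 80.0), (2100.0, 80.0)],   # e.g., a plain and a retracted variant
    "u": [(900.0, 80.0), (1100.0, 80.0)],
    "a": [(1500.0, 100.0)],
}

def generate_token(rng=random):
    phoneme = rng.choice(list(PHONEMES))          # pick a phoneme category
    mean, sd = rng.choice(PHONEMES[phoneme])      # pick one of its phonetic subcategories
    return phoneme, rng.gauss(mean, sd)           # emit an acoustic token

for phoneme, value in (generate_token() for _ in range(5)):
    print(f"{phoneme}: {value:.1f}")
```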

Sentence and Word Complexity

Do we learn different kinds of linguistic structure differently?

Linguistics

Contributor(s): William Idsardi
Non-ARHU Contributor(s): Jeffrey Heinz
Our understanding of human learning is increasingly informed by findings from multiple fields—psychology, neuroscience, computer science, linguistics, and education. A convergence of insights is forging a “new science of learning” within cognitive science, which promises to play a key role in developing intelligent machines (1, 2). A long-standing fundamental issue in theories of human learning is whether there are specialized learning mechanisms for certain tasks or spheres of activity (domains). For example, is learning how to open a door (turning the handle before pulling) the same kind of “learning” as putting up and taking down scaffolding (where disassembly must be done in the reverse order of assembly)? Surprisingly, this issue plays out within the domain of human language.
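
The door/scaffolding contrast can be made concrete with a small, purely illustrative sketch (not from the article): opening a door is a fixed, finite routine, while taking scaffolding down reverses the order in which it went up, which is naturally modeled with a stack (last in, first out), a qualitatively different kind of memory.

```python
def open_door():
    # a fixed, finite routine: no memory of past steps is needed
    return ["turn handle", "pull door"]

def disassemble(assembly_steps):
    # scaffolding comes down in the reverse of the order it went up:
    # a stack (last in, first out) captures this directly
    stack = list(assembly_steps)
    return [f"remove {part}" for part in reversed(stack)]

print(open_door())
print(disassemble(["base", "frame", "platform", "rail"]))
```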

A Comprehensive Three-dimensional Cortical Map of Vowel Space

Postdoc Mathias Scharinger and collaborators use the magnetic N1 (M100) to map the entire vowel space of Turkish onto cortical locations in the brain. They find two distinct tonotopic maps, one for front vowels and one for back.

Linguistics

Contributor(s): William Idsardi
Non-ARHU Contributor(s): Mathias Scharinger, Samantha Poe
Mammalian cortex is known to contain various kinds of spatial encoding schemes for sensory information including retinotopic, somatosensory, and tonotopic maps. Tonotopic maps are especially interesting for human speech sound processing because they encode linguistically salient acoustic properties. In this study, we mapped the entire vowel space of a language (Turkish) onto cortical locations by using the magnetic N1 (M100), an auditory-evoked component that peaks approximately 100 msec after auditory stimulus onset. We found that dipole locations could be structured into two distinct maps, one for vowels produced with the tongue positioned toward the front of the mouth (front vowels) and one for vowels produced in the back of the mouth (back vowels). Furthermore, we found spatial gradients in lateral–medial, anterior–posterior, and inferior–superior dimensions that encoded the phonetic, categorical distinctions between all the vowels of Turkish. Statistical model comparisons of the dipole locations suggest that the spatial encoding scheme is not entirely based on acoustic bottom–up information but crucially involves featural–phonetic top–down modulation. Thus, multiple areas of excitation along the unidimensional basilar membrane are mapped into higher dimensional representations in auditory cortex.

You had me at "Hello": Rapid extraction of dialect information from spoken words

MEG studies show that we detect acoustic features of dialect speaker-independently, pre-attentively and categorically, within 100 milliseconds.

Linguistics

Contributor(s): William Idsardi
Non-ARHU Contributor(s): Mathias Scharinger, Philip Monahan
Research on the neuronal underpinnings of speaker identity recognition has identified voice-selective areas in the human brain, with evolutionary homologues in non-human primates who have comparable areas for processing species-specific calls. Most studies have focused on estimating the extent and location of these areas. In contrast, relatively few experiments have investigated the time-course of speaker identity processing, and in particular of dialect processing and identification, by electro- or neuromagnetic means. We show here that dialect extraction occurs speaker-independently, pre-attentively and categorically. We used Standard American English and African-American English exemplars of ‘Hello’ in a magnetoencephalographic (MEG) Mismatch Negativity (MMN) experiment. The MMN, an automatic change detection response of the brain, reflected dialect differences that were not entirely reducible to acoustic differences between the pronunciations of ‘Hello’. Source analyses of the M100, an auditory evoked response to the vowels, suggested additional processing in voice-selective areas whenever a dialect change was detected. These findings are relevant not only for the cognitive neuroscience of language, but also for the social sciences concerned with dialect and race perception.