Modeling the learning of the Person Case Constraint
Adam Liter and Naomi Feldman show that the Person Case Constraint can be learned from significantly less data if the constraint is represented in terms of feature bundles.
Many domains of linguistic research posit feature bundles as an explanation for various phenomena. Such hypotheses are often evaluated on their simplicity (or parsimony). We take a complementary approach. Specifically, we evaluate different hypotheses about the representation of person features in syntax on the basis of their implications for learning the Person Case Constraint (PCC). The PCC refers to a phenomenon where certain combinations of clitics (pronominal bound morphemes) are disallowed with ditransitive verbs. We compare a simple theory of the PCC, where person features are represented as atomic units, to a feature-based theory of the PCC, where person features are represented as feature bundles. We use Bayesian modeling to compare these theories, using data based on realistic proportions of clitic combinations from child-directed speech. We find that both theories can learn the target grammar given enough data, but that the feature-based theory requires significantly less data, suggesting that developmental trajectories could provide insight into syntactic representations in this domain.
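The intuition that a learner needs less data when its hypothesis space makes the target grammar more probable can be illustrated with a toy Bayesian comparison. This is a generic sketch of the size-principle logic, not the paper's actual model: the grammar sets, clitic-combination labels, and noise constant below are invented for illustration, and both "grammars" here simply differ in how many combinations they license.

```python
import math

# Hypothetical grammars: each licenses a set of clitic combinations.
# Labels like "3-3" (third-person dative + third-person accusative) are
# invented placeholders, not the paper's actual categories.
PERMISSIVE = {"3-3", "1-3", "2-3", "1-2"}   # licenses an unattested combination
RESTRICTIVE = {"3-3", "1-3", "2-3"}          # respects the PCC

def log_likelihood(data, grammar):
    """Assume observed combinations are drawn uniformly from the licensed set
    (the 'size principle'); unlicensed combinations get a tiny noise probability."""
    return sum(math.log(1 / len(grammar)) if d in grammar else math.log(1e-6)
               for d in data)

# PCC-consistent input, with proportions loosely mimicking a skewed corpus
data = ["3-3"] * 8 + ["1-3", "2-3"]

log_odds = log_likelihood(data, RESTRICTIVE) - log_likelihood(data, PERMISSIVE)
# log_odds > 0: the more restrictive grammar is favored, and its advantage
# grows linearly with the number of PCC-consistent observations
```

Each consistent datum adds log(4/3) to the restrictive grammar's log odds here, so the evidence accumulates with input size; a hypothesis space that concentrates probability on the target grammar in this way needs less data to converge.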
A unified account of categorical effects in phonetic perception
A statistical model that explains both the strong categorical effects in the perception of consonants and the very weak effects in the perception of vowels.
Infant-directed speech is consistent with teaching
Why do we speak differently to infants than to adults? To help answer this question, Naomi Feldman offers a formal theory of phonetic teaching and learning.
Why discourse affects speakers' choice of referring expressions
A probabilistic model of the choice between using a pronoun or some other referring expression.
A role for the developing lexicon in phonetic category acquisition
Bayesian models and artificial language learning tasks show that infants' acquisition of phonetic categories can be helpfully constrained by feedback from word segmentation.
Word-level information influences phonetic learning in adults and infants
How do infants learn the phonetic categories of their language? The words in which they occur can provide a useful cue, shows Naomi Feldman.
The influence of categories on perception: Explaining the perceptual magnet effect as optimal statistical inference
Naomi Feldman develops a Bayesian account of the perceptual magnet effect.
A variety of studies have demonstrated that organizing stimuli into categories can affect the way the stimuli are perceived. We explore the influence of categories on perception through one such phenomenon, the perceptual magnet effect, in which discriminability between vowels is reduced near prototypical vowel sounds. We present a Bayesian model to explain why this reduced discriminability might occur: It arises as a consequence of optimally solving the statistical problem of perception in noise. In the optimal solution to this problem, listeners’ perception is biased toward phonetic category means because they use knowledge of these categories to guide their inferences about speakers’ target productions. Simulations show that model predictions closely correspond to previously published human data, and novel experimental results provide evidence for the predicted link between perceptual warping and noise. The model unifies several previous accounts of the perceptual magnet effect and provides a framework for exploring categorical effects in other domains.
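The core computation can be sketched in a few lines. In this minimal illustration, categories are Gaussian with equal priors, and the perceived sound is the expected target production given the noisy stimulus: a mixture, weighted by category responsibility, of per-category posterior means that are shrunk toward the category mean. The specific means, variances, and noise level below are invented for illustration, not values from the paper.

```python
import math

def perceive(S, cats, noise_var):
    """Optimal estimate of the speaker's target production given noisy stimulus S.
    cats: list of (mean, variance) Gaussian phonetic categories, equal priors."""
    # Responsibility of each category for S: Gaussian density with the
    # category and noise variances summed (target is unobserved)
    weights = []
    for mu, var in cats:
        total = var + noise_var
        w = math.exp(-(S - mu) ** 2 / (2 * total)) / math.sqrt(2 * math.pi * total)
        weights.append(w)
    Z = sum(weights)
    # Mixture of per-category posterior means: each pulls S toward that
    # category's mean in proportion to the noise variance
    est = 0.0
    for (mu, var), w in zip(cats, weights):
        post_mean = (var * S + noise_var * mu) / (var + noise_var)
        est += (w / Z) * post_mean
    return est

# Illustrative two-vowel setup: stimuli near a prototype are pulled toward it,
# compressing perceptual distances there (the magnet effect)
cats = [(-1.0, 0.25), (1.0, 0.25)]
perceived = perceive(0.9, cats, noise_var=0.25)  # pulled toward the prototype at 1.0
```

With a single category the shrinkage is uniform; the nonuniform warping characteristic of the magnet effect emerges from the mixture, because category responsibilities change rapidly near boundaries and slowly near prototypes.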