
Terps virtually at BUCLD

November 01, 2021 Linguistics


On acquiring argument structure and speech categories.

November 4-7 sees the 46th Boston University Conference on Language Development, once again hosted virtually. This year Maryland fields four presentations, involving Craig Thorburn, Yu'an Yang, Jack Ying and postdoc Zara Harmon, with 2019 alums Nick Huang and Laurel Perkins, plus HESP's Jan Edwards, and our own Naomi Feldman, Valentine Hacquard, Ellen Lau, Jeff Lidz and Alexander Williams. Titles and abstracts are below.

  • Laurel Perkins, Yuanfan Ying, Alexander Williams and Jeffrey Lidz / Object wh-questions with unknown verbs are transitive for 20-month-olds (talk, Thursday 11:00-11:30am)
  • Nick Huang, Yu'an Yang, Valentine Hacquard and Jeffrey Lidz / Learning subcategorization properties of attitude verbs in wh in-situ languages (poster, Thursday 8:00-9:30pm)
  • Craig Thorburn, Ellen Lau and Naomi Feldman / A reinforcement learning approach to speech category acquisition (poster, Saturday 11:00am-12:30pm)
  • Libby Barak, Zara Harmon, Naomi Feldman, Jan Edwards and P. Shafto / A computational analysis of language delay and intervention (talk, Saturday 8:00-8:30pm)

Abstracts

Object wh-questions with unknown verbs are transitive for 20-month-olds

Prior work finds that 18-19-month-olds represent fronted wh-phrases as arguments with known verbs. Here, we show that 20-month-olds do the same when interpreting unknown verbs. In a task adapted from the Violation-of-Expectations paradigm, infants (19;0-22;0) see dialogues with novel verbs in object wh-questions (e.g. "What is the girl gonna gorp?"), intransitive polar questions ("Is the girl gonna gorp?"), or transitive polar questions ("Is the girl gonna gorp the toy?"). At test, infants view a 2-participant event (e.g. a girl knocks over a tower) and we measure how long they look. Infants who heard wh-question dialogues looked at the test events similarly to infants who heard canonical transitive dialogues; infants who heard intransitive dialogues exhibited different looking behavior from the other two conditions. Thus, 20-month-olds treat object wh-questions with a novel verb as transitive when relating them to 2-participant scenes, suggesting that they might use wh-dependency representations to feed verb learning.

Learning subcategorization properties of attitude verbs in wh in-situ languages

Part of learning a verb involves learning its subcategorization requirements, e.g. whether it selects only interrogative clausal complements, declarative complements, or both. In the context of attitude verbs, learning these requirements is a prerequisite for syntactic bootstrapping, in which learners use syntactic cues to infer the semantics of these verbs, which describe mental states that are hard to learn from physical contexts alone. However, in certain wh in-situ languages, like Mandarin, it may be difficult to distinguish an interrogative complement from a declarative complement containing a wh-phrase (that scopes over the whole sentence), since both complements are string-identical. Failure to correctly disambiguate these complements might cause complications for both syntactic bootstrapping and parsing sentences with embedded wh-phrases. With a Mandarin corpus study, we show that this learning problem is only apparent: there are other syntactic and speech act cues that can help with disambiguation.

A reinforcement learning approach to speech category acquisition

Adults struggle to learn non-native speech categories in many experimental settings (Goto, 1971), but learn efficiently in a video game paradigm where non-native speech sounds have functional significance (Lim and Holt, 2011). Behavioral and neural evidence from this and other paradigms points toward the involvement of reinforcement learning mechanisms in speech category learning (Harmon et al., 2019; Lim et al., 2019). We formalize this hypothesis computationally and confirm that our reinforcement learning model simulates adult data from Lim and Holt (2011). Moreover, we show that the same model captures infant data from conditioned headturn experiments (Kuhl, 1983), suggesting that reinforcement learning could play a key role in speech category learning in infants as well as adults.
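To make the general idea concrete (as a generic sketch, not the authors' actual model), reinforcement-learning-driven category acquisition can be caricatured as a bandit-style task: an agent hears stimuli drawn from a toy one-dimensional acoustic continuum and, from reward feedback alone, learns which of two hypothetical category responses goes with each region of that continuum. All names and parameters below are illustrative assumptions.

```python
import random

# Toy sketch (NOT the authors' model): an agent learns to map stimuli on a
# single assumed acoustic dimension to two hypothetical categories purely
# from reward, as in a video-game-style paradigm.

random.seed(0)

ACTIONS = ["r_like", "l_like"]  # hypothetical category labels

def true_category(stimulus):
    # Assumed toy ground truth: the dimension splits the categories at 0.5.
    return "r_like" if stimulus < 0.5 else "l_like"

def train(n_trials=2000, alpha=0.1, epsilon=0.1, n_bins=10):
    # q[b][a] = learned value of choosing action a for stimuli in bin b
    q = [{a: 0.0 for a in ACTIONS} for _ in range(n_bins)]
    for _ in range(n_trials):
        stim = random.random()
        b = min(int(stim * n_bins), n_bins - 1)
        # epsilon-greedy choice between the two category responses
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[b][a])
        reward = 1.0 if action == true_category(stim) else 0.0
        # standard incremental value update for a one-step (bandit) task
        q[b][action] += alpha * (reward - q[b][action])
    return q

def accuracy(q, n_test=1000, n_bins=10):
    # Fraction of fresh stimuli the greedy policy categorizes correctly
    correct = 0
    for _ in range(n_test):
        stim = random.random()
        b = min(int(stim * n_bins), n_bins - 1)
        action = max(ACTIONS, key=lambda a: q[b][a])
        correct += action == true_category(stim)
    return correct / n_test

q = train()
print(f"post-training accuracy: {accuracy(q):.2f}")
```

The point of the caricature is only that no explicit category labels are ever given: the boundary emerges because rewarded responses come to dominate each region of the stimulus space.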

A computational analysis of language delay and intervention

Children with Developmental Language Disorder (DLD) produce bare forms in past-tense contexts for longer than their Typically Developing peers (e.g., walk instead of walked), and intervention is more effective when more verb types are used in treatment. Computational models have mostly relied on DLD-specific design to replicate this extended period and have yet to evaluate the properties of effective intervention. We propose that a difficulty in registering infrequent associations limits learning in DLD. A minimal modification to the parameterization of a well-tested model of morphology learning replicates and explains the extended bare-form production, and also makes predictions about which interventions are most effective. We provide the first account of the utility of computational models in developing DLD interventions. Our results suggest that learning in DLD is delayed by a difficulty in tracking lower-frequency form-meaning associations, and they point to distributional properties that support effective intervention by highlighting this co-occurrence pattern.
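The frequency-tracking idea can be illustrated with a deliberately toy sketch (our illustration, not the model in the paper): support for the past-tense pattern grows mainly when it is encountered with a new verb type, scaled by an association-tracking rate that we assume, for illustration only, is lower in DLD. Every function name and parameter here is a hypothetical stand-in.

```python
import random

random.seed(1)

def learn_rule(n_exposures, n_verb_types, tracking_rate):
    # Toy sketch (NOT the paper's model): each exposure to the "-ed" pattern
    # adds support for it. A *new* verb type contributes more than a repeat
    # (type frequency drives generalization), and tracking_rate scales how
    # much of each co-occurrence is registered (assumed lower in DLD).
    seen, support = set(), 0.0
    for _ in range(n_exposures):
        verb = random.randrange(n_verb_types)
        gain = 1.0 if verb not in seen else 0.2  # new types help most
        seen.add(verb)
        support += tracking_rate * gain
    return support

td_many_types = learn_rule(200, n_verb_types=20, tracking_rate=1.0)
td_few_types = learn_rule(200, n_verb_types=5, tracking_rate=1.0)
dld_many_types = learn_rule(200, n_verb_types=20, tracking_rate=0.5)
print(td_many_types, td_few_types, dld_many_types)
```

Even in this crude form, the two qualitative patterns from the abstract fall out: a lower tracking rate delays the point at which the inflected form wins out, and treatment with more verb types builds support faster at the same number of exposures.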