
New work from Ellen and her students

February 02, 2023 Linguistics

Photo: Professor Ellen Lau

On predication, relative clauses, wordhood, and word recognition.

The first few weeks of 2023 have seen four new publications involving Ellen Lau, with student, alumni, and faculty collaborators, on a remarkable range of topics.

In "The binding problem 2.0: Beyond perceptual features," Ellen joins Xinchi Yu to argue that "the binding problem" (how we correctly represent the correspondence between a feature and the object that has it) extends across cognitive science, beyond vision, the domain in which it has mostly been discussed. They lay out the hypothesis that the same resources might be involved in representing reference and predication in language comprehension.

Then in "Moving away from lexicalism in psycho- and neuro-linguistics," Ellen joins Alex Krauska to limn the consequences of lexicalism in psycho- and neuro-linguistics, where a simple notion of "wordhood", not supported by linguistic analysis, has nevertheless shaped models of language production and production disorders.

In "A subject relative clause preference in a split-ergative language: ERP evidence from Georgian," a team led by Ellen and Masha Polinsky, with former RAs Nancy Clarke (now at Amazon Web Services) and Michaela Socolof (now a PhD student at McGill), plus Rusudan Asatiani from Tblisi State University, provide a series of ERP and self-paced reading studies on Georgian to show that subject-relative clauses may be easier to process even in an ergative language, where the subject of a transitive clause is not a 'morphologically unmarked' position.

And finally, if that's not enough for you, add one more, led by alum Phoebe Gaston, with Colin Phillips, former postdoc Christian Brodbeck, and once again Ellen Lau: "Auditory Word Comprehension is Less Incremental in Isolated Words." The paper marshals MEG data to show that listeners rapidly and automatically build successively higher-level representations of the structures realized by wordforms in continuous speech, but that this incremental processing "is limited when word[form]s are heard in isolation."