
Bob Frank / Abstraction in linguistic knowledge in humans and machines



Linguistics | Maryland Language Science Center
Friday, November 15, 2024, 3:00–4:30 pm
H. J. Patterson Hall, Room 2130

On Friday, November 15, our colloquium speaker is Bob Frank, Professor of Linguistics at Yale, talking about "Abstraction in linguistic knowledge in humans and machines." LLMs need "unfathomable quantities of data to accomplish what children do with much less," he will argue, due to the absence of "an inductive bias tuned to the nature of human language."


Abstraction in linguistic knowledge in humans and machines

The ability to produce and comprehend language in creative and productive ways has long been taken to be uniquely human. Yet over the past few years, in the eyes of the public, the popular press, and some members of the scientific community, this state of affairs has changed. Large language models (LLMs) apparently show such a remarkable ability to use language that discussions of their strengths and weaknesses focus not on their linguistic capacities, which are presumed to be impeccable, but rather on their ability to speak truthfully, solve graduate-level math problems, and avoid expressing racist opinions. In this talk, I will argue that this presumption of human-like linguistic competence in LLMs is premature. I will report on case studies that point to a fundamental difference between human and LLM linguistic competence: humans exhibit a capacity for grammatical abstraction that trained LLMs simply lack. The lack of such abstraction in LLMs stems, I argue, from the absence of an inductive bias tuned to the nature of human language, which leads LLMs to need unfathomable quantities of data to accomplish what children do with much less input. Achieving linguistic parity will require instilling LLMs with an appropriate inductive bias, and I will speculate on what form this might take.
