
Jeff with Team Most in Psychological Science

January 30, 2024 Linguistics

Photo: Professor Jeff Lidz, sitting on the floor with a girl of about seven, playing games that teach her about the psychology of counting.

Visual apprehension of magnitude is scaled relative to the perceived top and bottom.

From the new issue of Psychological Science, we learn that "Observers Efficiently Extract the Minimal and Maximal Element in Perceptual Magnitude Sets," thanks to our friends in Team Most: Darko Odic (UBC) and Justin Halberda (Hopkins), with Tyler Knowlton *21, Alexis Wellwood *14, Paul Pietroski, and Jeff Lidz. The abstract is below.

The mind represents abstract magnitude information, including time, space, and number, but in what format is this information stored? We show support for the bipartite format of perceptual magnitudes, in which the measured value on a dimension is scaled to the dynamic range of the input, leading to a privileged status for values at the lowest and highest end of the range. In six experiments with college undergraduates, we show that observers are faster and more accurate to find the endpoints (i.e., the minimum and maximum) than any of the inner values, even as the number of items increases beyond visual short-term memory limits. Our results show that length, size, and number are represented in a dynamic format that allows for comparison-free sorting, with endpoints represented with an immediately accessible status, consistent with the bipartite model of perceptual magnitudes. We discuss the implications for theories of visual search and ensemble perception.