How the brain spots a friendly face in the crowd

[Photo: Crowd at Lollapalooza]

When you're looking for a friend in a crowd, think about the kind of clues he might give you over the phone to help spot him. "I'm wearing a red shirt," he might say. "I'm in the back by the railing," or "I'm standing up waving my hand." Focusing on these details helps narrow the search. You can't process everything you see all at once, and it would take forever to scan a large crowd one face at a time. Neuroscientists call this attention: attending to specific features or positions in your field of vision primes the brain to notice them more quickly.

In the brain, some areas of the visual cortex specialize in detecting non-spatial features, like color, shape or orientation; others specialize in extracting spatial cues such as direction of movement or the position of an item. The brain has to assemble these various inputs to make decisions about objects in our field of vision, and according to research by University of Chicago neuroscientists, space-based and feature-based attention influence each other to help us find what we're looking for, whether it's a stoplight turning red or a friend waving his hand in the crowd.

In 2014, David Freedman, PhD, professor of neurobiology, and Guilhem Ibos, PhD, a postdoctoral scholar in Freedman's lab, identified a brain region called the lateral intraparietal area (LIP) that assembles and processes visual information based on feature-based attention, things like color and motion. In that study, they shed light on a unique characteristic of neurons in LIP and how they respond to visual stimuli.

Individual neurons shift their selectivity to color and direction depending on the task at hand. In experiments with monkeys, if the subject was looking for red dots moving upward, for example, a neuron would respond strongly to directions of motion close to upward and to colors close to red. If the task was switched to another color and direction seconds later, that same neuron would become more responsive to the new combination.
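To make that idea concrete, here is a minimal toy model, not the authors' actual analysis: a single unit whose firing depends on how closely a stimulus matches the current search target in both motion direction and hue. The function name, tuning widths, and Gaussian tuning shape are all illustrative assumptions. Changing the target shifts which stimuli drive the unit, just as the recorded LIP neurons shifted their selectivity with the task.

```python
import numpy as np

def neuron_response(stim_dir, stim_hue, target_dir, target_hue,
                    dir_width=30.0, hue_width=30.0):
    """Toy LIP-like unit (hypothetical model): fires most for stimuli
    matching the current search target in motion direction and hue,
    both given in degrees on a circle."""
    # Circular distance between stimulus and target in each dimension
    d_dir = abs((stim_dir - target_dir + 180) % 360 - 180)
    d_hue = abs((stim_hue - target_hue + 180) % 360 - 180)
    # Gaussian tuning centered on the target in each feature dimension
    return np.exp(-(d_dir / dir_width) ** 2) * np.exp(-(d_hue / hue_width) ** 2)

# Task 1: search for red dots (hue ~ 0 deg) moving upward (90 deg)
print(neuron_response(stim_dir=90, stim_hue=0, target_dir=90, target_hue=0))    # ~1.0
print(neuron_response(stim_dir=270, stim_hue=120, target_dir=90, target_hue=0)) # ~0.0

# Task 2: new target seconds later -> the same unit now prefers the new combination
print(neuron_response(stim_dir=270, stim_hue=120, target_dir=270, target_hue=120))  # ~1.0
```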

In the new study, published August 17, 2016 in Neuron, Ibos and Freedman added space-based attention to the mix. Monkeys were again trained to look for dots of a certain color moving in a certain direction, but this time the dots also had to appear in a specific area of the display. So instead of just looking for red dots moving upward, they had to look for red dots moving upward in the top right corner of the screen, and ignore stimuli located elsewhere.

Individual neurons are tuned to different areas of visual space (called receptive fields). If a given image is divided up into squares, each neuron would respond to a different square. In the experiments, Ibos and Freedman recorded the activity of individual neurons as the monkeys performed the tasks, and saw that attention to the spatial position of a stimulus strongly influenced the feature selectivity of LIP neurons. If the subject was looking for red dots moving upward in the receptive field of the recorded neuron, the effects of feature-based attention on LIP neurons were larger than if the subject attended to the same stimuli located outside that neuron's receptive field.
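One simple way to capture that interaction, purely as an illustrative sketch and not the paper's model, is a multiplicative spatial gain: the feature-match signal is amplified when the stimulus falls inside the neuron's receptive field. The receptive-field shape, gain value, and function below are assumed for illustration.

```python
import numpy as np

def lip_response(feature_match, stim_pos, rf_center, rf_width=1.0, attn_gain=2.0):
    """Hypothetical sketch: the feature-based attention signal
    (feature_match, 0..1) is scaled up multiplicatively when the
    stimulus lies inside the neuron's Gaussian receptive field."""
    # Gaussian receptive field over 2D screen position
    dist2 = np.sum((np.asarray(stim_pos) - np.asarray(rf_center)) ** 2)
    rf = np.exp(-dist2 / (2 * rf_width ** 2))
    # Feature signal amplified more strongly inside the receptive field
    return feature_match * (1.0 + attn_gain * rf)

# The same target-matching stimulus inside vs. outside the receptive field
inside  = lip_response(feature_match=1.0, stim_pos=(1, 1), rf_center=(1, 1))
outside = lip_response(feature_match=1.0, stim_pos=(5, 5), rf_center=(1, 1))
print(inside, outside)  # feature-attention effect is larger inside the RF
```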

"It's one of the first times that both kinds of attention have been looked at together," said Freedman. "This particular part of the brain [LIP] seems to be very flexible, changing what it responds to depending on what it is you're looking for at that moment."

The results appear to show that the LIP integrates attention-based inputs from earlier in the brain's visual processing system. For example, an area in the visual cortex called V4 responds strongly to colors and other features, and an area called MT responds to direction of movement. The LIP puts these together to help the brain find what it's looking for based on color, direction of movement, and position in space.
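As a rough sketch of that integration idea, again an assumption-laden toy rather than anything from the study, an LIP-like readout might combine a color-match signal (standing in for V4), a motion-match signal (standing in for MT), and a spatial gain, then compare the combined evidence to a threshold to flag a target. All names and values below are hypothetical.

```python
def lip_readout(v4_color_match, mt_motion_match, spatial_gain, threshold=0.8):
    """Hypothetical readout: combine color (V4 stand-in), motion
    (MT stand-in), and spatial attention signals, then decide
    whether the stimulus looks like the search target."""
    evidence = v4_color_match * mt_motion_match * spatial_gain
    return evidence, evidence > threshold

# Red dots moving upward in the attended corner of the screen
print(lip_readout(v4_color_match=0.95, mt_motion_match=0.9, spatial_gain=1.0))  # (0.855, True)

# Right features but in an unattended location -> weaker evidence, no target flag
print(lip_readout(v4_color_match=0.95, mt_motion_match=0.9, spatial_gain=0.3))  # (0.2565, False)
```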

"It's more like the LIP reads out the activity of the visual cortex and uses that to make a decision," Ibos said. "It's not really the place where the attention is created, it's more the place where the attention-based neuronal signal is processed."

Ibos said the next step is understanding how the LIP assembles these pieces of information, and where it fits into the larger decision-making process.

"Why does the LIP integrate these signals? How does it cooperate with other cortical areas? We believe that it plays an important role in detecting whether the stimulus was a target or was not a target," he said. "The next part of the study is understanding how the LIP integrates, combines and computes this kind of information in order to make decisions."

Matt Wood

Matt Wood is a senior science writer at UChicago Medicine and the Biological Sciences Division.