As you scan that New York street, your power of attention allows you to screen out irrelevant inputs and focus on small but important targets.
The brain, a wondrous supercomputer, calculates the direction, speed, and acceleration of passing people and approaching cars based on inputs from various types of motion-detecting cells. Other cells encode the jumble of colors, shapes, and patterns in this visual field, which higher-brain resources then transform into meaningful perceptions of city street life.
When your friend comes into view, certain key features of her face strike a match with those encoded in your facial-memory bank—a positive identification! When you and she reach out in greeting, a frenzy of mental computations in 3-D space guides both sets of hands and arms along trajectories, with on-the-fly midcourse corrections, to join in an embrace.
Recently, William Newsome has been interested in the processes that transform perceptual information into decisions for action. "Exactly where and how the sensory signals are evaluated to reach a categorical decision about an appropriate behavioral response is still quite mysterious," he says. Millions of neurons represent visual inputs in various parts of the brain, but only one or a limited number of actions can be taken.
No wonder that nearly one-third of our higher brain, the cerebral cortex, is dedicated to making sense of what we see. Strictly speaking, what registers on the eye's retina is essentially light and shadow; the brain constructs all the rest. The welter of reflected light from thousands of sources that constantly floods the retina has to be captured, filtered, and processed at diverse places along the visual pathways of your brain to construct a perception in your mind's eye of what you are looking at. And then there's the depth problem: the 3-D world is projected onto your 2-D retinas, and the brain has to transform those flat images back into three dimensions.
Vision has been studied for centuries, though in fits and starts. The initial tracing of nerves from the eye to certain brain regions came in the 1600s, for example, and theories of color vision were also first proposed in that century. But a quantum leap in vision research came in the 1960s, when David Hubel and Torsten Wiesel carried out experiments that would eventually win them a Nobel Prize. These pioneers showed that they could record electrical activity from individual neurons "and that they could describe what turns the neurons on and learn about the nature of sensory representation at early stages of the visual hierarchy," says David C. Van Essen, a veteran vision researcher at Washington University School of Medicine in St. Louis and a member of the HHMI Scientific Review Board, who was a postdoctoral fellow under Hubel and Wiesel.
Since then, Van Essen says, "we have certainly made progress." There have been numerous discoveries about the wiring of the brain areas that process visual signals, particularly information about motion, and we understand better how the eye-brain system uses a viewer's memories and emotions to help interpret what is seen. But it may take many more years to fully understand the neural processes of vision. Scientists are ardently working toward that goal, and among them four HHMI investigators in particular are making major contributions.