With a collection of tools for rapidly processing changing visual cues, the eyes and brain work together to create meaningful images from raw signals.
In much less time than it takes to point and shoot with a new multi-megapixel digital camera, your eyes and brain have already scanned the landscape, identified crucial objects, put them in proper focus, and selected the optimal contrast for the scene in front of you. This all happens without your thinking much about what you're doing, let alone how you're doing it.
When a sunny day dissolves into a moonless night, the light intensity changes a billion-fold, yet your eyes and brain are able to perceive the critical information in both situations. How the eyes and brain weave a swirl of raw signals into a perception of coherent, identifiable objects is the kind of question that keeps Howard Hughes Medical Institute (HHMI) investigator Fred Rieke and HHMI senior research specialist Michael Rudd awake at night. Both scientists have been striving to understand the amazing feats of biological computation and sensory adaptation that allow the eyes and brain to collaborate flawlessly. In the December 10, 2009, issue of the journal Neuron, Rieke and Rudd describe the mechanisms that evolution has put in place to process rapidly changing visual cues and enable the brain to respond appropriately.
Rieke wants to know how vision works—specifically, the retina's role in how we see in starlight and continue to see on a bright sunny day. To learn about visual sensitivity, he tracks the signals evoked by visual stimuli across the retina, from sensory structures known as rods and cones through different retinal cell layers. He hopes to learn how that circuitry helps the eye adapt to changes in light intensity.
Rieke has spent roughly 10 years developing techniques to record light-evoked electrical signals from every retinal cell type. Unlike many neuroscientists, who avoid studying hard-to-measure sensory noise, he designs experiments that feature both noise and the sensory signal of interest. In doing so, Rieke gains a broader understanding of the relationship between neural circuitry and behavior in the visual system.
“Even the most sophisticated digital cameras today cannot cope with the range of inputs your visual system can handle,” Rieke notes. Photography is an exercise in compromise: a photographer must choose which aspects of a scene each photograph should capture. A long exposure, for example, will capture more features in dim or shadowy regions of a scene, but at the cost of washing out its brighter regions. The eye doesn’t have to make such compromises—at least, not nearly as much as a camera.
Unlike a camera’s shutter setting, which has a global effect on a swatch of film or on all of a digital sensor's millions of pixels, the retina is built with a bevy of biophysical and cellular mechanisms that enable its sensory elements to adapt “quickly, locally, and reliably” under changing conditions, Rieke and Rudd point out. If a camera’s sensor were more retina-like, small groups of pixels would independently and automatically adjust their own exposure and gain (amplification) to optimize the sensor’s ability to image the most important features of a scene. That, of course, would depend on the sensor knowing—like the eye—which features were the most relevant or important. In animals, the experience-wizened brain attached to retinas strongly weighs in on those judgments, Rieke notes.
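The contrast between a single global exposure and patchwise adaptation can be sketched in a few lines of code. Everything below is a toy illustration, not a model from the paper: the patch size, the synthetic scene, and the divide-by-the-local-mean gain rule are all invented for the example.

```python
import numpy as np

def global_gain(scene):
    """Camera-like: one gain applied to every pixel, set by the scene's mean."""
    return scene / scene.mean()

def local_gain(scene, patch=4):
    """Retina-like (toy version): each small patch sets its own gain
    from its own mean intensity, independently of the rest of the scene."""
    out = np.empty_like(scene, dtype=float)
    h, w = scene.shape
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            block = scene[i:i+patch, j:j+patch]
            out[i:i+patch, j:j+patch] = block / block.mean()
    return out

# A synthetic scene: a dim half and a half 1000 times brighter.
scene = np.ones((8, 8))
scene[:, 4:] = 1000.0

g = global_gain(scene)  # dim half is crushed toward zero
l = local_gain(scene)   # each patch lands in a usable range
```

With one global gain, the dim half of the scene is flattened toward zero; with per-patch gains, both halves are rescaled into a usable range, which is the kind of local, independent adjustment the authors describe.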
“We don’t know the rules by which the visual system optimizes itself,” Rieke says. But over the past 20 to 30 years, researchers have identified an ever-expanding repertoire of mechanisms that the retina uses to adapt to rapidly changing visual information. Color-sensing cone cells, for one, have a calcium-mediated feedback mechanism that alters how trigger-happy they are in response to incoming light, Rieke says. Similarly, it can take the eyes several minutes to adjust to a dark environment because the amounts, locations, and chemical forms of retinal molecules must change before the low-light-sensitive rod cells can take over.
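The calcium-mediated feedback Rieke describes can be pictured, very loosely, as a negative-feedback loop in which a slow running estimate of the background light turns the gain down. The time constant and half-saturation value below are arbitrary choices for illustration, not measured quantities from cone physiology.

```python
def adapt(stimulus, tau=50.0, half=1.0):
    """Toy negative-feedback gain control: a slow running average of the
    input sets the gain, so sustained bright input is progressively
    attenuated while sudden changes still get through."""
    avg = stimulus[0]
    responses = []
    for s in stimulus:
        avg += (s - avg) / tau           # slow estimate of background light
        gain = 1.0 / (1.0 + avg / half)  # brighter background -> lower gain
        responses.append(s * gain)
    return responses

# A step from dim light (1.0) to light 100 times brighter:
stim = [1.0] * 200 + [100.0] * 400
resp = adapt(stim)
```

After the step, the modeled response jumps and then relaxes as the gain adapts downward, the signature behavior of this kind of feedback.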
There also are several intercellular adaptation mechanisms. For example, in low light, a thousand-fold increase in the number of photons bathing cone cells doesn’t change their responses, or even the responses of their cellular neighbors, the bipolar cells. But the ganglion cells that follow do dial down their responses. What’s more, this and other kinds of adaptation occur in localized, pixel-like patches of the retinal “screen,” giving the retina a degree of optical-massaging finesse that digital cameras cannot yet offer.
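That division of labor, with early stages passing the signal along while the ganglion stage dials its gain down, can be caricatured as a staged pipeline. The stage functions and the divisive gain rule here are invented for illustration; they are not the actual retinal computations.

```python
def cone(photons):
    # In this toy model the early stages don't adapt at low light levels.
    return photons

def bipolar(signal):
    # Pass-through relay in the sketch.
    return signal

def ganglion(signal, background):
    # First stage in the toy cascade that adapts: gain falls as the local
    # background rises (a simple divisive rule chosen for illustration).
    return signal / (1.0 + background)

dim, bright = 1.0, 1000.0
early_dim = bipolar(cone(dim))        # early stages scale with the input...
early_bright = bipolar(cone(bright))  # ...a thousand-fold difference survives
out_dim = ganglion(early_dim, background=dim)
out_bright = ganglion(early_bright, background=bright)
```

In the sketch, a thousand-fold change at the input is still a thousand-fold change after the cone and bipolar stages, but only about a two-fold change after the adapting ganglion stage.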
“Multiple adaptational mechanisms in different locations within the retina act in concert,” the scientists conclude in their Neuron paper. In time, Rieke says, it might be possible to discern enough about the relationships between visual input and the outputs and adaptations of the retina’s vast network of cells to uncover the biophysical and computational bases of our familiar, everyday visual experience. It’s the sort of fundamental knowledge that could lead to advances in technologies like machine vision, robotics, imaging, and photography.