Brain Mechanisms of Visual Perception
Summary: Anthony Movshon is interested in how the brain encodes information about visual scenes and objects and decodes that information for the control of behavior.
The task of a sensory system is to provide faithful and useful representations of biologically relevant events in the external environment. In most cases, the raw signals transduced by receptors are woefully inadequate to this task because they are so removed and reduced from the stimuli that give rise to them. Yet our sensory systems calculate efficient and informative representations of the world. These representations are at the same time richer and simpler than the physical measurements that support them. They are richer because they contain representations of objects, states, and events that are abstracted from the primitive sensory signals; they are simpler because they distill the vast quantities of raw measurement information offered to the central nervous system by the sensory receptors. To understand sensory processing, we must appreciate both the computations used to encode information and the decoding of that information that gives rise to our sensory experience and guides our actions.
Encoding Visual Information
The visual system uses several strategies to encode the information present in the photoreceptor array. We study the outcomes of these strategic calculations by examining the responses of neurons in the cerebral cortex, where more than 30 distinct areas are known to have visual and visuomotor functions. We analyze the neuronal responses evoked by visual stimuli chosen to permit formal characterization of underlying neuronal computations. The most important properties concern the selectivity with which neurons respond to controlled stimulus variations along such dimensions as spatial structure, contrast, color, or movement. Because different cortical areas are linked in a hierarchical processing network, we also examine the neuroanatomical distribution and functional properties of neurons providing afferent signals to particular areas, to better understand the transformations of the visual signal computed by the circuits in each area.
An important encoding strategy used by many sensory systems is to respond to stimuli in a way that represents the relationship of a stimulus to other stimuli in the environment. By using this differential coding, significant events can be represented by relatively small numbers of signals. We have studied differential coding by exploring the effects of context on neuronal responses in visual cortex. The response of a cortical neuron is primarily determined by the visual stimuli that fall within its receptive field, the area of visual space within which stimuli can directly change the neuron’s firing rate. But responses are also influenced, more indirectly, by the recent history of stimulation and by stimuli falling outside the receptive field. We have been able to account for all these effects as a local normalization of cortical responses by gain control signals carried by the network of inhibitory interneurons in cortex. These signals modulate the sensitivity of neurons over local regions of space and time; they are, moreover, specifically organized to make visual encoding less redundant by removing correlations in spatiotemporal structure that are characteristic features of the images of natural scenes. We are exploring the generality of these findings across cortical areas and the biophysical mechanisms of synaptic integration that might implement these calculations in the brain.
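As an illustration, the gain-control account above is often formalized as divisive normalization, in which each neuron’s driving input is divided by the pooled activity of a local population. The sketch below is a standard textbook form of such a model, not the laboratory’s specific implementation; the exponent and semisaturation constant are illustrative values.

```python
import numpy as np

def normalized_response(drives, sigma=0.1, n=2.0):
    """Divisive normalization: each neuron's driving input is divided by
    the pooled activity of the local population plus a semisaturation
    constant sigma. Parameter values here are illustrative only."""
    drives = np.asarray(drives, dtype=float)
    pool = np.sum(drives ** n)          # gain-control signal pooled over neighbors
    return drives ** n / (sigma ** n + pool)

# Adding stimulation outside a neuron's receptive field (the "surround")
# raises the normalization pool and suppresses the response to an
# identical center stimulus: context changes gain, not selectivity.
center_alone = normalized_response([1.0, 0.0, 0.0])[0]
with_surround = normalized_response([1.0, 1.0, 1.0])[0]
assert with_surround < center_alone
```

Because the denominator pools broadly over space and time, this kind of operation also reduces the redundancy of the code: the correlated structure typical of natural images is largely divided away.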
Another approach is to use methods based on information theory to analyze the statistical efficiency with which neurons encode visual signals. These techniques, borrowed from visual psychophysics and optical engineering, hinge on obtaining estimates of the biological “noise” that degrades all sensory representations and in particular on deducing the relative importance of extrinsic and intrinsic sources of noise in limiting performance. It is theoretically possible to partition intrinsic noise into a peripheral component (transduction efficiency) and a central component (calculation efficiency). The efficiency with which different components of a visual signal are processed can be determined independently, providing a valuable tool for identifying brain representations carrying specific information about particular aspects of a stimulus. We have shown that the ability of visual neurons to detect and discriminate weak contrast signals is determined by their transduction efficiency, while their ability to extract information about form and motion from complex stochastic displays is limited by their calculation efficiency. We are studying the ways in which the structure of cortical computation determines this efficiency.
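The partition of intrinsic noise into peripheral and central components is commonly estimated with an equivalent-input-noise experiment: squared threshold grows linearly with externally added noise, the intercept of that line reflects the observer’s intrinsic (transduction-limiting) noise, and the slope reflects calculation efficiency. The sketch below uses made-up numbers to show the fitting logic only; it is not data from these studies.

```python
import numpy as np

# Equivalent-noise method (sketch): squared contrast threshold rises
# linearly with external noise power. All numbers are illustrative.
external_noise = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
N_eq, inv_efficiency = 0.5, 2.0              # assumed intrinsic noise, 1/efficiency
threshold_sq = inv_efficiency * (N_eq + external_noise)

# Recover the two parameters from a straight-line fit of threshold^2 vs. noise.
slope, intercept = np.polyfit(external_noise, threshold_sq, 1)
calc_efficiency = 1.0 / slope                # central (calculation) component
equiv_noise = intercept / slope              # peripheral (transduction) component
```

Measuring these two quantities separately for different stimulus components is what allows weak-contrast performance to be attributed to transduction efficiency and form/motion extraction to calculation efficiency.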
Decoding Visual Information
The visual system labors to encode information so that the resulting representations can be used effectively to support behavior. To understand this process requires knowledge of how higher centers decode visual signals. We approach this decoding problem in two ways, first by considering the relationship between visual signals and perceptual judgments and second by relating these signals to the performance of visually guided movements.
To relate visual neuronal responses as directly as possible to perception, we study the activity of cortical neurons in animals making psychophysical judgments, such as detecting weak visual targets. In earlier collaborative work with William Newsome (HHMI, Stanford University), we showed that neurons in a particular visual cortical area, called MT or V5, give responses that are reliably associated with psychophysical judgments of the direction of motion. The existence and strength of this association provides information about which cortical signals support particular visual judgments and about the nature of the calculations that intervene between the perceptual representation and the decision. We are now using this association signal, with other kinds of targets, to probe the cortical origins of the visual signals that support visual judgments.
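The trial-by-trial association between a neuron’s firing and the animal’s decision is conventionally quantified as a “choice probability”: the area under the ROC curve comparing firing-rate distributions sorted by choice. A minimal sketch with simulated trial counts (not recorded data):

```python
import numpy as np

def choice_probability(rates_pref, rates_null):
    """ROC area comparing firing rates on trials sorted by the animal's
    choice: 0.5 means no association between the neuron's response and
    the decision; >0.5 means higher rates accompany preferred-direction
    choices. Equivalent to a normalized Mann-Whitney U statistic."""
    rates_pref = np.asarray(rates_pref, dtype=float)
    rates_null = np.asarray(rates_null, dtype=float)
    greater = (rates_pref[:, None] > rates_null[None, :]).mean()
    ties = (rates_pref[:, None] == rates_null[None, :]).mean()
    return greater + 0.5 * ties  # ties count as half

# Simulated spike counts on preferred-choice vs. null-choice trials
cp = choice_probability([12, 15, 11, 14], [10, 13, 9, 12])
```

Values reliably above 0.5 for an area, together with their timing, constrain which signals feed the decision and how they are pooled.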
A major role of the visual system is to provide signals that help to control movements. We have been particularly interested in visual pursuit, the slow, smooth eye movements with which primates track moving visual targets. In collaboration with Stephen Lisberger (HHMI, University of California, San Francisco), we have studied the relationship between visual and visuomotor processes. We explored the responses of cortical neurons to the motion profiles used to characterize pursuit, and we are analyzing how the signals carried by these neurons can support the initiation and maintenance of pursuit. To this end, we are developing computational models designed to explain the signal transformations that take place at the series of 10 or so synaptic stages that intervene between the initial registration of the visual image and the formulation of the final movement command.
Because the retina moves with the eye, eye movements themselves change the signals encoded by the visual system. This feedback influences both the perception of motion and the control of pursuit eye movements: once the eye is accurately tracking a visual target, its image no longer moves on the retina, yet it still appears to move, and pursuit tracking is well maintained. We are studying the signals that maintain perception and movement by measuring the visual responses of cortical neurons to targets presented during pursuit eye movements. In at least one cortical area, called MST, targets that are identical on the retina elicit different responses when the eye is still and when it moves. The differences in response provide information about the way the visual system calculates and compensates for the effects of the observer’s own movements.
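The logic of tracking despite vanishing retinal motion can be seen in a deliberately simplified first-order feedback loop (this is a generic sketch, not the multi-stage model under development): eye velocity is driven by retinal slip, and as tracking becomes accurate the slip signal falls toward zero even while the eye keeps moving at the target velocity.

```python
# Minimal pursuit-loop sketch; gain and time step are arbitrary choices.
target_v = 10.0          # target velocity, deg/s
eye_v = 0.0              # eye velocity command
gain, dt = 0.5, 0.01
for _ in range(2000):
    slip = target_v - eye_v      # retinal image motion (the visual input)
    eye_v += gain * slip * dt    # velocity command integrates the slip signal
# After convergence, eye velocity matches the target while retinal slip
# is near zero; sustaining pursuit therefore requires an extraretinal
# signal, consistent with the MST observations described above.
```

A model adequate to the physiology must of course distribute this computation across the roughly ten synaptic stages between retina and motoneurons, which is the aim of the modeling work described above.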
These studies of visual perception and visually guided movement offer a view of visual function in terms of its outputs, from the top down, to complement the view in terms of inputs that comes from bottom-up studies of neural encoding.
Last updated November 13, 2000