Electrical activity in the central nervous system (CNS) provides a rich and remarkably precise source of information about the external world, enabling animals to distinguish molecules that differ by a single atom, wavelengths of light that vary by less than 10 billionths of a meter, and sounds originating from nearly identical locations. However, different combinations of chemical, visual, or auditory stimuli can elicit activity in the CNS that generates indistinguishable percepts, a property that artists and chefs (among others) exploit frequently. These observations raise (at least) two important questions: What enables, and what limits, the fidelity with which neural activity in the CNS encodes information about characteristics of sensory stimuli?
To answer these questions we identify mechanisms that govern the specificity with which neurons and neural circuits respond to features of sensory stimuli. We focus on systems in which neural activity can be modulated precisely and reproducibly by sensory stimuli and the source of synaptic input a given neuron receives is known or highly constrained. This approach enables us to control and measure synaptic input elicited by physiological stimuli, quantify the pattern of action potentials (APs) triggered by this synaptic input, and identify the circuit and cellular properties that determine how patterns of APs in specific neurons are related to specific properties of sensory stimuli.
Cellular Substrates of Receptive Fields
The electrical activity of neurons in many sensory systems is governed by stimuli emanating from particular regions of space. The organization, shape, and size of these regions, called receptive fields, provide an important constraint on the spatial resolution with which the origin of sensory information can be determined—i.e., given the same degree of overlap between adjacent receptive fields, spatial resolution increases as receptive field size decreases.
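The relationship stated above can be made concrete with a small numerical sketch. This is an illustrative toy model, not from the source: two neurons with Gaussian receptive fields whose centers sit one field-width apart (so the degree of overlap is fixed), probed by two stimuli a fixed distance apart. All names and numbers are hypothetical.

```python
import numpy as np

# Toy illustration (assumed, not from the source): with a fixed degree of
# overlap between two Gaussian receptive fields, shrinking the fields makes
# the population response distinguish two nearby stimulus positions better.

def gaussian_rf(center, width):
    """Return a 1-D Gaussian receptive field sensitivity profile."""
    return lambda x: np.exp(-((x - center) ** 2) / (2 * width ** 2))

def response_difference(width, separation):
    # Two receptive fields whose centers are one width apart (fixed overlap).
    rf_a = gaussian_rf(0.0, width)
    rf_b = gaussian_rf(width, width)
    x1, x2 = 0.0, separation  # two stimulus positions a fixed distance apart
    # How differently the pair of neurons responds to the two positions.
    return abs((rf_a(x1) - rf_b(x1)) - (rf_a(x2) - rf_b(x2)))

# Smaller receptive fields (same overlap) yield a larger response difference,
# i.e., finer spatial resolution for the same stimulus separation.
print(response_difference(width=1.0, separation=0.1))
print(response_difference(width=0.2, separation=0.1))
```

In this sketch, halving (or further shrinking) the receptive field width while preserving the overlap geometry increases the difference between the responses evoked by the two stimulus positions, which is the sense in which spatial resolution increases as receptive field size decreases.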
Receptive field size varies considerably (and often predictably) within a class of neuron as well as between distinct classes of neurons. These differences likely reflect, in part, anatomical features of particular types of neurons and/or the circuits that provide input to them. The relative degree to which intrinsic biophysical properties contribute to (1) a given neuron's receptive field and (2) differences between neurons' receptive fields, however, has not been examined in detail; such properties could significantly influence the region(s) of space from which light stimuli, for example, elicit neural activity in one type of neuron but not another.
Examining light-evoked activity in retinal ganglion cells (RGCs), the neurons through which information about all light stimuli is transmitted to the brain, provides an unusually good opportunity to distinguish the relative degree to which circuit, synaptic, and cellular properties underlie RGC receptive fields. Decades of detailed anatomical studies, and new techniques to label and manipulate specific sets of neurons, enable us to measure and control the source and properties of signals that a given RGC receives in response to physiological stimuli. Additional techniques—e.g., simultaneous patch-clamp recordings from multiple neurons—permit us to control the temporal and spatial properties of synaptic input more precisely than is possible with light stimuli alone. Used together, these biological and technical features will enable us to parse the relative degree to which synaptic and cellular properties contribute to RGC receptive fields.
A similar approach will also help us to identify mechanisms that govern the range and specificity of light stimuli to which neurons downstream of the retina respond. In collaboration with Jeff Magee (HHMI, JFRC), we will determine to what degree differences in the receptive fields of neurons in the superior colliculus reflect properties of the neurons themselves, the collicular networks in which they are embedded, and/or the characteristics and source of synaptic input they receive from RGCs. These studies will help (1) characterize the propagation and transformation of signals through multiple levels of the early visual system and (2) identify the precise circuit and cellular mechanisms that govern the sets of stimuli that do and do not elicit activity in individual neurons.
Mechanisms Underlying Linear and Nonlinear Spatial Summation
Neurons throughout the CNS can be distinguished by a number of functional features, including the linearity with which spatially and/or temporally distributed inputs summate. Different classes of RGCs in the mammalian retina, for example, exhibit profound differences in the degree to which spatially segregated light stimuli within the receptive field summate linearly—i.e., distinct patterns of light that, on average, are the same intensity produce robust activity in some RGCs and little activity in others. The degree to which anatomical, circuit, and cellular/synaptic properties govern these (and other) differences has not been tested directly.
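The distinction above can be sketched numerically. The following toy model is an assumption-laden illustration (not the authors' method): a "linear" cell sums signed contrast over its receptive field, while a "nonlinear" cell pools subunit signals only after half-wave rectification, so a zero-mean pattern that silences the first cell still drives the second.

```python
import numpy as np

# Hypothetical sketch: two ways a cell might pool spatially distributed input.

def linear_response(stimulus):
    # Sum signed contrast across the receptive field; bright and dark
    # regions of a zero-mean pattern cancel.
    return stimulus.sum()

def nonlinear_response(stimulus):
    # Half-wave rectify each subunit's signal before pooling; bright and
    # dark regions no longer cancel.
    return np.maximum(stimulus, 0).sum()

# A zero-mean grating: alternating bright (+1) and dark (-1) bars whose
# average intensity equals that of a uniform gray field (0).
grating = np.array([+1.0, -1.0, +1.0, -1.0])

print(linear_response(grating))     # 0.0: the pattern and a gray field are equivalent
print(nonlinear_response(grating))  # 2.0: robust response to the same pattern
```

This captures, in caricature, why two stimuli with the same mean intensity can evoke little activity in one class of RGC and robust activity in another; locating the rectifying step (in presynaptic circuitry versus the RGC itself) is exactly the kind of question the experiments described below address.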
Our approach will be to identify the neurons that provide input to distinct classes of RGCs, measure the response of these cells to light stimuli, and use a variety of techniques to manipulate the precise spatiotemporal pattern of excitatory and/or inhibitory synaptic input that RGCs receive. These experiments will help distinguish the degree to which distinct forms of spatial summation observed in the output of RGCs reflect differences in the cellular properties of RGCs themselves and/or in the circuits that provide their synaptic input. More generally, these studies will help determine exactly how linear and nonlinear transformations of sensory input—computations that occur throughout the CNS—are implemented by circuit and cellular processes.
As of November 12, 2009