New findings hint that dendritic spines could make the human brain a far more efficient learning machine than that of other animals.
The tiniest wires that link neurons to one another probably serve a critical role in the brain's computational function. New data about how these wires, or dendritic spines, modulate their electrical properties and receive incoming signals are giving scientists a more complete view of their knack for acting as efficient mathematical calculators. The findings also hint that these dendritic spines could make the human brain a far more efficient learning machine than the brains of other animals.
Howard Hughes Medical Institute investigator Rafael Yuste and professor of chemistry Kenneth B. Eisenthal, both at Columbia University, collaborated on the studies. Yuste, Eisenthal, and colleagues published their findings about dendritic spines in a trio of papers in the Proceedings of the National Academy of Sciences (PNAS).
During development, as the brain is laying down its intricate neural circuits, individual neurons must also be able to adjust their sensitivity to incoming signals so that they can process information. The new research sheds significant light on a century-old mystery of how dendritic spines contribute to this process, the scientists said.
Neurons are the wiring along which nerve signals propagate throughout the brain and spinal cord. Chemical messengers called neurotransmitters prompt neighboring cells to initiate their own electrical impulses. These neurotransmitters are received along branching extensions of a nerve cell called dendrites. From each of the many dendrites on a nerve cell's surface sprouts a forest of mushroom-shaped spines. The head of each mushroom is covered with receptors that bind neurotransmitters launched in bursts across the synapse, the junction between nerve cells. Each spine has a filamentous neck that supports the head. These structures are ubiquitous: spines cover most neurons in the brain and mediate close to 90 percent of all brain connections.
According to Yuste, although scientists have long recognized these spines' importance in neuronal signaling, little was known about their function. “There were traditionally two camps,” he said. “One said that the spines played an electrical role, and changes in the spine neck alter the input strength and are responsible for learning. The other said that something biochemical was going on in the spines to control signaling. And it's fair to say that ninety-nine percent of people currently believe the spines are doing a biochemical job. The idea that the spines were electrical devices had been discredited; it just fell out of favor. But all of this debate was based on indirect evidence and computer models, without any direct data.”
It has been difficult for researchers to study the function of dendritic spines, said Yuste, because the microscopic structures are only about a hundredth the diameter of a human hair in length. Neurobiologists have traditionally used microelectrodes to explore the electrical properties of whole neurons, but dendritic spines are far too small for insertion of even the finest of these.
To overcome that barrier, Yuste and his colleagues developed an entirely new imaging technique. Rather than using electrodes, they chose to fill neurons with a dye whose optical properties vary according to voltage. By measuring the dye's optical "second harmonic" generation, the researchers could determine voltages at any point along a neuron. Moreover, using this optical method, the team was able to measure, for the first time, the membrane potential in a dendritic spine. The new imaging technique was described in a paper in the January 17, 2006, issue of PNAS. Yuste's co-authors on the paper were Mutsuo Nuriya, Boaz Nemet, Jiang Jiang, and Kenneth Eisenthal.
The researchers also used a second optical technique to precisely trigger an electrical impulse in a dendritic spine. To do so, the scientists bathe brain tissue in the neurotransmitter glutamate, which has been chemically modified so that it is "caged," meaning it is not recognizable to the cell. The researchers then use an extremely precise laser beam to zap the head of a single spine, unleashing the glutamate only there and triggering an electrical signal restricted to the zapped spine.
Using these two optical techniques, the researchers then turned their attention to measuring how the length of a spine's neck affects its propagation of electrical impulses. They found that the spine neck acts as an electrical filter for signals from the spine head. Longer spine necks attenuate these signals more than do shorter necks, and the longest necks produced “silent” spines that did not propagate signals down their necks at all, even when triggered by glutamate. These silent spines, the researchers said, could play a significant role in learning if they become active with experience. Overall, they concluded that spines are electrical devices and that their necks keep each spine electrically isolated from the others on the neuron.
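The filtering effect can be caricatured with a toy circuit sketch. This is not the authors' analysis; the resistance values, and the assumption that neck resistance grows linearly with length, are illustrative only. The neck is treated as a series resistor in a voltage divider, so a longer neck passes a smaller fraction of the spine-head potential to the dendrite:

```python
# Toy model of a spine neck as an electrical filter (illustrative values only).
# A longer neck means more series resistance, so less of the spine-head
# voltage reaches the dendrite -- and a long enough neck makes the spine
# effectively "silent."

def neck_resistance(length_um, r_per_um=50.0):
    """Hypothetical neck resistance (megaohms), assumed proportional to length."""
    return r_per_um * length_um

def dendritic_voltage(v_spine_mv, length_um, r_dendrite=100.0):
    """Voltage-divider attenuation of a spine-head potential at the dendrite."""
    r_neck = neck_resistance(length_um)
    return v_spine_mv * r_dendrite / (r_dendrite + r_neck)

for length in (0.5, 1.0, 2.0, 4.0):
    print(f"neck {length} um -> {dendritic_voltage(20.0, length):.2f} mV at dendrite")
```

In this sketch a 20 mV spine-head signal shrinks steadily as the neck lengthens, mirroring the attenuation the researchers measured.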
Those findings were published in a paper in the November 21, 2006, issue of PNAS. Yuste's co-authors on the paper were Roberto Araya, Jiang Jiang, and Kenneth Eisenthal.
In a third PNAS paper, the researchers studied the effects of stimulating two spines at the same time in order to understand the functional reason behind the electrical isolation of spines. They found that the spines allow a cell to detect each incoming signal individually. Signals from multiple spines are then added to one another to generate a summation signal in the dendrite. Interestingly, the summation of spine signals was linear. The neuron summed activated inputs just the way schoolchildren are taught: one plus one equals two. If, on the other hand, two activated inputs were located directly on the dendrite, instead of on spines, they interacted with each other: one plus one equaled less than two.
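The contrast between the two kinds of summation can be sketched with a toy conductance model (an illustration under assumed values, not the paper's measurements). Inputs on separate spines inject nearly independent currents, so their depolarizations simply add; two inputs at the same dendritic site share a single driving force, so the second input contributes less than the first:

```python
# Toy conductance model of input summation (illustrative parameters only).
# Each synaptic input is a conductance g pulling the membrane toward a
# reversal potential E_SYN above rest.

E_SYN = 60.0   # assumed driving force from rest to reversal, mV
G_LEAK = 1.0   # assumed leak conductance, arbitrary units

def depolarization(g_total):
    """Steady-state depolarization for total synaptic conductance g_total."""
    return E_SYN * g_total / (G_LEAK + g_total)

g = 0.1
one_input = depolarization(g)
two_on_spines = 2 * one_input         # separate spines: currents add linearly
two_colocated = depolarization(2 * g) # same dendritic site: shared driving force

print(f"one input:          {one_input:.2f} mV")
print(f"two inputs, spines: {two_on_spines:.2f} mV (linear: 1 + 1 = 2)")
print(f"two inputs, shaft:  {two_colocated:.2f} mV (sublinear: 1 + 1 < 2)")
```

In this caricature, isolating each input on its own spine recovers the schoolroom arithmetic the researchers observed, while co-located inputs on the dendritic shaft sum to less than two.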
Therefore, Yuste explained, it appears that by making connections onto spines, neurons can add their inputs according to a simple arithmetic sum. This linear arithmetic of excitatory inputs had been observed before by Yuste and others, but this paper finally explained why it happens and, in doing so, provided a functional logic for the spines. That paper was published online the week of November 20, 2006, in the PNAS Early Edition. Araya, Eisenthal, and Yuste were the authors of that paper.
Together, the three papers reveal dendritic spines as fundamentally important components of the brain as an adaptively learning, calculating machine, said Yuste. “Our findings are consistent with the hypothesis that the purpose of the filtering we discovered is to enable neurons to linearly integrate inputs without interfering with one another,” he said.
“We believe these findings have major implications for understanding synaptic function, which is at the heart of neuroscience,” said Yuste. “For one thing, it challenges a basic assumption that researchers have made about neurons -- that they can apply a voltage to a neuron and change the voltage of a synapse accordingly. Now we know that the whole array of spine lengths, with some electrically silent, can influence that synaptic voltage.”
More broadly, said Yuste, the findings suggest new ways of thinking about how the brain learns. “Our paper reopens the old idea that synaptic plasticity, or learning, could be related to changes in the length of the spine necks. Maybe long spines constitute a reservoir of synapses or circuits; and when an animal learns, the necks shorten and activate those synapses.” Perhaps significantly, the researchers noted, human brains have a larger population of long-necked spines than the brains of other animals, which might contribute to humans' enhanced learning ability.
Finally, Yuste said the new findings could offer deep insight into how the brain functions as a computational machine. “We are still in the dark ages with respect to understanding the logic of the brain's computational circuitry,” he said. “But given that more than 90 percent of brain inputs are mediated by spines, the fact that we have shown that these spines act to make the summation of inputs linear suggests to us that neural circuitry may have evolved to be linear. It makes sense: linear circuits are widely used in engineering because they are well-behaved. When I have discussed these issues with my electrical engineering and computer science colleagues, they tell me such properties offer major computational advantages.”