Mathematics holds the key to understanding how we think

The structural unit of the brain, the neuron, has long been studied in order to understand how the brain works. Essential to an understanding of the brain is an understanding of the signaling taking place within it. Dendrites are treelike extensions at the receiving end of a neuron, where incoming electrical and chemical signals are combined and passed on to the soma. The question of how neurons, individually and in networks, process incoming information has been addressed largely from an electrical engineering perspective, by modeling the axon as a cable and evaluating the flow of current using differential equations.[1][2] However, a thorough study of information processing at a more fundamental level, that of the dendrites, remains to be done, mainly because of the physical constraints of recording data from a single dendrite.
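
For reference, the passive cable model mentioned above describes the membrane potential V(x, t) along a uniform fiber with a single partial differential equation. The form shown below is the standard textbook version, not anything specific to this study:

```latex
% Passive cable equation for the membrane potential V(x, t):
%   lambda  - electrotonic length constant
%   tau_m   - membrane time constant
\lambda^{2}\,\frac{\partial^{2} V}{\partial x^{2}}
  = \tau_{m}\,\frac{\partial V}{\partial t} + V,
\qquad
\lambda = \sqrt{\frac{r_{m}}{r_{i}}},
\qquad
\tau_{m} = r_{m} c_{m},
```

where r_m and c_m characterize the membrane's resistance and capacitance and r_i the intracellular (axial) resistance of the cable.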

Most neurons have elaborate dendritic trees that receive tens of thousands of synaptic inputs. What happens to the input at the tips of the dendrites matters because the integration of many synaptic responses is what depolarizes a neuron to the action potential threshold.[3] Think of it as several weak incoming signals combining to produce a strong response from the neuron. The dendrites play an important role in such integration because they converge inward on the soma.[4]

Figure 1. Schematic of the binary dendritic tree: nodes c and y mark the tips of two dendrites, and x marks the soma.
The current research models an active dendritic tree, i.e., one that exerts different physiological effects on the incoming signal. We constructed a binary dendritic tree based on a fractal pattern. A fractal is a self-similar, never-ending pattern: for example, divide a line segment in half, divide each half in half, each quarter in half, and so on.[5] In Figure 1, the nodes c and y show the tips of two dendrites, and x represents the soma of the cell.

A random number generator was used to assign voltage values to the outermost branch tips, and the voltage was modeled as beginning at the tips and propagating inward toward the soma. The tree includes two further features: i) as the voltage propagates through the tree, it encounters resistance, and this resistance decreases as the signal moves closer to the soma, since the diameter of the branch increases; and ii) at each branch point, the voltage passed on is taken as the average of the two values arriving from the branches above it.
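
To make the propagation rule concrete, the sketch below is a minimal Python illustration of our reading of the model, not the original simulation code: tip voltages are drawn uniformly from [0, 1], each branch point averages the two values reaching it, and the result is attenuated by a factor that loses the most at the thin outer branches and the least near the soma (the linear attenuation law and the parameter names are assumptions).

```python
import random

def simulate_tree(depth=8, resistance=0.1, tip_values=None):
    """Propagate tip voltages inward through a binary dendritic tree.

    At every branch point the two incoming values are averaged; the
    result is then attenuated by a depth-dependent factor, so that the
    loss is largest at the thin outer branches and smallest near the
    soma, where the branch diameter is largest (assumed linear law).
    """
    n_tips = 2 ** depth
    if tip_values is None:
        tip_values = [random.random() for _ in range(n_tips)]
    level = list(tip_values)
    for d in range(depth, 0, -1):          # d = depth at the tips, 1 next to the soma
        attenuation = 1.0 - resistance * d / depth
        level = [attenuation * (level[i] + level[i + 1]) / 2.0
                 for i in range(0, len(level), 2)]
    return level[0]                        # final average voltage (vav) at the soma

print(simulate_tree())
```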

The tree simulation was run with various fixed and random inputs to generate a final average voltage value (hereafter vav) between 0 and 1. The simulation was run 500 times to sample the range of values that reach the soma after being acted upon by the binary dendritic tree. The data were then binned using equal-width partitioning: for the 500 trials, 50 bins were created, so that each bin would hold 10 values on average.
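
The binning step is equally simple to sketch. The snippet below assumes the simulate_tree helper from the previous sketch and uses numpy.histogram for the equal-width partitioning.

```python
import numpy as np

# 500 trials of the tree simulation (simulate_tree from the sketch above).
trials = np.array([simulate_tree() for _ in range(500)])

# Equal-width partitioning: 50 bins spanning [0, 1], ~10 values per bin on average.
counts, edges = np.histogram(trials, bins=50, range=(0.0, 1.0))
for lo, hi, c in zip(edges[:-1], edges[1:], counts):
    print(f"[{lo:.2f}, {hi:.2f}): {c}")
```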

Figure 2. Distribution of final average voltage (vav) values when the zeroed branch tips are clustered versus randomly spread.
From an evolutionary perspective, it can be argued that the most efficient neural coding would require the minimum possible activity of the dendrites, since all activity costs energy. We therefore thought it would be of interest to find out how setting some branch tips to zero, while generating a random number between 0 and 1 at the other branch tips, would change the vav. The results suggested that when the number of zeroes introduced (hereafter zeronum) is small (< 30), the vav output is largely unchanged; however, the output starts to change when zeronum is large, depending on whether the zeroes are clustered together or spread apart. A set of experiments was therefore carried out in which the zeroed branch tips were either spread randomly across the tree or placed in succession. Intuitively, one would expect zeroes clustered together to persist for longer and result in a smaller vav than the same zeronum spread out randomly. This, however, does not appear to be the case (see Figure 2).
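
The two zeroing schemes can be expressed as small helpers that build the tip array handed to the simulation. The helper names, and the reuse of simulate_tree from the earlier sketch, are our own illustrative choices:

```python
import random

def tips_with_random_zeros(n_tips, zeronum):
    """Zero out `zeronum` branch tips chosen uniformly at random."""
    tips = [random.random() for _ in range(n_tips)]
    for i in random.sample(range(n_tips), zeronum):
        tips[i] = 0.0
    return tips

def tips_with_clustered_zeros(n_tips, zeronum, start=0):
    """Zero out `zeronum` successive branch tips beginning at `start`."""
    tips = [random.random() for _ in range(n_tips)]
    for i in range(start, start + zeronum):
        tips[i % n_tips] = 0.0
    return tips

# Compare the two schemes for a large zeronum on a depth-8 (256-tip) tree.
n_tips, zeronum = 2 ** 8, 64
vav_spread    = simulate_tree(depth=8, tip_values=tips_with_random_zeros(n_tips, zeronum))
vav_clustered = simulate_tree(depth=8, tip_values=tips_with_clustered_zeros(n_tips, zeronum))
```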

For the case when randomly chosen branch tips were minimized at very high resistance values, the final vav output was observed to lie in three distinct regions. For the case when successive branch tips were minimized at very high resistance values, the final vav output lay in two distinct regions (the pattern seen in Figure 2). This raised the question of how much the location of the minima on the tree matters.

Figure 3. vav distributions for the five different placements of the minima on the tree.
We hypothesized that, in going from a successive to a random distribution of zeroes, the two-cycle would at some point change into a three-cycle. We therefore simulated the tree with varying placements of the minima: i) successive minima at the start of the tree, ii) successive minima in the middle of the tree, iii) two clusters of successive minima at the two ends of the tree, iv) three clusters of minima located one-third of the tree apart, and v) randomly distributed minima. The findings are summarized in Figure 3.
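
The five placements can be generated with one further helper that drops equal clusters of zeros at chosen starting positions, reusing tips_with_random_zeros and simulate_tree from the earlier sketches; the condition labels below are our own shorthand for conditions i)–v):

```python
def tips_with_clusters(n_tips, zeronum, starts):
    """Zero out `zeronum` tips, split into equal clusters beginning at `starts`."""
    tips = [random.random() for _ in range(n_tips)]
    per_cluster = zeronum // len(starts)
    for s in starts:
        for i in range(s, s + per_cluster):
            tips[i % n_tips] = 0.0
    return tips

n_tips, zeronum = 2 ** 8, 64
conditions = {
    "i)   successive, start":  tips_with_clusters(n_tips, zeronum, [0]),
    "ii)  successive, middle": tips_with_clusters(n_tips, zeronum, [n_tips // 2 - zeronum // 2]),
    "iii) two end clusters":   tips_with_clusters(n_tips, zeronum, [0, n_tips - zeronum // 2]),
    "iv)  three thirds":       tips_with_clusters(n_tips, zeronum, [0, n_tips // 3, 2 * n_tips // 3]),
    "v)   random":             tips_with_random_zeros(n_tips, zeronum),
}
vavs = {name: simulate_tree(depth=8, tip_values=tips) for name, tips in conditions.items()}
```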

The preliminary results suggest that when the minima are clustered, the vav values fall into two regions. These can be interpreted as the firing and non-firing, or ON and OFF, regions: if the vav lies in the ON (non-zero) region of the graph, the soma fires; if it lies in the OFF region, there is no spiking. The picture is more complex for randomly distributed minima, where the vav values fall into three regions. Although, from an information-theory perspective, having three regions in which to store information implies a higher information capacity, such a claim requires experimental verification.
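
The information-capacity remark can be made quantitative: if each soma reading is idealized as a symbol drawn from equally likely, perfectly distinguishable output regions, the maximum information per reading is the base-2 logarithm of the number of regions.

```latex
% Idealized capacity per reading (equiprobable, noiseless regions):
I_{2} = \log_{2} 2 = 1 \ \text{bit}, \qquad
I_{3} = \log_{2} 3 \approx 1.58 \ \text{bits}.
```

In practice, noise and unequal region probabilities would reduce both figures, which is one reason experimental verification is needed.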

The findings from this study can advance the understanding of sensory information processing from the single-cell to the network level. Very little is known about how the biophysical properties of single neurons are actually used to implement specific computations, and the role of dendrites in such computations also remains to be investigated. Mathematical analysis of signal propagation through a dendritic tree can therefore help clarify the mechanisms underlying information processing by neurons and neuronal networks.

References

  1. I. Segev, What do dendrites and their synapses tell the neuron?, J. Neurophysiol. 95, pp. 1295–1297, 2006.
  2. W. Gerstner and W. M. Kistler, Spiking Neuron Models. Single Neurons, Populations, Plasticity, Cambridge University Press, 2002.
  3. A. T. Gulledge, B. M. Kampa, and G. J. Stuart, Synaptic integration in dendritic trees, J. Neurobiol. 64, pp. 75–90, 2005.
  4. P. Jones and F. Gabbiani, Logarithmic compression of sensory signals within the dendritic tree of a collision-sensitive neuron, J. Neurosci. 32, pp. 4923–4934, 2012.
  5. R. S. Strichartz, Differential Equations on Fractals, Princeton University Press, 2006.
