Foundational Papers in Complexity Science pp. 2251–2267
DOI: 10.37911/9781947864559.71
What a Neuron Says and How It Says It
Author: Vijay Balasubramanian, University of Pennsylvania; Santa Fe Institute
Excerpt
Neurons are information-processing devices. Their inputs are environmental signals like light and pressure, or chemical ligands like volatile molecules or neurotransmitters released by presynaptic neurons in a circuit. These signals activate receptors embedded in the cell membrane, often on a neuron’s dendritic arbor, triggering currents that change the membrane potential. In some neurons, a sufficient increase in the potential triggers a sharp voltage “spike” that propagates down the axon, leading to neurotransmitter release at the output synapses and transmission to the postsynaptic neurons. The net effect is that the neuron reconfigures and transmits some of the information in its inputs to the next stage.
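The threshold-and-spike behavior described above is commonly abstracted as a leaky integrate-and-fire model. The Python sketch below is a minimal illustration of that abstraction, not the biophysical account in the excerpt; the membrane parameters and input current are illustrative assumptions.

```python
import numpy as np

def leaky_integrate_and_fire(input_current, dt=1e-4, tau=0.02,
                             v_rest=-0.070, v_thresh=-0.054,
                             v_reset=-0.070, resistance=1e8):
    """Minimal leaky integrate-and-fire neuron (illustrative parameters, SI units).

    Integrates input current into the membrane potential; when the potential
    crosses threshold, a spike time is recorded and the potential is reset.
    """
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # Leak toward the resting potential plus drive from the input current
        dv = (-(v - v_rest) + resistance * i_in) * dt / tau
        v += dv
        if v >= v_thresh:              # "a sufficient increase in the potential"
            spike_times.append(step * dt)
            v = v_reset                # spike, then reset
    return spike_times

# Example: a constant 0.2 nA input for 200 ms yields a regular spike train
current = np.full(2000, 0.2e-9)
print(leaky_integrate_and_fire(current))
```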
In the mid-twentieth century, scientists realized that if we regard neurons in this way, communications theory should provide a framework for understanding the organization and function of neural circuits. In particular, the brain sciences quickly absorbed Claude Shannon’s invention of information theory in 1948. This led to the seminal works of Fred Attneave in the mid-1950s and Horace Barlow in the early 1960s, which proposed applying ideas from information theory to interpreting sensory perception and its neural basis. For example, Barlow proposed that the lateral inhibition seen ubiquitously in early mammalian sensory circuits provided a mechanism for removing correlations in natural stimuli, thereby compressing them for maximal transmission through bottlenecks like the optic nerve.
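Barlow's decorrelation idea can be illustrated numerically. The Python sketch below is my own illustration, not taken from the excerpt: it generates a spatially correlated one-dimensional "stimulus," applies a crude center-surround filter as a stand-in for lateral inhibition, and compares neighbor correlations before and after filtering to show the reduction in redundancy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Spatially correlated "natural" stimulus: white noise smoothed by an
# exponential kernel, so neighboring samples are strongly correlated
raw = rng.standard_normal(10_000)
kernel = np.exp(-np.abs(np.arange(-20, 21)) / 5.0)
stimulus = np.convolve(raw, kernel / kernel.sum(), mode="same")

# Center-surround filter: excitatory center minus inhibitory surround,
# a rough analogue of lateral inhibition in early sensory circuits
center_surround = -np.ones(7) / 7.0
center_surround[3] += 1.0
response = np.convolve(stimulus, center_surround, mode="same")

def neighbor_corr(x):
    """Correlation between adjacent samples of a signal."""
    return np.corrcoef(x[:-1], x[1:])[0, 1]

print(f"neighbor correlation, raw stimulus:       {neighbor_corr(stimulus):.3f}")
print(f"neighbor correlation, filtered response:  {neighbor_corr(response):.3f}")
```

The filtered response carries much weaker correlations between neighboring samples, which is the sense in which lateral inhibition compresses a redundant stimulus before it passes through a bottleneck such as the optic nerve.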
Bibliography
Atick, J. J., and A. N. Redlich. 1992. “What Does the Retina Know about Natural Scenes?” Neural Computation 4 (2): 196–210. https://doi.org/10.1162/neco.1992.4.2.196.
Attneave, F. 1959. Applications of Information Theory to Psychology: A Summary of Basic Concepts, Methods, and Results. New York, NY: Holt, Rinehart, and Winston.
Barlow, H. B. 1961. “Possible Principles Underlying the Transformation of Sensory Messages.” In Sensory Communication, edited by W. A. Rosenblith, 217–233. Cambridge, MA: MIT Press. https://doi.org/10.7551/mitpress/9780262518420.003.0013.
Bialek, W., and A. Zee. 1990. “Coding and Computation with Neural Spike Trains.” Journal of Statistical Physics 59:103–115. https://doi.org/10.1007/bf01015565.
Koch, K., J. McLean, R. Segev, M. A. Freed, M. J. Berry, V. Balasubramanian, and P. Sterling. 2006. “How Much the Eye Tells the Brain.” Current Biology 16 (14): 1428–1434. https://doi.org/10.1016/j.cub.2006.05.056.
Laughlin, S. 1981. “A Simple Coding Procedure Enhances a Neuron’s Information Capacity.” Zeitschrift für Naturforschung C 36 (9–10): 910–912. https://doi.org/10.1515/znc-1981-9-1040.
Meister, M., and M. J. Berry. 1999. “The Neural Code of the Retina.” Neuron 22 (3): 435–450. https://doi.org/10.1016/s0896-6273(00)80700-x.
Ratliff, C. P., B. G. Borghuis, Y. H. Kao, P. Sterling, and V. Balasubramanian. 2010. “Retina is Structured to Process an Excess of Darkness in Natural Scenes.” Proceedings of the National Academy of Sciences 107 (40): 17368–17373. https://doi.org/10.1073/pnas.1005846107.
Rieke, F., D. Warland, R. de Ruyter van Steveninck, and W. Bialek. 1999. Spikes: Exploring the Neural Code. Cambridge, MA: MIT Press.
Schneidman, E., M. J. Berry, R. Segev, and W. Bialek. 2006. “Weak Pairwise Correlations Imply Strongly Correlated Network States in a Neural Population.” Nature 440 (7087): 1007–1012. https://doi.org/10.1038/nature04701.
Shannon, C. E. 1948. “A Mathematical Theory of Communication.” The Bell System Technical Journal 27:379–423, 623–656.
Sterling, P., and S. Laughlin. 2015. Principles of Neural Design. Cambridge, MA: MIT Press.
Teşileanu, T., S. Cocco, R. Monasson, and V. Balasubramanian. 2019. “Adaptation of Olfactory Receptor Abundances for Efficient Coding.” eLife 8:e39279. https://doi.org/10.7554/eLife.39279.