MIT HST 722 - Information theory and neural coding

Review • nature neuroscience • volume 2 no 11 • november 1999

The brain processes sensory and motor information in multiple stages. At each stage, neural representations of stimulus features or motor commands are manipulated. Information is transmitted between neurons by trains of action potentials (spikes) or, less frequently, by graded membrane potential shifts. The 'neural code' refers to the neural representation of information, and its study can be divided into three interconnected questions. First, what is being encoded? Second, how is it being encoded? Third, with what precision? Neurophysiologists initially approached these questions by measuring stimulus–response curves, using mainly static stimuli. The stimulus (x-axis) indicates what is being encoded, the response (y-axis) and the curve's shape determine how it is being encoded, and error bars indicate the code's precision. By using different stimulus ensembles and different response measures, one can begin to answer questions one and two. The precision of the code is implicit in the variance but has also been addressed directly by quantifying how well stimuli can be discriminated based on neural responses.

Measuring neural reliability is important for many reasons related to how the three questions interconnect. The crucial first question cannot be answered directly but will always depend on the investigator's intuition and experience in choosing relevant stimulus parameters. Moreover, how such parameters vary in the chosen stimulus ensemble can lead to different results. For example, an auditory physiologist interested in frequency tuning might obtain different results from pure tones versus white noise. One way to validate the choice of stimulus parameters and ensemble is to compare behavioral performance to the best performance possible by an ideal observer of the neural data.
A match between behavioral and neural discrimination suggests that the chosen encoding description is relevant and perhaps directly involved in generating behavior [1–3].

Information theory, the most rigorous way to quantify neural code reliability, is an aspect of probability theory that was developed in the 1940s as a mathematical framework for quantifying information transmission in communication systems [4]. The theory's rigor comes from measuring information transfer precision by determining the exact probability distribution of outputs given any particular signal or input. Moreover, because of its mathematical completeness, information theory has fundamental theorems on the maximum information transferable in a particular communication channel. In engineering, information theory has been highly successful in estimating the maximal capacity of communication channels and in designing codes that take advantage of it. In neural coding, information theory can be used to precisely quantify the reliability of stimulus–response functions, and its usefulness in this context was recognized early [5–8].

We argue that this precise quantification is also crucial for determining what is being encoded and how. In this respect, researchers have recently taken greater advantage of information-theoretic tools in three ways. First, the maximum information that could be transmitted as a function of firing rate has been estimated and compared to actual information transfer as a measure of coding efficiency. Second, actual information transfer has been measured directly, without any assumptions about which stimulus parameters are encoded, and compared to the necessarily smaller estimate obtained by assuming a particular stimulus–response model. Such comparisons permit quantitative evaluation of a model's quality. Third, researchers have determined the 'limiting spike timing precision' used in encoding, that is, the minimum time scale over which neural responses contain information.
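The mutual-information quantity underlying these comparisons can be made concrete with a small sketch. Assuming stimulus and response have already been discretized into a joint probability table p(s, r) — a simplification of the continuous estimates discussed in this review — the information in bits is I(S;R) = Σ p(s,r) log2[ p(s,r) / (p(s)p(r)) ]. The function name and the toy distributions below are ours, not from the paper:

```python
import numpy as np

def mutual_information(joint):
    """Mutual information I(S;R) in bits from a joint probability table.

    joint[i, j] = p(stimulus i, response j); entries must sum to 1.
    """
    joint = np.asarray(joint, dtype=float)
    p_s = joint.sum(axis=1, keepdims=True)   # marginal p(s), column vector
    p_r = joint.sum(axis=0, keepdims=True)   # marginal p(r), row vector
    nz = joint > 0                           # convention: 0 * log 0 = 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (p_s * p_r)[nz])))

# A noiseless binary channel: the stimulus fully determines the response,
# so the response carries exactly H(S) = 1 bit about the stimulus.
print(mutual_information([[0.5, 0.0],
                          [0.0, 0.5]]))    # 1.0

# A response independent of the stimulus carries 0 bits.
print(mutual_information([[0.25, 0.25],
                          [0.25, 0.25]]))  # 0.0
```

Any stimulus–response model (for example, a tuning curve plus assumed noise) implies such a joint table, so its predicted information can be compared against the model-free estimate in exactly this way.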
We review recent work using some or all of these calculations [9–16], focusing on the goodness of simple linear models commonly used to describe how sensory neurons encode dynamic stimuli. We conclude that these models often capture much of the transmitted information, and that each spike carries information.

Information-theoretic calculations also show that certain neurons use precise temporal (millisecond) spiking patterns in encoding. Precise spike timing had previously been identified in the auditory system, where it is important for sound localization [17] and echolocation [18], and also more recently elsewhere in the CNS [19]. The interesting question is whether spike timing precision is greater than necessary to encode the stimulus. New information-theoretic techniques address that question by quantifying spiking precision and comparing it to the minimal precision required for encoding in a variety of sensory systems [2,9,13–16].

Information theory and neural coding

Alexander Borst (1) and Frédéric E. Theunissen (2)
(1) ESPM-Division of Insect Biology and (2) Dept. of Psychology, University of California, Berkeley, California 94720, USA
Correspondence should be addressed to A.B. ([email protected])

Information theory quantifies how much information a neural response carries about the stimulus. This can be compared to the information transferred in particular models of the stimulus–response function and to maximum possible information transfer. Such comparisons are crucial because they validate assumptions present in any neurophysiological analysis. Here we review information-theory basics before demonstrating its use in neural coding. We show how to use information theory to validate simple stimulus–response models of neural coding of dynamic stimuli. Because these models require specification of spike timing precision, they can reveal which time scales contain information in neural coding. This approach shows that dynamic stimuli can be encoded efficiently by single neurons and that each spike contributes to information transmission. We argue, however, that the data obtained so far do not suggest a temporal code, in which the placement of spikes relative to each other yields additional information.

We contrast the role of precise spiking in encoding dynamic stimuli to its potential role in situations where the stimuli do not vary rapidly in time, so that precise spike patterns could carry additional information
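The 'direct' style of estimate and the limiting time scale can be illustrated with a deliberately simplified sketch. Assuming repeated presentations of the same dynamic stimulus, responses binned at resolution dt, and words of consecutive bins treated as response symbols, transmitted information is approximated as the total word entropy minus the noise entropy (the trial-to-trial entropy at each time, averaged over times), divided by word duration. All names here are ours, and this toy version omits the finite-data corrections that real estimates of this kind require; rerunning it at several values of dt indicates the time scale below which finer binning adds no further information:

```python
import numpy as np

def info_rate(trials, dt, word_len):
    """Crude direct-method estimate of information rate in bits per second.

    trials: 2-D array (n_repeats x n_bins) of 0/1 spike counts from repeated
    presentations of the SAME dynamic stimulus, binned at dt seconds.
    """
    trials = np.asarray(trials)
    n_rep, n_bins = trials.shape
    n_words = n_bins // word_len
    # Cut each trial into words of word_len consecutive bins.
    words = trials[:, :n_words * word_len].reshape(n_rep, n_words, word_len)

    def entropy(samples):
        # Entropy in bits of the empirical distribution over word patterns.
        _, counts = np.unique(samples, axis=0, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    h_total = entropy(words.reshape(-1, word_len))                  # H(words)
    h_noise = np.mean([entropy(words[:, t]) for t in range(n_words)])
    return (h_total - h_noise) / (word_len * dt)

# Perfectly reproducible trials: zero noise entropy, so every bit of word
# entropy counts as transmitted information.
repeats = np.tile([1, 0, 0, 1], (5, 1))
print(info_rate(repeats, 0.001, 2))   # 500.0 bits/s
```

With real spike trains, h_total and h_noise are both biased upward by limited sampling, which is why the work reviewed here pairs such word-entropy estimates with extrapolation and bias-correction procedures.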

