MIT HST 722 - Information theory in auditory research

Contents: Introduction; Why mutual information?; The mutual information as a measure of stimulus effect; Useful properties of the MI (symmetry, scale, the information processing inequality); Other interpretations of MI; Practice of mutual information estimation; Applications to the auditory system; Concluding remarks; Acknowledgements; References.

Research paper

Information theory in auditory research

Israel Nelken (a,*), Gal Chechik (b)

(a) Department of Neurobiology and The Interdisciplinary Center for Neural Computation, Silberman Institute of Life Sciences, Safra Campus, Givat Ram, Hebrew University, Jerusalem 91904, Israel
(b) Computer Science Department, Stanford University, Stanford CA 94305, USA

Received 10 September 2006; received in revised form 22 November 2006; accepted 3 January 2007. Available online 16 January 2007.
Hearing Research 229 (2007) 94–105. doi:10.1016/j.heares.2007.01.012

Abstract

Mutual information (MI) is in increasing use as a way of quantifying neural responses. However, it is still regarded with some doubt by many researchers, because it is not always clear what MI really measures, and because MI is hard to calculate in practice. This paper aims to clarify these issues. First, it provides an interpretation of mutual information as variability decomposition, similar to the standard variance decomposition routinely used in statistical evaluations of neural data, except that the measure of variability is entropy rather than variance. Second, it discusses those aspects of the MI that make its calculation difficult. The goal of this paper is to clarify when and how information theory can be used informatively and reliably in auditory neuroscience.

© 2007 Elsevier B.V. All rights reserved.

Keywords: Auditory system; Information theory; Entropy; Mutual information; Variance decomposition; Neural code

1. Introduction

In recent years, information-theoretic measures have been used increasingly in neuroscience in general, and in auditory research in particular, as tools for studying and quantifying neural activity. Measures such as entropy and mutual information (MI) can be used to gain deep insight into neural coding, but they can also be badly abused. This paper is an attempt to present those theoretical and practical issues that we found particularly pertinent when using information-theoretic measures to analyze neural data.

The experimental context for this paper is that of measuring a stimulus–response relationship. In a typical experiment, a relatively small number of stimuli (<100) are presented repeatedly, typically 1–100 repeats for each stimulus. The main experimental question is whether the neuronal activity differed in response to the different stimuli. If so, it is concluded that the signal whose activity is monitored (single-neuron responses, evoked potentials, optical signals, and so on) was selective to the parameter manipulated in the experiment.

The MI is a measure of the strength of association between two random variables. The MI, I(S; R), between the stimuli S and the neural responses R is defined in terms of their joint distribution p(S, R). When this distribution is known exactly, the MI can be calculated as

I(S; R) = \sum_{s \in S,\, r \in R} p(s, r) \log_2 \frac{p(s, r)}{p(s)\, p(r)}

where p(s) = \sum_{r \in R} p(s, r) and p(r) = \sum_{s \in S} p(s, r) are the marginal distributions over the stimuli and responses, respectively.
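As an illustration of this definition, the following minimal sketch (ours, not from the paper) computes the "plug-in" MI estimate directly from a table of joint stimulus–response counts; the function name and the toy count matrix are purely illustrative.

```python
import numpy as np

def mutual_information_plugin(counts):
    """Plug-in MI estimate, in bits, from a joint stimulus-response count table.

    counts[i, j] = number of trials on which stimulus i evoked response category j
    (e.g. j = spike count in the response window).
    """
    counts = np.asarray(counts, dtype=float)
    p_sr = counts / counts.sum()               # joint distribution p(s, r)
    p_s = p_sr.sum(axis=1, keepdims=True)      # marginal over responses -> p(s)
    p_r = p_sr.sum(axis=0, keepdims=True)      # marginal over stimuli   -> p(r)
    nonzero = p_sr > 0                         # 0 * log 0 is taken as 0
    return np.sum(p_sr[nonzero] * np.log2(p_sr[nonzero] / (p_s @ p_r)[nonzero]))

# Toy example: 2 stimuli x 3 response categories (the counts are made up).
counts = [[18, 2, 0],
          [4, 10, 6]]
print(mutual_information_plugin(counts))       # MI in bits
```

This plug-in estimate is known to be biased upward when the number of trials per stimulus is small, one of the practical estimation difficulties the paper takes up later.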
The easy way to use the MI is to test for significant association between the two variables. Here the null hypothesis is that the two variables are independent. The distribution of the MI under the null hypothesis is (with appropriate scaling) that of a χ² variable, leading to a significance test for the presence of association (e.g. Sokal and Rohlf, 1981, where it is called the G-statistic). When the MI is used in this way, only its size relative to the critical value of the test is of importance.

A more complicated way of using the MI is to try to estimate its actual value, in which case it is possible to make substantially deeper inferences regarding the relationships between the two variables. This estimation is substantially more difficult than performing the significance test. The reasons to undertake this hard estimation problem, and the associated difficulties, are the main subject of this paper.

2. Why mutual information?

2.1. The mutual information as a measure of stimulus effect

Neuronal responses are high-dimensional: to characterize in full detail any single spiking response to a stimulus presentation, it is necessary to specify many values, such as the number of spikes that occurred during the relevant response window and their precise times. Similarly, membrane potential fluctuates at >1000 Hz, and therefore more than 200 measurements are required to fully specify a 100 ms response. We usually believe that most of the details in such representations are unimportant, and instead of specifying all of these values, a single value is typically used to summarize each response – for example, the total spike count during the response window, the first-spike latency, or other such simple measures, which will be called 'reduced measures' of the actual response below.

Having reduced the representation of the responses to a single value, it is now possible to test whether the stimuli had an effect on the responses. Usually, the effect that is tested is a dependence of the firing rate of the neuron on stimulus parameters. For example, to demonstrate frequency selectivity, we look for changes in the firing rate of a neuron as a function of tone frequency.

To understand what information-theoretic measures tell us about neuronal responses, let us consider the standard methods for performing such tests in detail. A test for a significant difference between means is really about comparing variances (Fig. 1): the variation between response means has to be large enough relative to the variation between responses to repeated presentations of the same stimulus. Initially, all the responses to all stimuli are pooled together, and the overall variability is estimated by the variance of this set of values around its grand mean. Fig. 1 shows the analysis of artificial data representing 20 repeats of each of two stimuli (these are actually samples from two Poisson distributions).
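To make the decomposition in Fig. 1 concrete, here is a small sketch (ours, not from the paper) that simulates 20 repeats of each of two stimuli as Poisson spike counts, splits the total variance into within-stimulus and between-stimulus components, and also runs the MI-based significance test described above: under independence, G = 2·N·ln(2)·MI (with MI in bits) is approximately χ²-distributed with (|S|−1)·(|R|−1) degrees of freedom. The Poisson means (3 and 7 spikes per trial) are arbitrary illustrative values, not those used in the paper's figure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Artificial data: 20 repeats of each of two stimuli, drawn from Poisson
# distributions with illustrative means of 3 and 7 spikes per trial.
n_repeats = 20
responses = [rng.poisson(3, n_repeats), rng.poisson(7, n_repeats)]

# --- Variance decomposition, as in a one-way ANOVA ---
pooled = np.concatenate(responses)
total_var = pooled.var()                                  # around the grand mean
within_var = np.mean([r.var() for r in responses])        # repeats of the same stimulus
between_var = np.var([r.mean() for r in responses])       # variation between stimulus means
print(f"total {total_var:.2f} = within {within_var:.2f} + between {between_var:.2f}")

# --- The same data as a joint count table, and the MI-based G-test ---
counts = np.array([np.bincount(r, minlength=pooled.max() + 1) for r in responses])
counts = counts[:, counts.sum(axis=0) > 0]   # keep only observed response categories

p_sr = counts / counts.sum()                 # joint distribution p(s, r)
p_s = p_sr.sum(axis=1, keepdims=True)
p_r = p_sr.sum(axis=0, keepdims=True)
nz = p_sr > 0
mi_bits = np.sum(p_sr[nz] * np.log2(p_sr[nz] / (p_s @ p_r)[nz]))

g = 2 * counts.sum() * np.log(2) * mi_bits   # G-statistic
dof = (counts.shape[0] - 1) * (counts.shape[1] - 1)
print(f"MI = {mi_bits:.3f} bits, G = {g:.1f}, p = {stats.chi2.sf(g, dof):.3g}")
```

Both halves of the sketch address the same question (how much of the response variability is attributable to the stimulus); the MI simply measures variability with entropy rather than variance, which is the variability-decomposition view developed in the paper.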

