
Improved characterization of neural and behavioral response properties using a point-process state-space framework

Anna Alexandra Dreyer
Harvard-MIT Division of Health Sciences and Technology
Speech and Hearing Bioscience and Technology Program
Neurostatistics Research Laboratory, MIT
PI: Emery Brown, M.D., Ph.D.
September 27, 2007

Action potentials as binary events
• Action potentials (spikes) are binary events.
• Cells use the timing and frequency of action potentials to communicate with neighboring cells.
• Most cells emit action potentials spontaneously, in the absence of stimulation.
• Models should therefore begin with the spikes themselves to describe the response most accurately.
[Figure from the laboratory of Mark Ungless]

Point-process framework: definition of the conditional intensity function
• Given a recording interval [0, T), the counting process N(t) represents the number of spikes that have occurred on the interval [0, t).
• A model can be completely characterized by the conditional intensity function (CIF), which defines the instantaneous firing rate at every point in time:

  λ(t | H(t)) = lim_{Δ→0} Pr[N(t + Δ) − N(t) = 1 | H(t)] / Δ

  where H(t) represents the autoregressive (spiking) history up to time t.
Brown et al., 2003; Daley and Vere-Jones, 2003; Brown, 2005

Joint probability of spiking (likelihood function)
• Discretize the recording interval [0, T) into B bins of width Δ.
• As Δ becomes small, λ(t_b | ψ, H_b)Δ, where ψ are the model parameters and H_b is the autoregressive history up to bin b, approaches the probability of seeing one event in a bin of width Δ.
• If Δ is chosen small enough that the probability of more than one event per bin approaches 0, the joint probability can be written as a product of independent Bernoulli events (Truccolo et al., 2005), with a remainder o(Δ^J) giving the probability of two or more events on an interval (t_{b−1}, t_b].
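As a numerical illustration of this Bernoulli discretization, a minimal sketch (the constant toy intensity, function names, and data are invented for the example, not taken from the slides):

```python
import numpy as np

# Minimal sketch of the discretized point-process (Bernoulli) likelihood.
# lam is the conditional intensity sampled on the same 1 ms grid as the
# binary spike train; lam * dt approximates the per-bin event probability.

def bernoulli_loglik(spikes, lam, dt=0.001):
    """Sum over bins of n_b*log(lam_b*dt) + (1 - n_b)*log(1 - lam_b*dt)."""
    p = lam * dt
    return float(np.sum(spikes * np.log(p) + (1 - spikes) * np.log(1 - p)))

# Toy data: 1 s of a constant 20 spikes/s intensity in 1 ms bins.
rng = np.random.default_rng(0)
lam_true = np.full(1000, 20.0)
spikes = (rng.random(1000) < lam_true * 0.001).astype(float)

ll_true = bernoulli_loglik(spikes, lam_true)
ll_wrong = bernoulli_loglik(spikes, np.full(1000, 200.0))  # mis-specified rate
```

Under the true intensity, the log-likelihood should exceed that of a badly mis-specified one; this per-bin Bernoulli product is the building block the encoding and decoding steps below both rely on.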
• The resulting joint probability of the spike train n_{1:B} is

  Pr[n_{1:B} | ψ] = ∏_{b=1}^{B} (λ(t_b | ψ, H_b)Δ)^{n_b} (1 − λ(t_b | ψ, H_b)Δ)^{1−n_b} + o(Δ^J)

Truccolo et al., 2005

An example of using point-process models to analyze auditory data: experimental paradigm
• Recordings of action potentials at 19 stimulus levels, over multiple repetitions of the stimulus.
• We need an encoding model that characterizes the response to each stimulus level as well as the noise in the system.
• Inference: find the lowest stimulus level for which the response exceeds the system noise.
• Given new responses from the same cell, decode the stimulus from which each response originated.
Data from Lim and Anderson (2006)

Modeling example of cortical responses across stimulus levels
• Response characteristics include
– autoregressive components, and
– temporal and rate-dependent elements.
• To achieve adequate goodness of fit and predictive power, a model must capture these elements from the raw data.
[Figure: a rate-only description does NOT capture the typical autoregressive components.]

Point-process state-space framework
• Instantaneous firing intensity model: the firing intensity in each Δ = 1 ms bin b is modeled as a function of the past spiking history H_{l,k,bΔ} and of the effect of the stimulus.
• Observation equation:

  λ_{l,k}(bΔ | θ_l, γ, H_{l,k,bΔ}) = exp( Σ_{r=1}^{R} θ_{l,r} g_r(bΔ) ) · exp( Σ_{j=1}^{J} γ_j n_{l,k,bΔ−j} )

  i.e., conditional firing intensity = (stimulus effect) × (past spiking history effect).
• State equation:

  θ_{l+1,r} = θ_{l,r} + ε_{l+1,r}

  where ε_{l+1,r} is a Gaussian random vector.
Computational methods developed with G. Czanner, U. Eden, and E. Brown

Encoding and decoding methodology
• Estimation/encoding/inference:
– the Expectation-Maximization algorithm is used;
– Monte Carlo techniques estimate confidence bounds for the stimulus effect.
• Goodness of fit: KS plots and the autocorrelation of rescaled interspike times.
• Decoding and inference about response properties.

Expectation-Maximization algorithm
• Used to compute maximum likelihood (ML) parameter estimates in statistical models with hidden variables or missing data.
• The algorithm consists of two steps:
– the expectation (E) step, in which the expectation of the complete-data likelihood is computed, and
– the maximization (M) step, in which that expectation is maximized over the parameters.
• As the algorithm iterates, the initial parameter estimates improve until they converge to the maximum likelihood estimate.
Dempster et al., 1977; McLachlan and Krishnan, 1997; Pawitan, 2001

SS-GLM model of the stimulus effect
• The level-dependent stimulus effect captures many phenomena seen in the data:
– the increase of spiking with level, and
– the spread of excitation in time.
• It removes the effect of the autoregressive history, which is a property of the system rather than of the stimulus.
[Figure: stimulus effect (spikes/s) as a function of level number and time since stimulus onset (ms).]

Threshold inference based on all trials and all levels
• Define threshold as the first stimulus level at which we can be reasonably (>0.95) certain that the response differs from the noise and continues to differ at all higher stimulus levels.
• For this example, threshold is at level 8.
• Compare with the common rate-level-function methodology, which places threshold at level 11.
Dreyer et al., 2007; Czanner et al., 2007

Goodness-of-fit assessment
• The KS plot lies close to the 45-degree line, indicating uniformity of the rescaled spike times.
• The autocorrelation plot shows that the Gaussian-transformed rescaled spike times are essentially uncorrelated, implying independence.
• In contrast, the KS plots for the underlying rate-based models provide a very poor fit to the data.
Johnson & Kotz, 1970; Brown et al., 2002; Box et al., 1994

Decoding based on a single trial
• Decode new data using the estimated encoding parameters ψ̂ = (θ̂, γ̂, Σ̂).
• Given a spike train n*_l, estimate the likelihood that it came from any stimulus s_{l'} in the encoding model.
• Calculate the likelihood for all stimuli s_{1:L}:

  Lik(s_{l'} | n*_l) = ∏_{b=1}^{B} [λ(bΔ | ψ̂_{l'}, H_b)Δ]^{n_b} [1 − λ(bΔ | ψ̂_{l'}, H_b)Δ]^{1−n_b}

• Take the most likely level as the decoded stimulus:

  Lik(s_{1:L} | n*_l) = [Lik(s_1 | n*_l), Lik(s_2 | n*_l), …, Lik(s_L | n*_l)]^T
  Lik_MAX(s_{1:L} | n*_l) = max_l Lik(s_l | n*_l)

Single-trial threshold inference using decoding based on ML is more sensitive.
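The KS-based goodness-of-fit check rests on the time-rescaling theorem: if the model is correct, the integrated intensity between successive spikes is exponential(1), so 1 − exp(−z) is uniform on (0, 1). A toy sketch under an assumed known constant intensity (all names invented, not the analysis from the slides):

```python
import numpy as np

# Sketch of the time-rescaling goodness-of-fit check: rescale interspike
# intervals by the integrated intensity, map to (0,1), and compare the
# empirical CDF against the uniform CDF (the 45-degree KS line).

def rescaled_times(spike_bins, lam, dt=0.001):
    """Integrate lambda between successive spikes; uniform if model is right."""
    cum = np.cumsum(lam) * dt            # Lambda(t) evaluated on the bin grid
    z = np.diff(cum[spike_bins])         # integrated intensity between spikes
    return 1.0 - np.exp(-z)              # exponential(1) -> Uniform(0,1)

def ks_statistic(u):
    """Max distance between the empirical CDF of u and the uniform CDF."""
    u = np.sort(u)
    n = len(u)
    grid = (np.arange(1, n + 1) - 0.5) / n   # model CDF at the order statistics
    return float(np.max(np.abs(u - grid)))

# Toy check: homogeneous Poisson spikes rescaled under the true constant rate.
rng = np.random.default_rng(1)
lam = np.full(20000, 50.0)                   # 50 spikes/s, 1 ms bins, 20 s
spike_bins = np.flatnonzero(rng.random(20000) < lam * 0.001)
u = rescaled_times(spike_bins, lam)
D = ks_statistic(u)
```

Because the simulated spikes really do come from the assumed intensity, D stays well inside the KS confidence band; a mis-specified rate-only model fit to history-dependent data would push the plot far from the 45-degree line, which is the contrast the slides draw.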
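The single-trial decoding rule above can be sketched as follows. The three constant candidate intensities are a toy stand-in for per-level SS-GLM fits, and all names are illustrative:

```python
import numpy as np

# Sketch of single-trial ML decoding: evaluate the Bernoulli log-likelihood
# of one spike train under each candidate level's intensity, then take the
# argmax as the decoded stimulus level.

def decode_level(spikes, intensities, dt=0.001):
    """Return (argmax level index, per-level log-likelihoods)."""
    logliks = []
    for lam in intensities:
        p = lam * dt
        logliks.append(float(np.sum(spikes * np.log(p)
                                    + (1 - spikes) * np.log(1 - p))))
    logliks = np.asarray(logliks)
    return int(np.argmax(logliks)), logliks

# Toy example: 3 well-separated candidate levels over 2 s of 1 ms bins.
rates = [5.0, 40.0, 200.0]                    # spikes/s for each level
intensities = [np.full(2000, r) for r in rates]
rng = np.random.default_rng(2)
true_level = 1
spikes = (rng.random(2000) < intensities[true_level] * 0.001).astype(float)

decoded, logliks = decode_level(spikes, intensities)
```

Because the candidate rates are widely separated, the argmax recovers the generating level from a single trial; with the slides' 19 closely spaced levels, decoding accuracy itself becomes the single-trial measure of threshold sensitivity.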


MIT HST 722 - Lecture Slides
