12.215 Modern Navigation
Thomas Herring
Lecture 12, 10/30/2006

Contents: Review of last class; Today's class; Basic statistics; Probability descriptions; Example of random variables; Histograms of random variables; Characterization of random variables; Theorems for expectations; Estimation of moments; Probability distributions; Central Limit Theorem; Covariance matrices; Properties of covariance matrices; Propagation of variance-covariance matrices; Applications; Summary

Review of last class
• Map projections:
  – Why projections are needed
  – Types of map projections:
      • Classification by type of projection
      • Classification by characteristics of projection
  – Mathematics of map projections

Today's Class
• Basic statistics
  – Statistical description and parameters
      • Probability distributions
      • Descriptions: expectations, variances, moments
      • Covariances
      • Estimates of statistical parameters
• Propagation of variances
  – Methods for determining the statistical parameters of quantities derived from other statistical variables

Basic Statistics
• Concept behind statistical description: some processes involve so many small physical effects that a deterministic model is not possible.
• For example: given a die, and knowing its original orientation before throwing, the lateral and rotational forces during the throw, and the characteristics of the surface it falls on, it should be possible to calculate its orientation at the end of the throw. In practice, any small deviations in the forces and interactions make the result unpredictable, and so we find that each face comes up 1/6 of the time: a probabilistic description.
• In this case, any small imperfections in the die can make one face come up more often, so that each probability is no longer 1/6 (but the probabilities of all outcomes must still sum to 1).

Probability descriptions
• For discrete processes we can assign probabilities to specific events occurring, but for continuous random variables this is not possible.
• For continuous random variables, the most common description is a probability density function.
• A probability density function f(x) gives the probability, f(x)dx, that a random variable x will take a value between x and x+dx.
• To find the probability of a random variable taking a value between x_1 and x_2, the density function is integrated between these two values.
• Probability density functions can be derived analytically for variables that are functions of other random variables with known probability density functions; or, if the number of samples is large, a histogram can be used to determine the density function (normally by fitting to a known class of density functions).

Example of random variables
[Figure: sequences of roughly 800 samples of a uniform and a Gaussian random variable; x-axis: sample number (0 to 800), y-axis: random variable value (-4 to 4).]

Histograms of random variables
[Figure: histograms of the Gaussian and uniform samples (bins from -3.75 to 3.25, counts up to about 200), with the scaled Gaussian density 490/sqrt(2*pi)*exp(-x^2/2) overlaid; x-axis: random variable x, y-axis: number of samples.]

Characterization of Random Variables
• When the probability distribution is known, the following statistical descriptions are used for a random variable x with density function f(x):

Expected value:  \langle h(x) \rangle = \int h(x)\, f(x)\, dx
Expectation:     \langle x \rangle = \int x\, f(x)\, dx = \mu
Variance:        \langle (x-\mu)^2 \rangle = \int (x-\mu)^2\, f(x)\, dx
Moments:         \langle (x-\mu)^n \rangle = \int (x-\mu)^n\, f(x)\, dx

• The square root of the variance is called the standard deviation.
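As a concrete illustration of these definitions (a worked example added here, not part of the original slides), take x uniformly distributed on [0, 1], so that f(x) = 1 on that interval and 0 elsewhere:

\mu = \int_0^1 x\, dx = \frac{1}{2}, \qquad
\sigma^2 = \int_0^1 \left(x - \tfrac{1}{2}\right)^2 dx = \frac{1}{12}, \qquad
\sigma = \frac{1}{\sqrt{12}} \approx 0.289

The variance 1/12 of a unit uniform variable reappears below in the numerical illustration of the Central Limit Theorem.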
Theorems for expectations
• For linear operations, the following theorems are used:
  – For a constant: \langle c \rangle = c
  – Linear operator: \langle c\,H(x) \rangle = c\,\langle H(x) \rangle
  – Summation: \langle g + h \rangle = \langle g \rangle + \langle h \rangle
• Covariance measures the relationship between two random variables x and y with joint probability distribution f_{xy}(x,y):

\sigma_{xy} = \langle (x-\mu_x)(y-\mu_y) \rangle = \iint (x-\mu_x)(y-\mu_y)\, f_{xy}(x,y)\, dx\, dy

• Correlation: \rho_{xy} = \sigma_{xy} / (\sigma_x \sigma_y)

Estimation of moments
• Expectation and variance are the first and second moments of a probability distribution. They can be estimated from samples x_n (or from a continuous record x(t) of length T):

\hat{\mu}_x \approx \frac{1}{N}\sum_{n=1}^{N} x_n \approx \frac{1}{T}\int x(t)\, dt

\hat{\sigma}_x^2 \approx \frac{1}{N}\sum_{n=1}^{N} (x_n-\mu_x)^2 \approx \frac{1}{N-1}\sum_{n=1}^{N} (x_n-\hat{\mu}_x)^2

• As N goes to infinity these expressions approach their expectations. (Note the N-1 divisor in the form which uses the estimated mean.)

Probability distributions
• While there are many probability distributions, only a few are commonly used:

Gaussian:      f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(x-\mu)^2/(2\sigma^2)}

Multivariate:  f(\mathbf{x}) = \frac{1}{\sqrt{(2\pi)^n |V|}}\, e^{-\frac{1}{2}(\mathbf{x}-\boldsymbol{\mu})^{T} V^{-1} (\mathbf{x}-\boldsymbol{\mu})}

Chi-squared:   \chi_r^2(x) = \frac{x^{r/2-1}\, e^{-x/2}}{\Gamma(r/2)\, 2^{r/2}}

Probability distributions (continued)
• The chi-squared distribution with r degrees of freedom is the distribution of the sum of the squares of r Gaussian random variables, each with expectation 0 and variance 1.
• With the probability density function known, the probability of events occurring can be determined. For the Gaussian distribution in 1-D: P(|x| < 1σ) = 0.68; P(|x| < 2σ) = 0.955; P(|x| < 3σ) = 0.9974.
• Conceptually, people think of standard deviations in terms of the probability of events occurring (i.e., 68% of values should be within 1 sigma).

Central Limit Theorem
• Why is the Gaussian distribution so common?
• "The distribution of the sum of a large number of independent, identically distributed random variables is approximately Gaussian."
• When the random errors in measurements are made up of many small contributing random errors, their sum will be approximately Gaussian.
• Any linear operation on Gaussian-distributed variables generates another Gaussian distribution. This is not the case for other distributions, for which the density of a sum must be derived by convolving the two density functions.

Covariance matrices
• For large systems of random variables (such as GPS range measurements, position estimates, etc.), the variances and covariances are arranged in a matrix called the variance-covariance matrix (or simply the covariance matrix).
• This is the matrix used in the multivariate Gaussian probability density function:

C = \begin{bmatrix}
\sigma_1^2  & \sigma_{12} & \cdots & \sigma_{1n} \\
\sigma_{12} & \sigma_2^2  & \cdots & \sigma_{2n} \\
\vdots      & \vdots      & \ddots & \vdots      \\
\sigma_{1n} & \sigma_{2n} & \cdots & \sigma_n^2
\end{bmatrix}

• Notice that the matrix is symmetric.

Properties of covariance matrices
• Covariance matrices are symmetric.
• All of the diagonal elements are positive and usually non-zero, since they are the variances of the individual random variables.
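The estimators and definitions above lend themselves to quick numerical checks. The sketches that follow are added examples, not part of the original lecture; they assume Python with NumPy, and the variable names, seeds, and simulated distributions are all illustrative choices. First, the sample mean, the (N-1)-divisor sample variance, and the covariance/correlation definitions:

import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# N samples of a Gaussian random variable with mu = 2, sigma = 3
x = rng.normal(loc=2.0, scale=3.0, size=N)

# Sample mean: sum of x_n over N
mu_hat = x.sum() / N

# Sample variance, with the N-1 divisor used when the mean is estimated
var_hat = ((x - mu_hat) ** 2).sum() / (N - 1)
print(mu_hat, var_hat)              # close to mu = 2 and sigma^2 = 9

# Covariance and correlation of two dependent variables
y = 0.5 * x + rng.normal(size=N)    # y partly driven by x
sigma_xy = ((x - x.mean()) * (y - y.mean())).sum() / (N - 1)
rho_xy = sigma_xy / (x.std(ddof=1) * y.std(ddof=1))
print(sigma_xy, rho_xy)             # rho_xy is dimensionless and lies in [-1, 1]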

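The 1-D Gaussian probabilities quoted above follow from integrating the Gaussian density; in Python's standard library that integral is available through the error function, since P(|x - mu| < k*sigma) = erf(k/sqrt(2)) (an added sketch):

import math

for k in (1, 2, 3):
    p = math.erf(k / math.sqrt(2.0))
    print(f"P(|x - mu| < {k} sigma) = {p:.4f}")

# Prints 0.6827, 0.9545, 0.9973 -- the slide values 0.68, 0.955, 0.9974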

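The characterization of chi-squared as a sum of squared standard Gaussians can likewise be simulated directly (an added sketch; the degrees of freedom r and the sample count are arbitrary choices):

import numpy as np

rng = np.random.default_rng(1)

# Sum of squares of r standard Gaussians (expectation 0, variance 1)
# follows a chi-squared distribution with r degrees of freedom
r, N = 4, 100_000
chi2 = (rng.normal(size=(N, r)) ** 2).sum(axis=1)

print(chi2.mean(), chi2.var())   # close to r = 4 and 2r = 8 for chi-squared(r)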
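The Central Limit Theorem can be seen numerically by summing uniform variables, whose individual density is flat rather than bell-shaped (an added sketch; 12 terms are chosen so that, with the 1/12 variance derived earlier, each sum has variance exactly 1):

import numpy as np

rng = np.random.default_rng(2)

# Sum 12 independent uniform[0, 1] variables, 100,000 times.
# Each term has mean 1/2 and variance 1/12, so each sum has
# mean 6 and variance 1; by the CLT the sums are nearly Gaussian.
sums = rng.uniform(0.0, 1.0, size=(100_000, 12)).sum(axis=1)

print(sums.mean(), sums.var())   # close to 6.0 and 1.0
# np.histogram(sums - 6.0, bins=50) traces out the standard bell curve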
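Finally, the properties of covariance matrices stated above (symmetry, positive diagonal), together with the positive semi-definiteness that any covariance matrix satisfies, can be verified on simulated data (an added sketch; the mixing matrix A is only a device for manufacturing correlated variables):

import numpy as np

rng = np.random.default_rng(3)

# Simulate n = 3 correlated random variables (think GPS coordinate errors)
n, N = 3, 50_000
A = rng.normal(size=(n, n))             # arbitrary mixing matrix
samples = A @ rng.normal(size=(n, N))   # each column is one sample vector

C = np.cov(samples)                     # rows of `samples` are the variables

print(np.allclose(C, C.T))                  # True: symmetric
print(np.all(np.diag(C) > 0))               # True: diagonal holds the variances
print(np.all(np.linalg.eigvalsh(C) >= 0))   # True: positive semi-definite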