MIT 16.412J - GPS Integrity Monitoring

GPS Integrity Monitoring

Tom Temple
Lincoln Laboratory

May 10, 2005

Abstract

This paper seeks to explore the potential of evolutionary algorithms for determining the hard error bounds required for the use of GPS in safety-of-life circumstances.

1 Introduction

The Global Positioning System (GPS) has proved to be very useful in a number of applications. Aircraft navigation is among the most important. But in the safety-of-life circumstance of precision approach and landing, we do not have sufficient guarantees of accuracy. While differential GPS (DGPS) is precise enough much of the time, a controller needs to be able to put strict error bounds on the position estimate in order to use DGPS for precision approach or landing. This paper explores the use of an evolutionary algorithm to determine these error bars.

2 Problem Specification

Rather than solve the "Is it safe to land or not?" question, I will attempt to answer a slightly more general problem: what is the worst that the error could be? Given a set of satellite ranges, we would like to estimate an upper bound on the position error. We are allowed to underestimate the bound no more than once in 10 million trials. Simultaneously, one would want the service to be available as much of the time as possible. In other words, we would like to keep overestimation to a minimum while maintaining the strict rule on underestimation.

As the number of available satellite ranges increases, the set of equations that the receiver must solve becomes increasingly over-specified. The goal is to use this over-specification to generate a number of partially independent estimates of position. The distribution of these estimates can be used to estimate the distribution from which the position measurement using all satellites was selected. Figure 1 shows the two-dimensional projection of the distribution of twenty subset positions. As one might expect, the true position lies within this distribution. Given such a distribution, the goal is to draw the smallest circle that surely contains the true position.

[Figure 1: The two-dimensional projection of the distribution of positions calculated from subsets of the pseudoranges. The green triangle is the estimate using all of the pseudoranges and the red x is the true position. The axes are in meters.]

3 Previous Work

I am primarily building on work by Misra and Bednarz [3]. They proposed an algorithm called LGG1, which consists of three elements: a method of selecting subsets of satellites with good geometries, a characterization of the distribution of subset positions, and a rule function that turns this distribution into an error bound. They demonstrated that such an algorithm, if sufficiently conservative, could give an error bound that was sufficiently stringent. If the rule function was a linear function, the error bound was sufficiently stringent regardless of the underlying range-error distribution.

The recent subject of my research has been testing and improving the algorithm. I retain all three elements of the LGG1 algorithm but will change each of them. For this work I will be focusing on determining the rule function. I will explore using an evolutionary algorithm to determine a rule function that exploits more expressive characterizations of the subset position distributions.

The original algorithm used a single metric to quantify the distribution of subset positions. The number that it used was the distance between the two furthest positions, which they called the scatter. The rule was a constant times this scatter. Such a rule could be found quickly given a dataset with millions of error-scatter pairs.
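As a concrete illustration of the constant-times-scatter rule just described, the short sketch below computes the scatter of a set of subset position estimates and applies such a rule. This is a minimal Python sketch under stated assumptions, not code from the paper; the function names and the example constant k are chosen for illustration only.

```python
import numpy as np

def scatter(positions):
    """Scatter metric of the original algorithm: the distance between the
    two furthest subset position estimates.  positions is an (m, 3) array,
    one row per satellite subset."""
    diffs = positions[:, None, :] - positions[None, :, :]
    return float(np.max(np.linalg.norm(diffs, axis=-1)))

def scatter_rule_bound(positions, k):
    """Rule function of the original algorithm: a constant k times the
    scatter, with k tuned offline so that the bound is underestimated
    less than once in 10 million trials."""
    return k * scatter(positions)

# Hypothetical usage: twenty subset position estimates (meters); k = 5.0
# is an arbitrary illustrative value, not a calibrated constant.
subset_positions = np.random.randn(20, 3)
bound = scatter_rule_bound(subset_positions, k=5.0)
```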
4 Current Extension

The current work seeks to explore using a more expressive description of the subset position distribution. We would like to exploit the fact that there is much more information in the distribution than the scatter metric captures. So rather than characterize the distribution with one summary number, I use a set of what I am going to call p-scatters, computed as follows:

    S_p = \left( \frac{1}{n} \sum_i \lVert x_i - \bar{x} \rVert^p \right)^{1/p}

You will note that S_2 is the standard deviation and S_\infty is similar to the metric proposed by Misra. A value of p need not be an integer, nor need it be positive. In this paper the values of p that are used are chosen by hand. Choosing the best values of p is a problem with which I am still grappling; future work will tell whether a learning algorithm can be applied to determining which values are the most telling of the distribution.

One property desirable in a rule function is that it be scale independent: if all the errors double, the scatter should double and the error bound should also double. A quick check reveals that the p-scatters have this property, and so will the rule function as long as it is a linear combination of the p-scatters. This means our rule function can be represented as a vector of weights. We have reduced the problem to a parameter-estimation problem.

4.1 Parameter Estimation

The main idea is that the subset position distribution is somehow similar to the distribution from which the estimate is sampled. If we can determine how the distributions are related, we can use the position distribution to estimate the error distribution and determine an error bound. If the errors were truly Gaussian, we could simply estimate the standard deviation S_2, multiply it by six, and call it a day. Alas, they are not, and it will not be so easy.

In general, parameter-estimation problems consist of finding optimal values for the parameters. Optimal values can be defined as those that maximize or minimize some function, for instance minimizing the squared error or maximizing the margin of classification. The sense of "optimal" in this case is more tricky. Our value function on the error of the error estimate is very nonlinear: to the negative side of zero it has a value that is 10 million times the value on the positive side (see Figure 2).

[Figure 2: The cost associated with making errors in our error-bound estimate.]

Given a set of p-scatters, consider the space of p-scatters with one additional dimension, namely the error length, as the vertical axis. Now imagine that we plot 10 million data points in this space. The current problem is the same as finding the best hyperplane such that all the points fall below this plane. If we demand that this hyperplane, with normal n, pass through the origin, as we must if our rule is going to be linear, then we could define "best" to mean the one for which

    \sum_{d \in \mathrm{data}} \overrightarrow{\mathrm{scatter}}_d \cdot \vec{n}

is minimized. In high-dimensional scatter spaces and with millions of data points, there is no tractable way to find
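To make the p-scatter characterization and the linear rule function concrete, here is a minimal Python sketch that implements S_p directly from the formula above and combines several p-scatters with a weight vector. It is not code from the paper; the function names and the example values of p and weights are illustrative assumptions.

```python
import numpy as np

def p_scatter(positions, p):
    """p-scatter of a set of subset position estimates:
    S_p = ((1/n) * sum_i ||x_i - xbar||^p)^(1/p).
    p = 2 gives (up to the n vs. n-1 convention) the standard deviation;
    as p grows, S_p approaches the largest deviation from the mean."""
    xbar = positions.mean(axis=0)
    r = np.linalg.norm(positions - xbar, axis=1)
    return (np.sum(r ** p) / len(r)) ** (1.0 / p)

def rule_bound(positions, ps, weights):
    """Linear rule function: a weighted combination of p-scatters.
    Each S_p scales linearly with the errors, so the bound does too."""
    scatters = np.array([p_scatter(positions, p) for p in ps])
    return float(np.dot(weights, scatters))

# Hypothetical usage: twenty subset positions and three hand-chosen
# values of p; the weights here are placeholders, not fitted values.
subset_positions = np.random.randn(20, 3)
bound = rule_bound(subset_positions, ps=[1, 2, 8], weights=[0.5, 1.0, 0.3])
```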

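The preview ends before the evolutionary algorithm itself is presented, so the following is only a generic sketch of how an evolutionary search over the weight vector might be driven by the asymmetric value function of Section 4.1. The population size, mutation scale, and penalty constant are assumptions made for illustration, not the paper's choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def evolve_weights(scatter_matrix, true_errors, generations=200,
                   pop_size=50, penalty=1e7):
    """Toy (mu + lambda)-style evolutionary search for the weight vector
    of a linear rule function.  scatter_matrix is an (N, P) array holding
    the P p-scatters of N training samples; true_errors holds the
    corresponding true position errors."""
    n_weights = scatter_matrix.shape[1]

    def cost(w):
        slack = scatter_matrix @ w - true_errors
        # Overestimation costs its size; underestimation is penalized
        # 10 million times more heavily, mirroring the value function.
        return np.where(slack >= 0, slack, -penalty * slack).sum()

    population = rng.uniform(0.0, 2.0, size=(pop_size, n_weights))
    for _ in range(generations):
        costs = np.array([cost(w) for w in population])
        survivors = population[np.argsort(costs)[: pop_size // 2]]
        # Children are mutated copies of the surviving parents.
        children = survivors + rng.normal(scale=0.05, size=survivors.shape)
        population = np.vstack([survivors, children])
    costs = np.array([cost(w) for w in population])
    return population[np.argmin(costs)]

# Hypothetical usage with synthetic data: 1000 samples, 3 p-scatters each.
scatters = np.abs(rng.normal(size=(1000, 3)))
errors = scatters @ np.array([0.5, 1.0, 0.2])
weights = evolve_weights(scatters, errors)
```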