MIT OpenCourseWare
http://ocw.mit.edu

18.02 Multivariable Calculus, Fall 2007

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.

LS. Least Squares Interpolation

1. The least-squares line.

Suppose you have a large number n of experimentally determined points, through which you want to pass a curve. There is a formula (the Lagrange interpolation formula) producing a polynomial curve of degree n - 1 which goes through the points exactly. But normally one wants to find a simple curve, like a line, parabola, or exponential, which goes approximately through the points, rather than a high-degree polynomial which goes exactly through them. The reason is that the location of the points is to some extent determined by experimental error, so one wants a smooth-looking curve which averages out these errors, not a wiggly polynomial which takes them seriously.

In this section, we consider the most common case: finding a line which goes approximately through a set of data points.

Suppose the data points are $(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)$, and we want to find the line

(1) $y = ax + b$

which "best" passes through them. Assuming our errors in measurement are distributed randomly according to the usual bell-shaped curve (the so-called "Gaussian distribution"), it can be shown that the right choice of a and b is the one for which the sum D of the squares of the deviations,

(2) $D = \sum_{i=1}^{n} \bigl(y_i - (a x_i + b)\bigr)^2,$

is a minimum. In the formula (2), the quantities in parentheses (shown by dotted lines in the picture) are the deviations between the observed values $y_i$ and the ones $a x_i + b$ that would be predicted using the line (1).

The deviations are squared for theoretical reasons connected with the assumed Gaussian error distribution; note however that the effect is to ensure that we sum only positive quantities; this is important, since we do not want deviations of opposite sign to cancel each other out. Squaring also weights the larger deviations more heavily, keeping experimenters honest, since they tend to ignore large deviations ("I had a headache that day").

This prescription for finding the line (1) is called the method of least squares, and the resulting line (1) is called the least-squares line or the regression line.

To calculate the values of a and b which make D a minimum, we see where the two partial derivatives are zero:

(3) $\frac{\partial D}{\partial a} = \sum_{i=1}^{n} 2\bigl(y_i - (a x_i + b)\bigr)(-x_i) = 0, \qquad \frac{\partial D}{\partial b} = \sum_{i=1}^{n} 2\bigl(y_i - (a x_i + b)\bigr)(-1) = 0.$

These give us a pair of linear equations for determining a and b, as we see by collecting terms and cancelling the 2's:

(4) $\Bigl(\sum x_i^2\Bigr) a + \Bigl(\sum x_i\Bigr) b = \sum x_i y_i, \qquad \Bigl(\sum x_i\Bigr) a + n\,b = \sum y_i.$

(Notice that it saves a lot of work to differentiate (2) using the chain rule, rather than first expanding out the squares.)

The equations (4) are usually divided by n to make them more expressive:

(5) $\overline{x^2}\, a + \bar{x}\, b = \overline{xy}, \qquad \bar{x}\, a + b = \bar{y},$

where $\bar{x}$ and $\bar{y}$ are the averages of the $x_i$ and $y_i$, and $\overline{x^2} = \sum x_i^2 / n$ and $\overline{xy} = \sum x_i y_i / n$ are the averages of the squares and of the products. From this point on, use linear algebra to determine a and b. It is a good exercise to see that the equations are always solvable unless all the $x_i$ are the same (in which case the best line is vertical and can't be written in the form (1)).

In practice, least-squares lines are found by pressing a calculator button, or giving a MatLab command. Examples of calculating a least-squares line are in the exercises in your book and these notes. Do them from scratch, starting from (2), since the purpose here is to get practice with max-min problems in several variables; don't plug into the equations (5). Remember to differentiate (2) using the chain rule; don't expand out the squares, which leads to messy algebra and highly probable error.
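To make the recipe concrete, here is a short Python sketch (an addition to these notes, with made-up data) that solves the pair of equations (5) directly and then checks the answer against numpy.polyfit, which minimizes the same sum of squares (2).

```python
import numpy as np

# Made-up data points, roughly along y = 2x + 1 with small "experimental" errors
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1, 10.9])

# Averages appearing in equations (5):
#   x2bar * a + xbar * b = xybar
#   xbar  * a +        b = ybar
xbar = x.mean()            # average of the x_i
ybar = y.mean()            # average of the y_i
x2bar = (x ** 2).mean()    # average of the squares
xybar = (x * y).mean()     # average of the products

# Solve the 2x2 linear system (5) for the slope a and intercept b
A = np.array([[x2bar, xbar],
              [xbar,  1.0]])
rhs = np.array([xybar, ybar])
a, b = np.linalg.solve(A, rhs)
print(f"least-squares line: y = {a:.4f} x + {b:.4f}")

# Check: numpy.polyfit with degree 1 minimizes the same D as in (2)
a_np, b_np = np.polyfit(x, y, 1)
assert np.allclose([a, b], [a_np, b_np])
```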
2. Fitting curves by least squares.

If the experimental points seem to follow a curve rather than a line, it might make more sense to try to fit a second-degree polynomial

(6) $y = a_0 + a_1 x + a_2 x^2$

to them. If there are only three points, we can do this exactly (by the Lagrange interpolation formula). For more points, however, we once again seek the values of $a_0, a_1, a_2$ for which the sum of the squares of the deviations

(7) $D = \sum_{i=1}^{n} \bigl(y_i - (a_0 + a_1 x_i + a_2 x_i^2)\bigr)^2$

is a minimum. Now there are three unknowns, $a_0, a_1, a_2$. Calculating (remember to use the chain rule!) the three partial derivatives $\partial D/\partial a_i$, $i = 0, 1, 2$, and setting them equal to zero leads to a square system of three linear equations; the $a_i$ are the three unknowns, and the coefficients depend on the data points $(x_i, y_i)$. The system can be solved by finding the inverse matrix, by elimination, or by using a calculator or MatLab.

If the points seem to lie more and more along a line as $x \to \infty$, but lie on one side of the line for low values of x, it might be reasonable to try a function which has similar behavior, like

(8) $y = a_0 + a_1 x + \frac{a_2}{x},$

and again minimize the sum of the squares of the deviations, as in (7).

In general, this method of least squares applies to a trial expression of the form

(9) $y = a_0 f_0(x) + a_1 f_1(x) + \cdots + a_r f_r(x),$

where the $f_i(x)$ are given functions (usually simple ones like $1, x, x^2, 1/x, e^{kx}$, etc.). Such an expression (9) is called a linear combination of the functions $f_i(x)$. The method produces a square inhomogeneous system of linear equations in the unknowns $a_0, \ldots, a_r$, which can be solved by finding the inverse matrix to the system, or by elimination.

The method also applies to finding a linear function

(10) $z = a_1 + a_2 x + a_3 y$

to fit a set of data points $(x_i, y_i, z_i)$, $i = 1, \ldots, n$, where there are two independent variables x and y and a dependent variable z (this is the quantity being experimentally measured, for different values of (x, y)). This time, after differentiation we get a $3 \times 3$ system of linear equations for determining $a_1, a_2, a_3$.

The essential point in all this is that the unknown coefficients $a_i$ should occur linearly in the trial function. Try fitting a function like $c\,e^{kx}$ to data points by using least squares, and you'll see the difficulty right away. (Since this is an important problem, fitting an exponential to data points, one of the Exercises explains how to adapt the method to this type of problem.)
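Here is a matching sketch (again an addition to the notes, with invented data) of the general recipe applied to the trial expression (8), that is, (9) with $f_0 = 1$, $f_1 = x$, $f_2 = 1/x$. Setting the partial derivatives $\partial D/\partial a_i$ to zero is equivalent to the square system $(F^T F)\,a = F^T y$, where F is the matrix whose entry in row i, column j is $f_j(x_i)$.

```python
import numpy as np

# Invented data, roughly y = 2 + 0.5x + 3/x with small errors
x = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([8.3, 5.4, 4.6, 4.4, 4.8, 5.1, 5.5])

# Columns are the trial functions of (8): f0(x) = 1, f1(x) = x, f2(x) = 1/x
F = np.column_stack([np.ones_like(x), x, 1.0 / x])

# Setting dD/da_i = 0 gives the square inhomogeneous system (F^T F) a = F^T y
a0, a1, a2 = np.linalg.solve(F.T @ F, F.T @ y)
print(f"fit: y = {a0:.3f} + {a1:.3f} x + {a2:.3f}/x")
```

The same pattern handles the plane fit (10): the columns of F become $1, x_i, y_i$ and the right-hand side uses the $z_i$, giving the $3 \times 3$ system mentioned above. For a trial function like $c\,e^{kx}$, where the unknowns do not occur linearly, one standard adaptation (presumably the one the Exercise referred to above has in mind) is to take logarithms, so that $\ln y = \ln c + kx$, and fit a least-squares line to the points $(x_i, \ln y_i)$.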
Exercises: Section
