UNCW MAT 335 - 6.6


6.5 LEAST-SQUARES PROBLEMS (Chapter 6: Orthogonality and Least Squares)
Slides © 2012 Pearson Education, Inc.

Definition: If A is m × n and b is in R^m, a least-squares solution of Ax = b is an x̂ in R^n such that

    ||b - Ax̂|| ≤ ||b - Ax||    for all x in R^n.

The most important aspect of the least-squares problem is that no matter what x we select, the vector Ax will necessarily be in the column space Col A. So we seek an x that makes Ax the closest point in Col A to b. (The original slide includes a figure illustrating this projection.)

SOLUTION OF THE GENERAL LEAST-SQUARES PROBLEM

Given A and b, apply the Best Approximation Theorem to the subspace Col A, and let

    b̂ = proj_{Col A} b.

Because b̂ is in the column space of A, the equation Ax = b̂ is consistent, and there is an x̂ in R^n such that

    Ax̂ = b̂.    ----(1)

Since b̂ is the closest point in Col A to b, a vector x̂ is a least-squares solution of Ax = b if and only if x̂ satisfies (1). Such an x̂ in R^n is a list of weights that will build b̂ out of the columns of A.

Suppose x̂ satisfies Ax̂ = b̂. By the Orthogonal Decomposition Theorem, the projection b̂ has the property that b - b̂ is orthogonal to Col A, so b - Ax̂ is orthogonal to each column of A. If a_j is any column of A, then a_j · (b - Ax̂) = 0, that is, a_j^T (b - Ax̂) = 0.

Since each a_j^T is a row of A^T,

    A^T (b - Ax̂) = 0.    ----(2)

Thus A^T b - A^T A x̂ = 0, so A^T A x̂ = A^T b. These calculations show that each least-squares solution of Ax = b satisfies the equation

    A^T A x = A^T b.    ----(3)

The matrix equation (3) represents a system of equations called the normal equations for Ax = b. A solution of (3) is often denoted by x̂.

Theorem 13: The set of least-squares solutions of Ax = b coincides with the nonempty set of solutions of the normal equations A^T A x = A^T b.

Proof: The set of least-squares solutions is nonempty, and each least-squares solution satisfies the normal equations. Conversely, suppose x̂ satisfies A^T A x̂ = A^T b. Then x̂ satisfies (2), which shows that b - Ax̂ is orthogonal to the rows of A^T and hence orthogonal to the columns of A. Since the columns of A span Col A, the vector b - Ax̂ is orthogonal to all of Col A. Hence the equation

    b = Ax̂ + (b - Ax̂)

is a decomposition of b into the sum of a vector in Col A and a vector orthogonal to Col A. By the uniqueness of the orthogonal decomposition, Ax̂ must be the orthogonal projection of b onto Col A. That is, Ax̂ = b̂, and x̂ is a least-squares solution.

Example 1: Find a least-squares solution of the inconsistent system Ax = b for

    A = [4 0; 0 2; 1 1],    b = [2; 0; 11].

Solution: To use the normal equations (3), compute

    A^T A = [4 0 1; 0 2 1] [4 0; 0 2; 1 1] = [17 1; 1 5]

    A^T b = [4 0 1; 0 2 1] [2; 0; 11] = [19; 11].

Then the equation A^T A x = A^T b becomes

    [17 1; 1 5] [x1; x2] = [19; 11].

(A short NumPy check of this setup follows.)
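As a quick numerical check of Example 1 (an illustration, not part of the original slides), the sketch below forms A^T A and A^T b with NumPy and solves the resulting 2 × 2 system; the variable names are assumptions made for the example.

    # Sketch (assumed, not from the slides): form and solve the normal
    # equations A^T A x = A^T b for Example 1.
    import numpy as np

    A = np.array([[4.0, 0.0],
                  [0.0, 2.0],
                  [1.0, 1.0]])
    b = np.array([2.0, 0.0, 11.0])

    AtA = A.T @ A                      # expected: [[17, 1], [1, 5]]
    Atb = A.T @ b                      # expected: [19, 11]

    x_hat = np.linalg.solve(AtA, Atb)  # expected: [1, 2], matching the solution derived next
    print(AtA, Atb, x_hat)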
Row operations can be used to solve this system, but since A^T A is invertible and 2 × 2, it is probably faster to compute

    (A^T A)^(-1) = (1/84) [5 -1; -1 17]

and then solve A^T A x = A^T b as

    x̂ = (A^T A)^(-1) A^T b = (1/84) [5 -1; -1 17] [19; 11] = (1/84) [84; 168] = [1; 2].

Theorem 14: Let A be an m × n matrix. The following statements are logically equivalent:
a. The equation Ax = b has a unique least-squares solution for each b in R^m.
b. The columns of A are linearly independent.
c. The matrix A^T A is invertible.
When these statements are true, the least-squares solution x̂ is given by

    x̂ = (A^T A)^(-1) A^T b.    ----(4)

When a least-squares solution x̂ is used to produce Ax̂ as an approximation to b, the distance from b to Ax̂ is called the least-squares error of this approximation.

ALTERNATIVE CALCULATIONS OF LEAST-SQUARES SOLUTIONS

Example 2: Find a least-squares solution of Ax = b for

    A = [1 -6; 1 -2; 1 1; 1 7],    b = [-1; 2; 1; 6].

Solution: Because the columns a1 and a2 of A are orthogonal, the orthogonal projection of b onto Col A is given by

    b̂ = ((b · a1)/(a1 · a1)) a1 + ((b · a2)/(a2 · a2)) a2 = (8/4) a1 + (45/90) a2    ----(5)
      = [2; 2; 2; 2] + [-3; -1; 1/2; 7/2] = [-1; 1; 5/2; 11/2].

Now that b̂ is known, we can solve Ax̂ = b̂. But this is trivial, since we already know what weights to place on the columns of A to produce b̂. It is clear from (5) that

    x̂ = [8/4; 45/90] = [2; 1/2].
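A similar sketch for Example 2 (again an illustration, not from the slides): when the columns of A are orthogonal, the least-squares weights are simply (b · a_j)/(a_j · a_j), and NumPy's built-in least-squares routine should return the same answer.

    # Sketch (assumed): least-squares weights from orthogonal columns, Example 2.
    import numpy as np

    A = np.array([[1.0, -6.0],
                  [1.0, -2.0],
                  [1.0,  1.0],
                  [1.0,  7.0]])
    b = np.array([-1.0, 2.0, 1.0, 6.0])

    a1, a2 = A[:, 0], A[:, 1]
    x_hat = np.array([b @ a1 / (a1 @ a1),    # 8/4   = 2
                      b @ a2 / (a2 @ a2)])   # 45/90 = 1/2
    b_hat = A @ x_hat                        # projection of b onto Col A

    # Cross-check against NumPy's general least-squares solver.
    x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(x_hat, x_lstsq)                    # both should be [2, 0.5]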

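Returning to Example 1, the least-squares error defined after Theorem 14 can be computed directly. The value sqrt(84) below is derived from the numbers above as an illustration; it is not stated on these slides.

    # Sketch (assumed): least-squares error ||b - A x̂|| for Example 1.
    import numpy as np

    A = np.array([[4.0, 0.0],
                  [0.0, 2.0],
                  [1.0, 1.0]])
    b = np.array([2.0, 0.0, 11.0])
    x_hat = np.array([1.0, 2.0])        # least-squares solution from Example 1

    residual = b - A @ x_hat            # [-2, -4, 8]
    error = np.linalg.norm(residual)    # sqrt(84) ≈ 9.17
    print(error)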
