File: FailMode, version dated December 1, 2008. Prof. W. Kahan, for Math. 221 & Math. 128B.

Do MATLAB's lu(…), inv(…), / and \ have Failure Modes?

These exercises are designed to exhibit and explain the perverse behavior of MATLAB's inverse-related operations lu(…), inv(…), / and \ upon certain matrices. This perverse behavior occurs only very rarely and afflicts other matrix-handling software like LINPACK and LAPACK too. As three examples will illustrate, the behavior is perverse for three reasons:
• It causes gross inaccuracy which could often be avoided by altering algorithms slightly.
• It can happen to otherwise unexceptionable matrices about which nothing is wrong.
• The gross inaccuracy may be merely an artifact of the norm by which it is assessed.
The last point means that gross inaccuracy may go away, often with no change to computed results, when the norm by which variations are deemed "negligible" or not is replaced by another equally plausible norm. This compounds perversity with perplexity.

A First Example of Failure

A parameter h is given accurate to at least ten sig. dec.; say h := 1000000.0000 . Then matrices
   A := [ -2  1  1 ;  1  h^-2  h^-2 ;  1  h^-2  0 ] ,   b := [ 0 ;  0 ;  h^-1 ]   and later   Y := diag( h^-1, h, h )
are constructed, and the linear system A·z = b is to be solved for z at least about as accurately as it is determined by the given data. One approximation to z is MATLAB's x = A\b : we may expect it to be accurate enough since MATLAB carries 53 sig. bits, worth more than fifteen sig. dec., and we need only ten. However, just to check up on the computation's accuracy, we also compute matrices YAY := Y·A·Y and Yb := Y·b and then solve YAY·vx = Yb for vx by computing MATLAB's vx = YAY\Yb , from which Yvx := Y·vx is obtained. Ideally we expect Yvx ≈ x ≈ z to within a few rounding errors, but something else happened on my Macintosh:
   x = [ 0 ;  1000022.1222095 ;  -1000022.1222095 ]   differs from   Yvx = [ 0 ;  1000000 ;  -1000000 ]
in the worst way, by too little to be obvious but by too much to tolerate. What results do you get? Which, if either, is correct? You can solve A·z = b in your head. Why did MATLAB get it wrong?

Iterative Refinement is a way to improve (usually) the computed solution x of an equation. First compute a residual r := b – A·x as accurately as you can at a tolerable price; then re-use the triangular factorization of A that occurred during the solution of A·x = b to solve A·∆x = r for ∆x , though this too will be computed only approximately. Finally replace x by x + ∆x to enhance its accuracy. The process may be iterated (repeated) until the Law of Diminishing Returns renders further iteration futile. Yvx did not change, but MATLAB changed x to first x = [ 0 ;  1000000 ;  -1000000 ] and subsequently x = [ 0 ;  1000000.00000001 ;  -1000000.00000001 ] . What results do you get on your computer? How do you explain them? What if h = 100000000 ?

Hereunder is the script that delivered the foregoing results in MATLAB 5.2 on my old Macintosh:

% Matlab Script badscale.m demonstrates how much
% scaling a linear system can affect computed results.
h = 1000000.0000        % ... the parameter,
A = [-2, 1, 1;  1, h^-2, h^-2;  1, h^-2, 0]
b = [0; 0; 1/h]         % ... b = A*z ; solve for z .
x = A\b                 % ... approximates the solution z .
Y = diag([1/h, h, h])
YAY = Y*A*Y
Yb = Y*b                % ... scaled system Yb = YAY*vz .
vx = YAY\Yb             % ... approximates scaled solution vz .
x_Yvx = [x, Y*vx]       % ... compare solutions.
% Iterative refinement to improve accuracies:
r = [b, A]*[1; -x]      % ... r = b - A*x but computed
%     more accurately on a Mac Quadra 950 in Matlab 5.2.
dx = A\r                % ... correction to x .
Yr = [Yb, YAY]*[1; -vx]
dvx = YAY\Yr
x = x + dx              % ... updated x .
vx = vx + dvx           % ... updated scaled x .
x_Yvx = [x, Y*vx]       % ... compare solutions again.
r = [b, A]*[1; -x]      % ... iterate refinement.
dx = A\r
x = x + dx
x_Yvx = [x, Y*vx]       % ... compare solutions again.
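The following sketch is not part of badscale.m ; it merely makes explicit the re-use of the triangular factorization described above, by calling lu(…) once and re-using its factors instead of invoking A\r (which refactors A) on every pass. It assumes A , b and x are already defined as in the script, and the fixed count of three passes is an arbitrary choice.

[L, U, P] = lu(A);      % one factorization:  P*A = L*U
for k = 1:3             % a few refinement passes; the count 3 is arbitrary
   r = [b, A]*[1; -x];  % residual  r = b - A*x , computed as in the script
   dx = U\(L\(P*r));    % re-use the triangular factors to solve  A*dx = r
   x = x + dx           % display the updated x after each pass
end

For a 3-by-3 system the saving from re-using L , U and P is negligible, but for large systems the factorization dominates the cost, which is why refinement re-uses the factors rather than refactoring A .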
This script doesn't mention MATLAB's norm(..., p) for p = 1, 2 or ∞ . Denote one by ||…|| . Diagonal scaling that changes A to YAY can be interpreted as a change of norm from ||x|| to [[x]] := ||Y^-1·x|| for vectors x in the domain of A ; since both A and YAY are symmetric, A is best regarded as the matrix of a linear map from column vectors x to the Dual-space of rows w' = (A·x)' . Dual-spaces and norms are explained at length in course notes posted at <www.cs.berkeley.edu/~wkahan/MathH110/NORMlite.pdf> . The dual of column-norm ||x|| is row-norm ||w'|| := max_{x≠o} |w'·x|/||x|| ; the dual of [[x]] is [[w']] := max_{x≠o} |w'·x|/[[x]] = ||w'·Y|| . The Operator Norm ||A|| := max_{x≠o} ||(A·x)'||/||x|| is induced by ||x|| ; and similarly [[A]] := max_{x≠o} [[(A·x)']]/[[x]] = max_{x≠o} ||(A·x)'·Y||/||Y^-1·x|| = ||YAY|| . Now observe that A is far farther from singular when perturbations are gauged by [[…]] than by ||…|| .

Is [[…]] more appropriate than ||…|| to gauge perturbations in our data? If so, our data [A, b] will be misconstrued as Ill-Conditioned by MATLAB, whose programmers had ||…|| in mind. One way to diminish the necessity for mind-reading is to employ iterative refinement routinely.

In general, the numerical solution of a system B·z = c of linear equations is influenced implicitly by the choices of three norms: one for matrices like B , one for columns like c in the target space of B , and one for columns like z in the domain of B . All are denoted by the overloaded symbol ||…|| though they may be very different. Scaling that replaces B by, say, T^-1·B·Y can be reinterpreted as changing those norms to ||T^-1·B·Y|| , ||T^-1·c|| and ||Y^-1·z|| respectively. Iterative refinement is a way to diminish the generally unknown influences of the norms' choices.
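A quick numerical check of the claim that A is far from singular under [[…]] though not under ||…|| is to compare condition numbers; the two lines below are not in the original notes and assume h , A and Y are defined as in badscale.m .

condA   = cond(A, inf)       % roughly of order h^2 : nearly singular as ||...|| sees it
condYAY = cond(Y*A*Y, inf)   % of order 10 : comfortably well-conditioned under [[...]]

The first number is what an error analysis based on ||…|| sees, which is how the data [A, b] come to be misconstrued as Ill-Conditioned.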


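To tie the general notation above back to the example: in badscale.m the scaling corresponds to T^-1 = Y , so T\A*Y is the script's YAY and T\b is its Yb . The lines below are an added illustration, not part of the original script; they assume A , b and Y are defined as in badscale.m , and the variable names w , z_unscaled and z_rescaled are mine. They solve the rescaled system and map its solution back with Y .

T = inv(Y);             % so that  T\A*Y  equals  Y*A*Y  up to roundoff
w = (T\A*Y)\(T\b);      % solve the rescaled system  (T\A*Y)*w = T\b
z_unscaled = A\b        % the script's x
z_rescaled = Y*w        % the script's Yvx (the same system in exact arithmetic)

In exact arithmetic the two results would agree; any disagreement is roundoff whose severity is gauged, implicitly, by the altered norms.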