UW AMATH 352 - Lecture 7: Conditioning of Problems, Stability of Algorithms


Lecture 7: Conditioning of Problems, Stability of Algorithms
AMath 352, Mon., Apr. 12

Conditioning of Problems, Stability of Algorithms

Errors of many sorts arise in scientific computations:

1. Replace the physical problem by a mathematical model.
2. Replace the mathematical model by one that is suitable for numerical solution; e.g., truncate a Taylor series.
3. Input data may come from inexact measurements.
4. Rounding errors.

We will deal with 2-4. We are usually interested in the relative error in the computed value: |ŷ − y|/|y|. If the answer is a vector, use norms: ‖x̂ − x‖/‖x‖.

Conditioning of Problems

We want to evaluate y = f(x). If x̂ is close to x, is ŷ := f(x̂) close to y? [Note that this is a question about the problem, not about any algorithm used to solve the problem.]

Absolute condition number: |ŷ − y| ≈ C(x)|x̂ − x|. Since

    ŷ − y = f(x̂) − f(x) ≈ (x̂ − x) f′(x),

we get C(x) = |f′(x)|.

Relative condition number: |ŷ − y|/|y| ≈ κ(x) |x̂ − x|/|x|. Since

    (ŷ − y)/y = (f(x̂) − f(x))/f(x) ≈ (x̂ − x) f′(x)/f(x) = [(x̂ − x)/x] · [x f′(x)/f(x)],

we get κ(x) = |x f′(x)/f(x)|.

Examples

1. f(x) = 2x, f′(x) = 2.
   C(x) = |f′(x)| = 2: well-conditioned in the absolute sense.
   κ(x) = |x f′(x)/f(x)| = |2x/(2x)| = 1: well-conditioned in the relative sense.

2. f(x) = √x, f′(x) = (1/2)x^(−1/2), x > 0.
   C(x) = (1/2)x^(−1/2): ill-conditioned in the absolute sense near x = 0. For example, if x = 10^(−16) and x̂ = 1.21 · 10^(−16), then f(x) = 10^(−8) and f(x̂) = 1.1 · 10^(−8), so |f(x̂) − f(x)| ≈ (1/2) · 10^8 |x̂ − x|.
   κ(x) = |x · (1/2)x^(−1/2)/√x| = 1/2: well-conditioned in the relative sense.

3. f(x) = sin(x), f′(x) = cos(x).
   C(x) = |cos(x)| ≤ 1: well-conditioned in the absolute sense.
   κ(x) = |x cos(x)/sin(x)| = |x cot(x)|: ill-conditioned in the relative sense if x is near ±π, ±2π, …, where sin(x) is small but x is not.

4. Solve Ay = b, where b and maybe A are input. Then

    ‖ŷ − y‖/‖y‖ ≈ κ(b) ‖b̂ − b‖/‖b‖.

Stability of Algorithms

Suppose we have a well-conditioned problem and an algorithm for solving it. Will our algorithm give the answer to the expected number of places when implemented in finite-precision arithmetic?

Forward error analysis: How much does the computed value differ from the exact solution?

Backward error analysis: Is the computed value the exact solution to a nearby problem?

If the problem is ill-conditioned, we cannot expect to get close to the exact solution: the exact solution with rounded input values might be very different from that with the true input values.
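The relative condition numbers in the examples above can be estimated numerically by perturbing the input and comparing relative changes. The sketch below is mine (the function name and perturbation size are my choices, not from the lecture); it confirms κ ≈ 1/2 for √x even at tiny x, and a large κ for sin(x) near π.

```python
import math

def rel_condition(f, x, delta=1e-6):
    """Estimate kappa(x) = |x f'(x) / f(x)| by comparing the relative
    change in output to a small relative change delta in the input."""
    xhat = x * (1 + delta)                        # perturbed input
    rel_out = abs(f(xhat) - f(x)) / abs(f(x))     # relative output change
    rel_in = abs(xhat - x) / abs(x)               # relative input change
    return rel_out / rel_in

# f(x) = sqrt(x): kappa = 1/2 everywhere, even at x = 1e-16,
# so sqrt is well-conditioned in the relative sense despite C(x) being huge there.
print(rel_condition(math.sqrt, 1e-16))   # ~0.5

# f(x) = sin(x): kappa = |x cot(x)| is large near x = pi.
print(rel_condition(math.sin, 3.14))     # ~2000
```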
But if the algorithm delivers the exact solution to a problem with nearby input values, then that is the best one can do.

Example of an Unstable Algorithm

Compute f(x) := exp(x) using its Taylor series:

    e^x = 1 + x + x²/2! + x³/3! + ⋯ .

This problem is well-conditioned in a relative sense for any moderate-size x:

    κ(x) = |x f′(x)/f(x)| = |x|.

Matlab demo.

Example: Numerical Differentiation

    f′(x) ≈ (f(x + h) − f(x))/h, for small h > 0.

The truncation error in this approximation is (h/2) f″(ξ), ξ ∈ (x, x + h). At best, we will compute

    [f(x + h)(1 + δ₁) − f(x)(1 + δ₂)]/h = (f(x + h) − f(x))/h + (f(x + h)δ₁ − f(x)δ₂)/h,   |δ₁|, |δ₂| < ε,

so the error due to roundoff could be as large as ε(|f(x + h)| + |f(x)|)/h. As h decreases, the truncation error decreases, but the roundoff error grows! The total error is smallest when these two contributions are about equal.
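The Matlab demo itself is not reproduced here, but the instability it presumably illustrates can be sketched in Python (the function name and the choice x = ±20 are my assumptions): summing the Taylor series directly is fine for positive x, but for negative x of moderate size the large alternating terms cancel catastrophically, even though the problem is well-conditioned.

```python
import math

def exp_taylor(x, nterms=150):
    """Sum the Taylor series 1 + x + x^2/2! + ... + x^(nterms-1)/(nterms-1)!."""
    term, total = 1.0, 1.0
    for n in range(1, nterms):
        term *= x / n          # next term x^n / n!
        total += term
    return total

# Fine for x = 20: all terms positive, relative error near machine precision.
print(abs(exp_taylor(20.0) - math.exp(20.0)) / math.exp(20.0))

# Disastrous for x = -20: terms as large as ~4e7 must cancel down to ~2e-9,
# so rounding errors of size ~eps * 4e7 swamp the answer.
print(abs(exp_taylor(-20.0) - math.exp(-20.0)) / math.exp(-20.0))
```

A stable alternative for negative x is to sum the series for exp(|x|) and take the reciprocal.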
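The truncation/roundoff trade-off in the forward difference can be seen by sweeping h (a sketch; the test function sin at x = 1 and the values of h are my choices). Balancing (h/2)|f″| against 2ε|f|/h suggests the best h is roughly √ε ≈ 10⁻⁸ in double precision.

```python
import math

def fwd_diff(f, x, h):
    """Forward-difference approximation (f(x+h) - f(x)) / h to f'(x)."""
    return (f(x + h) - f(x)) / h

x = 1.0
exact = math.cos(x)               # exact derivative of sin at x = 1
for h in [1e-2, 1e-4, 1e-6, 1e-8, 1e-10, 1e-12]:
    err = abs(fwd_diff(math.sin, x, h) - exact)
    print(f"h = {h:.0e}   error = {err:.1e}")
```

The error shrinks as h decreases until h is near 10⁻⁸ ≈ √ε, then grows again as roundoff takes over.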

