
18.303 notes on finite differences
S. G. Johnson, September 12, 2010

The most basic way to approximate a derivative on a computer is by a difference. In fact, you probably learned the definition of a derivative as being the limit of a difference:

    u'(x) = lim[Δx→0] [u(x + Δx) − u(x)] / Δx.

To get an approximation, all we have to do is to remove the limit, instead using a small but non-infinitesimal Δx. In fact, there are at least three obvious variations (these are not the only possibilities) of such a difference formula:

    u'(x) ≈ [u(x + Δx) − u(x)] / Δx           (forward difference)
    u'(x) ≈ [u(x) − u(x − Δx)] / Δx           (backward difference)
    u'(x) ≈ [u(x + Δx) − u(x − Δx)] / (2Δx)   (center difference)

with all three of course being equivalent in the Δx → 0 limit (assuming a continuous derivative). Viewed as a numerical method, the key questions are:

• How big is the error from a nonzero Δx?
• How fast does the error vanish as Δx → 0?
• How do the answers depend on the difference approximation, and how can we analyze and design these approximations?

[Figure 1: Error |approximation − cos(1)| versus Δx in forward- (blue circles), backward- (red stars), and center-difference (green squares) approximations for the derivative u'(1) of u(x) = sin(x), plotted on log–log axes. Also plotted are the predicted errors sin(1)Δx/2 and cos(1)Δx²/6 (dotted and solid lines) from a Taylor-series analysis. Note that, for small Δx, the center-difference accuracy ceases to decline because rounding errors dominate (15–16 significant digits for standard double precision).]

Let's try these for a simple example: u(x) = sin(x), taking the derivative at x = 1 for a variety of Δx values using each of the three difference formulas above. The exact derivative, of course, is u'(1) = cos(1), so we will compute the error |approximation − cos(1)| versus Δx.
This can be done in Matlab with the following commands:

x = 1;
dx = logspace(-8,-1,50);
f = (sin(x+dx) - sin(x)) ./ dx;
b = (sin(x) - sin(x-dx)) ./ dx;
c = (sin(x+dx) - sin(x-dx)) ./ (2*dx);
loglog(dx, abs(cos(x) - [f;b;c]), 'o')
legend('forward', 'backward', 'center')
xlabel('{\Delta}x')
ylabel('|error| in derivative')

The resulting plot, beautified a bit with a few additional tweaks beyond those above, is shown in Figure 1. The obvious conclusion is that the forward- and backward-difference approximations are about the same, but that center differences are dramatically more accurate: not only is the absolute value of the error smaller for the center differences, but the rate at which it goes to zero with Δx is also qualitatively faster. Since this is a log–log plot, a straight line corresponds to a power law, and the forward/backward-difference errors shrink proportional to ∼Δx, while the center-difference errors shrink proportional to ∼Δx²! For very small Δx, the error appears to go crazy; what you are seeing here is simply the effect of roundoff errors, which take over at this point because the computer rounds every operation to about 15–16 decimal digits.

We can understand this completely by analyzing the differences via Taylor expansions of u(x). Recall that, for small Δx, we have

    u(x + Δx) ≈ u(x) + Δx u'(x) + (Δx²/2) u''(x) + (Δx³/3!) u'''(x) + · · · ,
    u(x − Δx) ≈ u(x) − Δx u'(x) + (Δx²/2) u''(x) − (Δx³/3!) u'''(x) + · · · .

If we plug these into the difference formulas, after some algebra we find:

    forward difference  ≈ u'(x) + (Δx/2) u''(x) + (Δx²/3!) u'''(x) + · · · ,
    backward difference ≈ u'(x) − (Δx/2) u''(x) + (Δx²/3!) u'''(x) + · · · ,
    center difference   ≈ u'(x) + (Δx²/3!) u'''(x) + · · · .

For the forward and backward differences, the error in the difference approximation is dominated by the u''(x) term in the Taylor series, which leads to an error that (for small Δx) scales linearly with Δx. For the center-difference formula, however, the u''(x) term cancelled in u(x + Δx) − u(x − Δx), leaving us with an error dominated by the u'''(x) term, which scales as Δx². In fact, we can even quantitatively predict the error magnitude: it should be about sin(1)Δx/2 for the forward and backward differences [since u''(1) = −sin(1)], and about cos(1)Δx²/6 for the center differences [since u'''(1) = −cos(1)]. Precisely these predictions are shown as dotted and solid lines, respectively, in Figure 1, and match the computed errors almost exactly, until rounding errors take over.

Of course, these are not the only possible difference approximations. If the center difference is devised so as to exactly cancel the u''(x) term, why not also add in additional terms to cancel the u'''(x) term? Precisely this strategy can be pursued to obtain higher-order difference approximations, at the cost of making the differences more expensive to compute [more u(x) terms]. Besides computational expense, there are several other considerations that can limit one in practice. Most notably, practical PDE problems often contain discontinuities (e.g. think of heat flow or waves with two or more materials), and in the face of these discontinuities the Taylor-series approximation is no longer correct, breaking the prediction of high-order accuracy in finite differences.

MIT OpenCourseWare, http://ocw.mit.edu
18.303 Linear Partial Differential Equations: Analysis and Numerics, Fall 2010
For information about citing these materials or our Terms of Use, visit http://ocw.mit.edu.
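As an example of the higher-order strategy described above, here is the standard fourth-order center-difference formula (not derived in these notes; it follows from cancelling the u'' and u''' Taylor terms), sketched in Python for illustration:

```python
import numpy as np

# Fourth-order center difference: a wider stencil that cancels both the
# u'' and u''' error terms, at the cost of four function evaluations
# instead of two. Its error scales as h^4 (dominated by the u''''' term).
def d4(u, x, h):
    return (-u(x + 2*h) + 8*u(x + h) - 8*u(x - h) + u(x - 2*h)) / (12*h)

x, exact = 1.0, np.cos(1.0)
for h in [1e-1, 1e-2]:
    print(f"h={h:.0e}  err={abs(d4(np.sin, x, h) - exact):.2e}")
```

Shrinking h by a factor of 10 should shrink the error by roughly 10⁴, compared to 10² for the ordinary center difference, illustrating the accuracy/expense trade-off mentioned in the notes.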

