MIT 18 086 - The Fundamentals and Advantages of Multi-grid Techniques

The Fundamentals and Advantages of Multi-grid Techniques

Joseph Kovac
18.086 Final Project, Spring 2005
Prof. Gilbert Strang

Contents: Introduction; Basic Theory of the Jacobi Relaxation Method; Downsampling; Upsampling; Numerical Experiments in 2-D; Conclusion; References

Introduction

The finite difference method represents a highly straightforward and logical approach to approximating continuous problems using discrete methods. At its heart is a simple idea: substitute finite, discrete differences for derivatives in a way appropriate for a given problem, choose suitable time and space steps, run the difference method, and obtain an approximation of the answer. Many of these finite difference methods can ultimately be written in matrix form, with a finite difference matrix multiplying a vector of unknowns to equal a known quantity or source term.

In this paper, we examine the problem Au = f, where A is a finite difference matrix operating on u, a vector of unknowns, and f is a time-independent vector of source terms. While this is a general problem, we will specifically examine the case where A is the finite difference approximation to the centered second derivative. We will examine solutions arising when f is zero (Laplace's equation) and when it is nonzero (Poisson's equation).

The discussion would be quite straightforward if we wanted it to be: to find u, we would simply multiply both sides of the equation by A^{-1}, explicitly finding u = A^{-1} f. While straightforward, this approach becomes highly impractical as the mesh becomes fine and A becomes large, requiring inversion of an impractically large matrix. This is especially true for the 2-D and 3-D finite difference matrices, whose dimensions grow as the square and cube of the number of grid points along one edge. It is for this reason that relaxation methods became both popular and necessary. Often in engineering applications, getting the exact answer is not necessary; an answer within a certain percentage of the true answer is good enough. To this end, relaxation methods let us take steps toward the right answer: we run a few iterations, check whether the answer is good enough, and if it is not, iterate until it is. Using such an approach, an answer that is "good enough" can often be obtained with orders of magnitude less time and computational effort than with an exact method.

However, relaxation methods are not without their tradeoffs. As will be shown, the error between the true answer and the latest iterate ultimately decays to zero, but not all frequency components of the error decay at the same rate; some error modes reach zero faster than others. What we seek is to make all the error components decay as fast as possible by compensating for this difference in decay rates. This is the essence of multi-grid: it changes the resolution of the grid so that the error-decay properties of each grid become an advantage rather than a liability.
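As a concrete, purely illustrative sketch of the setup above (my own example, not from the paper, assuming NumPy, a unit interval with zero boundary values, and hypothetical variable names), one can build the 1-D centered second-difference matrix and solve Au = f directly; the point is that this direct route is only viable while the mesh stays coarse:

```python
import numpy as np

# Hypothetical 1-D illustration: centered second-difference matrix on the
# n-1 interior points of a grid with spacing h = 1/n (2 on the diagonal,
# -1 on the first off-diagonals).
n = 8
h = 1.0 / n
A = 2 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)

# Poisson source term (f = 0 would give Laplace's equation); the factor h^2
# appears because A above is the unscaled 2/-1/-1 difference matrix.
f = h**2 * np.ones(n - 1)

# The "exact" route: solve Au = f directly.  Fine for a small mesh, but the
# cost grows rapidly as the mesh is refined, especially in 2-D and 3-D,
# which is what motivates relaxation methods.
u_exact = np.linalg.solve(A, f)
print(u_exact)
```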
Basic Theory of the Jacobi Relaxation Method

Before going into the theory of the method, I first want to state that much of the following closely follows the explanation in A Multigrid Tutorial by William Briggs et al. That text explains the material as clearly and concisely as one could hope for. To a large extent, the "theory section" below is a reiteration of their explanation, but with emphasis on the concepts that will be validated in the numerical experiments later. In no way do I claim these derivations as my own.

The following is a derivation of the Jacobi method in matrix form, the relaxation method used for the rest of the paper. We first express the matrix A as the sum of its diagonal component D and its lower and upper triangular components L and U:

A = D + L + U    (1)

so

(D + L + U) u = f.    (2)

We can move the lower and upper triangular parts to the right side:

D u = -(L + U) u + f    (3)

and then multiply both sides by D^{-1}:

u = D^{-1} ( -(L + U) u + f ).    (4)

Defining

R_J = -D^{-1} (L + U),    (5)

we have the iteration in matrix form, and can write, in the notation of Briggs's chapter in Multi-grid Methods:

u^{(1)} = R_J u^{(0)} + D^{-1} f.    (6)

Weighted Jacobi takes a fraction of the previous iterate and adds it to a fraction of the previous iterate with the Jacobi iteration applied:

u^{(1)} = [ (1 - \omega) I + \omega R_J ] u^{(0)} + \omega D^{-1} f.    (7)

We can rewrite the above as

u^{(1)} = R_\omega u^{(0)} + \omega D^{-1} f,    (8)

where

R_\omega = (1 - \omega) I + \omega R_J.    (9)

This iteration and its behavior on the grid are the focus of this paper. Before implementing the method, it is worth predicting the behavior we expect to see theoretically. One way to do this is to look at the eigenvalues of the matrix R_\omega. The following again stems from Briggs, but some of it was not explicitly explained and was left as an "exercise" in the text. We first note that, by the properties of eigenvalues and by eq. 9,

\lambda(R_\omega) = (1 - \omega) + \omega \lambda(R_J).    (10)

Therefore, we first need to find \lambda(R_J). We observe that

L + U = A - 2I,    (11)

so

\lambda(L + U) = \lambda(A) - 2.    (12)

Noting that, for the 1-D case,

D^{-1} = \tfrac{1}{2} I,    (13)

and using eq. 5 and the properties of eigenvalues,

\lambda(R_J) = -\tfrac{1}{2} ( \lambda(A) - 2 ) = 1 - \tfrac{\lambda(A)}{2}.    (14)

Therefore, recalling eq. 10,

\lambda(R_\omega) = 1 - \tfrac{\omega}{2} \lambda(A).    (15)

The k-th eigenvalue of the matrix A is

\lambda_k(A) = 4 \sin^2\left( \frac{k\pi}{2n} \right), \quad 1 \le k \le n - 1.    (16)

So, by eq. 15, the eigenvalues of R_\omega are

\lambda_k(R_\omega) = 1 - 2\omega \sin^2\left( \frac{k\pi}{2n} \right), \quad 1 \le k \le n - 1,    (17)

and the j-th component of the k-th eigenvector is

\omega_{k,j} = \sin\left( \frac{jk\pi}{n} \right), \quad 1 \le k \le n - 1, \; 0 \le j \le n.    (18)

We can make two quick observations here. First, for \omega between 0 and 1, the eigenvalues always lie between -1 and 1, implying that the iteration is stable. Second, recall that any vector in the space of the matrix A can be represented as a weighted sum of the eigenvectors:

u^{(0)} = \sum_{k=1}^{n-1} c_k \omega_k.    (19)

In this case, since the eigenvectors are Fourier modes, there is an additional useful interpretation of the weighting coefficients c_k of this linear combination: they are analogous to the Fourier series coefficients of a periodic replication of the vector u. The other key point to see here is that varying the value of \omega changes how quickly each of these modes is damped.
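To make eqs. 5-9 concrete, here is a minimal sketch of one weighted Jacobi sweep written directly from the matrix form above, together with a numerical check of the eigenvalue prediction in eq. 17. This is my own NumPy illustration under the same 1-D, zero-boundary assumptions, not code from the paper:

```python
import numpy as np

def second_difference_matrix(n):
    # 1-D centered second-difference matrix on the n-1 interior points:
    # 2 on the diagonal, -1 on the first off-diagonals.
    return 2 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)

def weighted_jacobi_sweep(A, u, f, omega):
    # One sweep of weighted Jacobi, u <- R_omega u + omega D^{-1} f (eq. 8),
    # with R_J = -D^{-1}(L + U) (eq. 5) and R_omega = (1-omega)I + omega R_J (eq. 9).
    D = np.diag(np.diag(A))
    R_J = -np.linalg.solve(D, A - D)
    R_omega = (1 - omega) * np.eye(len(u)) + omega * R_J
    return R_omega @ u + omega * np.linalg.solve(D, f)

n, omega = 16, 2.0 / 3.0
A = second_difference_matrix(n)

# Numerical check of eq. 17: the eigenvalues of R_omega should equal
# 1 - 2*omega*sin^2(k*pi/(2n)) for k = 1, ..., n-1.  (Here D = 2I, so
# R_omega = I - (omega/2) A is symmetric and eigvalsh applies.)
R_omega = np.eye(n - 1) - 0.5 * omega * A
k = np.arange(1, n)
predicted = np.sort(1 - 2 * omega * np.sin(k * np.pi / (2 * n)) ** 2)
computed = np.linalg.eigvalsh(R_omega)           # returned in ascending order
print(np.allclose(predicted, computed))          # expect True

# A few sweeps on Laplace's equation (f = 0) from a random start: the iterate
# decays toward the exact solution u = 0, but the smooth components linger.
u = np.random.rand(n - 1)
for _ in range(50):
    u = weighted_jacobi_sweep(A, u, np.zeros(n - 1), omega)
print(np.linalg.norm(u))
```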

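Equations 17-19 also suggest a simple experiment (again my own sketch, not the author's): take a single eigenvector \omega_k as the initial error for Au = 0, apply weighted Jacobi repeatedly, and compare the measured decay against the per-sweep damping factor |1 - 2\omega sin^2(k\pi/2n)| predicted by eq. 17. Smooth (small-k) modes barely decay, while oscillatory modes vanish quickly, which is exactly the disparity multi-grid is designed to exploit:

```python
import numpy as np

n, omega, sweeps = 32, 2.0 / 3.0, 20
A = 2 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)

# Since D = 2I here, the weighted Jacobi iteration matrix is R_omega = I - (omega/2) A.
R_omega = np.eye(n - 1) - 0.5 * omega * A

j = np.arange(1, n)                              # interior grid points j = 1, ..., n-1
for k in (1, 4, 16, 31):
    mode = np.sin(j * k * np.pi / n)             # k-th eigenvector, eq. 18
    err = mode.copy()
    for _ in range(sweeps):
        err = R_omega @ err                      # on Au = 0 the error obeys e <- R_omega e
    measured = np.linalg.norm(err) / np.linalg.norm(mode)
    predicted = abs(1 - 2 * omega * np.sin(k * np.pi / (2 * n)) ** 2) ** sweeps
    print(f"k = {k:2d}: measured decay {measured:.3e}, predicted {predicted:.3e}")
```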
