MIT 18 086 - Solving Large Linear Systems

CHAPTER 3. BOUNDARY VALUE PROBLEMS

3.6 Solving Large Linear Systems

Finite elements and finite differences produce large linear systems KU = F. The matrices K are extremely sparse: they have only a small number of nonzero entries in a typical row. In "physical space" those nonzeros are clustered tightly together, because they come from neighboring nodes and meshpoints. But we cannot number N^2 nodes in a plane in any way that keeps all neighbors close together! So in 2-dimensional problems, and even more in 3-dimensional problems, we meet three questions right away:

1. How best to number the nodes
2. How to use the sparseness of K (when nonzeros can be widely separated)
3. Whether to choose direct elimination or an iterative method

That last point splits this section into two parts: elimination methods in 2D (where node order is important) and iterative methods in 3D (where preconditioning is crucial).

To fix ideas, we will create the n equations KU = F from Laplace's difference equation in an interval, a square, and a cube. With N unknowns in each direction, K has order n = N or N^2 or N^3. There are 3 or 5 or 7 nonzeros in a typical row of the matrix. Second differences in 1D, 2D, and 3D are shown in Figure 3.17.

Figure 3.17: 3, 5, 7 point difference molecules for −u_xx, −u_xx − u_yy, −u_xx − u_yy − u_zz. (The 1D matrix K is tridiagonal and N by N; the 2D matrix is block tridiagonal and N^2 by N^2; the 3D matrix is N^3 by N^3.)

Along a typical row of the matrix, the entries add to zero. In two dimensions this is 4 − 1 − 1 − 1 − 1 = 0. This "zero sum" remains true for finite elements (the element shapes decide the exact numerical entries). It reflects the fact that u = 1 solves Laplace's equation and U_i = 1 has differences equal to zero. The constant vector solves KU = 0 except near the boundaries. When a neighbor is a boundary point where U_i is known, its value moves onto the right side of KU = F. Then that row of K is not zero sum. Otherwise K would be singular, since K*ones(n,1) = zeros(n,1).
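As a quick check of this zero-sum property, here is a small sketch (not from the text, using NumPy in place of MATLAB) that builds the 1D second-difference matrix K and applies it to the constant vector. The result is zero at every interior row and nonzero only at the two boundary rows:

```python
import numpy as np

# 1D second-difference matrix K: 2 on the diagonal, -1 on the off-diagonals
N = 6
K = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)

# K times the constant vector: zero at interior rows (-1 + 2 - 1 = 0),
# nonzero only where a -1 neighbor was lost at the boundary
r = K @ np.ones(N)
print(r)
```

The interior rows sum to zero exactly as in the 4 − 1 − 1 − 1 − 1 = 0 molecule; the boundary rows keep the value 1 that moved onto the right side.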
Using block matrix notation, we can create the 2D matrix K2D from the familiar N by N second difference matrix K. We number the nodes of the square a row at a time (this "natural numbering" is not necessarily best). Then the −1's for the neighbor above and the neighbor below are N positions away from the main diagonal of K2D. The 2D matrix is block tridiagonal with tridiagonal blocks:

$$K = \begin{bmatrix} 2 & -1 & & \\ -1 & 2 & -1 & \\ & \cdot & \cdot & \cdot \\ & & -1 & 2 \end{bmatrix}
\qquad
K2D = \begin{bmatrix} K+2I & -I & & \\ -I & K+2I & -I & \\ & \cdot & \cdot & \cdot \\ & & -I & K+2I \end{bmatrix} \qquad (1)$$

K is N by N; K2D is N^2 by N^2. Elimination in this order: size n = N^2, bandwidth w = N, space nw = N^3, time nw^2 = N^4.

The matrix K2D has 4's down the main diagonal. Its bandwidth w = N is the distance from the diagonal to the nonzeros in −I. Many of the spaces in between are filled during elimination! Then the storage space required for the factors in K = LU is of order nw = N^3. The time is proportional to nw^2 = N^4, when n rows each contain w nonzeros, and w nonzeros below the pivot require elimination.

Those counts are not impossibly large in many practical 2D problems (and we show how they can be reduced). The horrifyingly large counts come for K3D in three dimensions. Suppose the 3D grid is numbered by square cross-sections in the natural order 1, ..., N. Then K3D has blocks of order N^2 from those squares. Each square is numbered as above to produce blocks coming from K2D and I = I2D:

$$K3D = \begin{bmatrix} K2D+2I & -I & & \\ -I & K2D+2I & -I & \\ & \cdot & \cdot & \cdot \\ & & -I & K2D+2I \end{bmatrix}
\qquad
\begin{matrix} \text{Size } n = N^3 \\ \text{Bandwidth } w = N^2 \\ \text{Elimination space } nw = N^5 \\ \text{Elimination time} \approx nw^2 = N^7 \end{matrix}$$

Now the main diagonal contains 6's, and "inside rows" have six −1's. Next to a face or edge or corner of the boundary cube, we lose one or two or three of those −1's.

The good way to create K2D from K and I (N by N) is to use the kron(A, B) command. This Kronecker product replaces each entry a_ij by the block a_ij B. To take second differences in all rows at the same time, and then all columns, use kron:

K2D = kron(K, I) + kron(I, K).   (2)
The identity matrix in two dimensions is I2D = kron(I, I). This adjusts to allow rectangles, with I's of different sizes, and in three dimensions to allow boxes. For a cube we take second differences inside all planes and also in the z-direction:

K3D = kron(K2D, I) + kron(I2D, K).

Having set up these special matrices K2D and K3D, we have to say that there are special ways to work with them. The x, y, z directions are separable. The geometry (a box) is also separable. See Section 7.2 on Fast Poisson Solvers. Here the matrices K and K2D and K3D are serving as models of the type of matrices that we meet.

Minimum Degree Algorithm

We now describe (a little roughly) a useful reordering of the nodes and the equations in K2D U = F. The ordering achieves minimum degree at each step: the number of nonzeros below the pivot row is minimized. This is essentially the algorithm used in MATLAB's command U = K\F, when K has been defined as a sparse matrix. We list some of the functions from the sparfun directory:

speye (sparse identity I)
nnz (number of nonzero entries)
find (find indices of nonzeros)
spy (visualize sparsity pattern)
colamd and symamd (approximate minimum degree permutation of K)

You can test and use the minimum degree algorithms without a careful analysis. The approximations are faster than the exact minimum degree permutations colmmd and symmmd. The speed (in two dimensions) and the roundoff errors are quite reasonable.

In the Laplace examples, the minimum degree ordering of nodes is irregular compared to "a row at a time." The final bandwidth is probably not decreased. But the nonzero entries are postponed as long as possible! That is the key. The difference is shown in the arrow matrix of Figure 3.18. On the left, minimum degree (one nonzero off the diagonal) leads to large bandwidth. But there is no fill-in. Elimination will only change its last row and column. The triangular factors L and U have all the same zeros as A.
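The same kron construction extends to 3D exactly as the formula above says; this sketch (again NumPy, not the text's MATLAB) confirms the 6's on the main diagonal and the bandwidth w = N^2 for the natural ordering by cross-sections:

```python
import numpy as np

N = 4
I = np.eye(N)
K = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
K2D = np.kron(K, I) + np.kron(I, K)
I2D = np.kron(I, I)

# Second differences inside all planes, then in the z-direction
K3D = np.kron(K2D, I) + np.kron(I2D, K)

assert K3D.shape == (N**3, N**3)            # size n = N^3
assert np.all(np.diag(K3D) == 6)            # 6's down the main diagonal
i, j = np.nonzero(K3D)
assert np.max(np.abs(i - j)) == N**2        # bandwidth w = N^2
print("all checks passed")
```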
The space for storage stays at 3n, and elimination needs only n divisions and multiplications and
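The arrow-matrix point can be demonstrated directly. The sketch below uses SciPy's sparse LU with permc_spec='NATURAL' (so the factorization keeps the given order; this substitutes for the MATLAB sparse backslash of the text, and the size n = 50 is an arbitrary choice). Eliminating the dense "hub" row first fills in the whole matrix; saving it for last produces no fill-in at all:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

n = 50

def arrow(hub):
    """Arrow matrix: one dense row and column at index `hub`, else diagonal."""
    A = sp.lil_matrix((n, n))
    A.setdiag(2.0)
    A[hub, :] = -1.0
    A[:, hub] = -1.0
    A[hub, hub] = float(n)   # keep the matrix diagonally dominant
    return A.tocsc()

def fill(A):
    lu = splu(A, permc_spec='NATURAL')   # no reordering: eliminate as given
    return lu.L.nnz + lu.U.nnz

bad = fill(arrow(0))        # hub eliminated first: factors fill in completely
good = fill(arrow(n - 1))   # hub eliminated last: L and U keep A's zeros
print(bad, good)
```

With the hub last, nnz(L) + nnz(U) stays proportional to n (the storage stays at 3n, as the text says); with the hub first it grows like n^2. Postponing the nonzeros is exactly what the minimum degree ordering does automatically.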

