MIT 18.085 - APPLIED LINEAR ALGEBRA

MIT OpenCourseWare, http://ocw.mit.edu
18.085 Computational Science and Engineering I, Fall 2008
For information about citing these materials or our Terms of Use, visit http://ocw.mit.edu/terms.

CHAPTER 1 APPLIED LINEAR ALGEBRA

1.1 FOUR SPECIAL MATRICES

An m by n matrix has m rows and n columns and mn entries. We operate on those rows and columns to solve linear systems Ax = b and eigenvalue problems Ax = λx. From inputs A and b (and from software like MATLAB) we get outputs x and λ. A fast stable algorithm is extremely important, and this book includes fast algorithms.

One purpose of matrices is to store information, but another viewpoint is more important for applied mathematics. Often we see the matrix as an “operator.” A acts on vectors x to produce Ax. The components of x have a meaning: displacements or pressures or voltages or prices or concentrations. The operator A also has a meaning: in this chapter A takes differences. Then Ax represents pressure differences or voltage drops or price differentials. Before we turn the problem over to the machine (and also after, when we interpret A\b or eig(A)), it is the meaning we want, as well as the numbers.

This book begins with four special families of matrices: simple and useful, absolutely basic. We look first at the properties of these particular matrices Kn, Cn, Tn, and Bn. (Some properties are obvious, others are hidden.) It is terrific to practice linear algebra by working with genuinely important matrices. Here are K2, K3, K4 in the first family, with −1 and 2 and −1 down the diagonals:

    K_2 = \begin{bmatrix} 2 & -1 \\ -1 & 2 \end{bmatrix}
    \qquad
    K_3 = \begin{bmatrix} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 2 \end{bmatrix}
    \qquad
    K_4 = \begin{bmatrix} 2 & -1 & 0 & 0 \\ -1 & 2 & -1 & 0 \\ 0 & -1 & 2 & -1 \\ 0 & 0 & -1 & 2 \end{bmatrix}

What is significant about K2 and K3 and K4, and eventually the n by n matrix Kn? I will give six answers in the same order that my class gave them, starting with four properties of the K's that you can see immediately.

1. These matrices are symmetric. The entry in row i, column j also appears in row j, column i. Thus K_ij = K_ji, on opposite sides of the main diagonal. Symmetry can be expressed by transposing the whole matrix at once: K = K^T.

2. The matrices Kn are sparse. Most of their entries are zero when n gets large. K_1000 has a million entries, but only 1000 + 999 + 999 are nonzero.

3. The nonzeros lie in a “band” around the main diagonal, so each Kn is banded. The band has only three diagonals, so these matrices are tridiagonal.

Because K is a tridiagonal matrix, Ku = f can be quickly solved. If the unknown vector u has a thousand components, we can find them in a few thousand steps (which take a small fraction of a second). For a full matrix of order n = 1000, solving Ku = f would take hundreds of millions of steps. Of course we have to ask if the linear equations have a solution in the first place. That question is coming soon.

4. The matrices have constant diagonals. Right away that property wakes up Fourier. It signifies that something is not changing when we move in space or time. The problem is shift-invariant or time-invariant. Coefficients are constant. The tridiagonal matrix is entirely determined by the three numbers −1, 2, −1. These are actually “second difference matrices,” but my class never says that.

The whole world of Fourier transforms is linked to constant-diagonal matrices. In signal processing, the matrix D = K/4 is a “highpass filter.” Du picks out the rapidly varying (high frequency) part of a vector u. It gives a convolution with (1/4)(−1, 2, −1).
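
Properties 1 through 3 are easy to check numerically. Here is a minimal MATLAB sketch (the variable names are ours, not from the text) that builds Kn as a sparse tridiagonal matrix and confirms its symmetry, its nonzero count, and the fast banded solve of Ku = f:

    n = 1000;
    e = ones(n, 1);
    K = spdiags([-e 2*e -e], -1:1, n, n);   % sparse K_n: -1, 2, -1 down the diagonals
    nnz(K)                                  % 2998 nonzeros = 1000 + 999 + 999
    isequal(K, K')                          % true: K is symmetric
    f = e;
    u = K \ f;                              % backslash exploits the band: work grows like n
    Du = (K/4) * u;                         % the highpass filter D = K/4 acting on a vector

Storing K with spdiags rather than as a full matrix is what keeps both the memory and the solve time proportional to n, which is exactly the point of properties 2 and 3.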
We use these words to call attention to the Fourier part (Chapter 4) of this book. Mathematicians call K a Toeplitz matrix, and MATLAB uses that name: the command K = toeplitz([2 -1 zeros(1,2)]) constructs K4 from row 1.

Actually, Fourier will be happier if we make two small changes in Kn. Insert −1 in the southwest and northeast corners. This completes two diagonals (which circle around). All four diagonals of C4 wrap around in this “periodic matrix” or “cyclic convolution” or circulant matrix:

    C_4 = \begin{bmatrix} 2 & -1 & 0 & -1 \\ -1 & 2 & -1 & 0 \\ 0 & -1 & 2 & -1 \\ -1 & 0 & -1 & 2 \end{bmatrix}
    = toeplitz([2 -1 0 -1])

This matrix is singular. It is not invertible. Its determinant is zero. Rather than computing that determinant, it is much better to identify a nonzero vector u that solves C4u = 0. (If C4 had an inverse, the only solution to C4u = 0 would be the zero vector. We could multiply by C4^{-1} to find u = 0.) For this matrix, the column vector u of all ones (printed as u = (1, 1, 1, 1) with commas) solves C4u = 0.

The columns of C4 add to the zero column. This vector u = ones(4,1) is in the nullspace of C4. The nullspace contains all solutions to Cu = 0. Whenever the entries along every row of a matrix add to zero, the matrix is certainly singular. The same all-ones vector u is responsible. Matrix multiplication Cu adds the column vectors and produces zero.

The constant vector u = (1, 1, 1, 1) or u = (c, c, c, c) in the nullspace is like the constant C when we integrate a function. In calculus, this “arbitrary constant” is not knowable from the derivative. In linear algebra, the constant in u = (c, c, c, c) is not knowable from Cu = 0.

5. All the matrices K = Kn are invertible. They are not singular (unlike Cn). There is a square matrix K^{-1} such that K^{-1}K = I = identity matrix. And if a square matrix has an inverse on the left, then also KK^{-1} = I. This “inverse matrix” is also symmetric when K is symmetric. But K^{-1} is not sparse.

Invertibility is not easy to decide from a quick look at a matrix. Theoretically, one test is to compute the determinant. There is an inverse except when det K = 0, because the formula for K^{-1} includes a division by det K. But computing the determinant is almost never done in practice! It is a poor way to find u = K^{-1}f.

What we actually do is to go ahead with the elimination steps that solve Ku = f. Those steps simplify the matrix, to make it triangular. The nonzero pivots on the main diagonal of the triangular matrix show that the original K is invertible. (Important: We don't want or need K^{-1} to find u = K^{-1}f. The inverse would be a full matrix, ...
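
A short MATLAB sketch (again, the variable names are ours, not from the text) confirms both claims above: the all-ones vector lies in the nullspace of C4, and elimination on K4 produces four nonzero pivots:

    K = toeplitz([2 -1 0 0]);     % K4 from its first row
    C = toeplitz([2 -1 0 -1]);    % C4: the -1 corners are filled in
    C * ones(4, 1)                % the zero vector, so u = (1,1,1,1) is in the nullspace
    rank(C)                       % 3: one dimension lost, C4 is singular
    [L, U] = lu(K);               % elimination produces the triangular U
    diag(U)'                      % pivots 2, 3/2, 4/3, 5/4: all nonzero, K4 is invertible
    u = K \ [1; 1; 1; 1];         % solve K4*u = f directly, never forming inv(K)

The pivots 2, 3/2, 4/3, 5/4 on the diagonal of U multiply to 5, which is det K4, but it is the elimination itself, not the determinant, that settles invertibility in practice.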

