MIT OpenCourseWare
http://ocw.mit.edu

18.02 Multivariable Calculus, Fall 2007

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.


M. Matrices and Linear Algebra

1. Matrix algebra.

In section D we calculated the determinants of square arrays of numbers. Such arrays are important in mathematics and its applications; they are called matrices. In general, they need not be square, only rectangular.

A rectangular array of numbers having m rows and n columns is called an m × n matrix. The number in the i-th row and j-th column (where 1 ≤ i ≤ m, 1 ≤ j ≤ n) is called the ij-entry, and denoted aij; the matrix itself is denoted by A, or sometimes by (aij). Two matrices of the same size are equal if corresponding entries are equal.

Two special kinds of matrices are the row vectors: the 1 × n matrices (a1, a2, ..., an); and the column vectors: the m × 1 matrices consisting of a column of m numbers. From now on, row vectors and column vectors will be indicated by boldface small letters; when writing them by hand, put an arrow over the symbol.

Matrix operations

There are four basic operations which produce new matrices from old.

1. Scalar multiplication: multiply each entry by c:  cA = (c aij).

2. Matrix addition: add the corresponding entries:  A + B = (aij + bij); the two matrices must have the same number of rows and the same number of columns.

3. Transposition: the transpose of the m × n matrix A is the n × m matrix obtained by making the rows of A the columns of the new matrix. Common notations for the transpose are A^T and A'; using the first, we can write its definition as A^T = (aji). If the matrix A is square, you can think of A^T as the matrix obtained by flipping A over around its main diagonal.

Example 1.1  Let

    A = [ 2  -3 ]
        [ 1   5 ]

and let B be a second matrix of the same size. Find A + B, A^T, and 2A - 3B.

4. Matrix multiplication. This is the most important operation. Schematically, we have

       A        B     =     C
     m × n    n × p       m × p

The essential points are:

1. For the multiplication to be defined, A must have as many columns as B has rows;
2. The ij-th entry of the product matrix C is the dot product of the i-th row of A with the j-th column of B.

The two most important types of multiplication, for multivariable calculus and differential equations, are:

1. AB, where A and B are two square matrices of the same size (these can always be multiplied);
2. Ab, where A is a square n × n matrix, and b is a column n-vector.

Laws and properties of matrix multiplication

M-1. A(B + C) = AB + AC,  (A + B)C = AC + BC   (distributive laws)

M-2. (AB)C = A(BC);  (cA)B = c(AB)   (associative laws)

In both cases, the matrices must have compatible dimensions.

M-3. Let

    I3 = [ 1  0  0 ]
         [ 0  1  0 ]
         [ 0  0  1 ];

then AI = A and IA = A for any 3 × 3 matrix A. I is called the identity matrix of order 3. There is an analogously defined square identity matrix In of any order n, obeying the same multiplication laws.

M-4. In general, for two square n × n matrices A and B, AB ≠ BA: matrix multiplication is not commutative. (There are a few important exceptions, but they are very special; for example, the equality AI = IA, where I is the identity matrix.)

M-5. For two square n × n matrices A and B, we have the determinant law

    |AB| = |A| |B|,   also written   det(AB) = det(A) det(B).

For 2 × 2 matrices this can be verified by direct calculation, but this naive method is unsuitable for larger matrices; it's better to use some theory. We will simply assume it in these notes; we will also assume the other results above (of which only the associative law M-2 offers any difficulty in the proof).
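The operations and laws above are easy to experiment with numerically. The following short Python/NumPy sketch is not part of the original notes: the matrix B, the use of NumPy, and the choice of entries are assumptions made purely for illustration (A reuses the Example 1.1 matrix as printed above). It exercises the four basic operations and checks the determinant law M-5 on a 2 × 2 example.

    # Illustrative sketch: the four matrix operations and the determinant law M-5.
    import numpy as np

    A = np.array([[2.0, -3.0],
                  [1.0,  5.0]])      # the matrix A of Example 1.1
    B = np.array([[1.0,  0.0],
                  [4.0, -2.0]])      # an arbitrary 2 x 2 matrix, chosen for illustration

    print(3 * A)      # scalar multiplication: cA = (c aij)
    print(A + B)      # matrix addition (same shape required)
    print(A.T)        # transposition: A^T = (aji)
    print(A @ B)      # matrix multiplication: ij-entry = (row i of A) . (column j of B)

    # M-5: det(AB) = det(A) det(B), up to floating-point roundoff.
    lhs = np.linalg.det(A @ B)
    rhs = np.linalg.det(A) * np.linalg.det(B)
    print(np.isclose(lhs, rhs))      # True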
M-6. A useful fact is this: matrix multiplication can be used to pick out a row or column of a given matrix: you multiply by a simple row or column vector to do this. Two examples should give the idea:

    [ 1  2  3 ] [ 0 ]   [ 2 ]
    [ 4  5  6 ] [ 1 ] = [ 5 ]      the second column
    [ 7  8  9 ] [ 0 ]   [ 8 ]

                [ 1  2  3 ]
    (1  0  0)   [ 4  5  6 ]  =  (1  2  3)      the first row
                [ 7  8  9 ]

Exercises: Section 1F

2. Solving square systems of linear equations; inverse matrices.

Linear algebra is essentially about solving systems of linear equations, an important application of mathematics to real-world problems in engineering, business, and science, especially the social sciences. Here we will just stick to the most important case, where the system is square, i.e., there are as many variables as there are equations. In low dimensions such systems look as follows (we give a 2 × 2 system and a 3 × 3 system):

(7)    a11 x1 + a12 x2 = b1        a11 x1 + a12 x2 + a13 x3 = b1
       a21 x1 + a22 x2 = b2        a21 x1 + a22 x2 + a23 x3 = b2
                                   a31 x1 + a32 x2 + a33 x3 = b3

In these systems, the aij and bi are given, and we want to solve for the xi.

As a simple mathematical example, consider the linear change of coordinates given by the equations

    x1 = a11 y1 + a12 y2
    x2 = a21 y1 + a22 y2

If we know the y-coordinates of a point, then these equations tell us its x-coordinates immediately. But if instead we are given the x-coordinates, to find the y-coordinates we must solve a system of equations like (7) above, with the yi as the unknowns.

Using matrix multiplication, we can abbreviate the system on the right in (7) by

(8)    Ax = b,

where A is the square matrix of coefficients (aij). (The 2 × 2 system and the n × n system would be written analogously; all of them are abbreviated by the same equation Ax = b, notice.)

You have had experience with solving small systems like (7) by elimination: multiplying the equations by constants and subtracting them from each other, the purpose being to eliminate all the variables but one. When elimination is done systematically, it is an efficient method. Here, however, we want to talk about another method, one more compatible with hand-held calculators and MATLAB, and which leads more rapidly to certain key ideas and results in linear algebra.

Inverse matrices.

Referring to the system (8), suppose we can find a square matrix M, the same size as A, such that

(9)    MA = I   (the identity matrix).

We can then solve (8) by matrix multiplication, using the successive steps

(10)    Ax = b,    M(Ax) = Mb,    x = Mb,

where the step M(Ax) = x is justified by

    M(Ax) = (MA)x,   by M-2;
          = Ix,      by (9);
          = x,       by M-3.

Moreover, the solution is unique, since (10) gives an explicit formula for it.

The same procedure solves the problem of determining the inverse to the linear change of coordinates x = Ay, as the next example illustrates.

Example 2.1  Let

    A = [ 1  2 ]       M = [ -3   2 ]
        [ 2  3 ],          [  2  -1 ].

Verify that M satisfies (9) above, and use it to solve the first system below for the xi, and the second for the yi in terms of the xi:

    x1 + 2x2 = b1          y1 + 2y2 = x1
   2x1 + 3x2 = b2         2y1 + 3y2 = x2

Solution. We have

    [ 1  2 ] [ -3   2 ]   [ 1  0 ]
    [ 2  3 ] [  2  -1 ] = [ 0  1 ],

by matrix multiplication. To solve the first system, we multiply both sides of Ax = b on the left by M, which by (10) gives x = Mb.
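The inverse-matrix method is also easy to check by machine. The sketch below is not part of the original notes: the right-hand side b and the NumPy calls are assumptions made for illustration. It verifies that the M of Example 2.1 satisfies (9), solves a square system via x = Mb as in (10), and then lets the library compute the inverse and the solution directly.

    # Illustrative sketch: verify (9) for Example 2.1 and solve Ax = b via x = Mb.
    import numpy as np

    A = np.array([[1.0, 2.0],
                  [2.0, 3.0]])
    M = np.array([[-3.0,  2.0],
                  [ 2.0, -1.0]])

    print(M @ A)                           # the 2 x 2 identity, so M satisfies (9)
    print(np.allclose(M @ A, np.eye(2)))   # True

    b = np.array([1.0, 4.0])               # arbitrary right-hand side, for illustration
    x = M @ b                              # x = Mb, as in (10)
    print(x, np.allclose(A @ x, b))        # the solution, and a check that Ax = b

    print(np.linalg.inv(A))                # reproduces M
    print(np.linalg.solve(A, b))           # same x, computed without forming the inverse

In practice, np.linalg.solve is preferred to forming the inverse explicitly; it works by systematic elimination, which echoes the remark above that elimination done systematically is an efficient method.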

