HARVARD MATH 19B - Lecture 9: Matrix algebra
Math 19b: Linear Algebra with Probability, Oliver Knill, Spring 2011

Lecture 9: Matrix algebra

If B is an n x m matrix and A is an m x p matrix, then the matrix product BA is defined as the n x p matrix with entries (BA)_ij = sum_{k=1}^{m} B_ik A_kj.

1) If B is a 3 x 4 matrix and A is a 4 x 2 matrix, then BA is a 3 x 2 matrix. For example:

B = [ 1 3 5 7 ]    A = [ 1 3 ]    BA = [ 15 13 ]
    [ 3 1 8 1 ]        [ 3 1 ]         [ 14 11 ]
    [ 1 0 9 2 ]        [ 1 0 ]         [ 10  5 ]
                       [ 0 1 ]

If A is an n x n matrix and T : x -> Ax has an inverse S, then S is linear. The matrix A^{-1} belonging to S = T^{-1} is called the inverse matrix of A.

Matrix multiplication generalizes the common multiplication of numbers. We can write the dot product between two vectors as a matrix product by writing the first vector as a 1 x n matrix (a row vector) and the second as an n x 1 matrix (a column vector):

[ 1 2 3 ] [ 2 ]
          [ 3 ]  =  [ 20 ].
          [ 4 ]

Note that AB != BA in general, and that for n x n matrices the inverse A^{-1} does not always exist. Otherwise, for n x n matrices the same rules apply as for numbers:

A(BC) = (AB)C,   A A^{-1} = A^{-1} A = 1_n,   (AB)^{-1} = B^{-1} A^{-1},
A(B + C) = AB + AC,   (B + C)A = BA + CA,  etc.

2) The entries of matrices can themselves be matrices. If B is an n x p matrix and A is a p x m matrix, and the entries are k x k matrices, then BA is an n x m matrix, where each entry (BA)_ij = sum_{l=1}^{p} B_il A_lj is a k x k matrix. Partitioning matrices can be useful to improve the speed of matrix multiplication. If

A = [ A11 A12 ]
    [  0  A22 ]

where the A_ij are k x k matrices such that A11 and A22 are invertible, then one can write the inverse as

B = [ A11^{-1}   -A11^{-1} A12 A22^{-1} ]
    [    0              A22^{-1}        ].

3) Let us associate to a small blogging network the matrix

A = [ 0 1 1 1 ]
    [ 1 0 1 0 ]
    [ 1 1 0 1 ]
    [ 1 0 1 0 ]

and look at the spread of some news. Assume the source of the news about some politician is the first entry (maybe the gossip site "Gawker"):

x = [ 1 ]
    [ 0 ]
    [ 0 ]
    [ 0 ].

The vector Ax has a 1 at the places where the news could be in the next hour.
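The entry formula (BA)_ij = sum_k B_ik A_kj can be checked directly in code. The sketch below (plain Python, no libraries; the helper name `matmul` is my own) implements exactly this sum and reproduces the 3 x 2 product from the example:

```python
# Matrix product following the entry formula (BA)_ij = sum_k B_ik * A_kj.
def matmul(B, A):
    n, m, p = len(B), len(A), len(A[0])
    assert all(len(row) == m for row in B), "inner dimensions must match"
    return [[sum(B[i][k] * A[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

B = [[1, 3, 5, 7],
     [3, 1, 8, 1],
     [1, 0, 9, 2]]
A = [[1, 3],
     [3, 1],
     [1, 0],
     [0, 1]]

print(matmul(B, A))  # [[15, 13], [14, 11], [10, 5]]

# The dot product from the text is the special case of a 1 x n row
# vector times an n x 1 column vector:
print(matmul([[1, 2, 3]], [[2], [3], [4]]))  # [[20]]
```

The same function also handles the blogging-network example: multiplying A by the source vector x gives the possible positions of the news after one hour.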
The vector (AA)x tells in how many ways the news can travel in 2 steps. In our case, it can go back to the page itself in three different ways.

Matrices help to solve combinatorial problems. One appears in the movie "Good Will Hunting". For example, what does A^100 x tell about the news distribution on a large network? What does it mean if A^100 has no zero entries?

If A is an n x n matrix and the system of linear equations Ax = y has a unique solution for all y, we write x = A^{-1} y. The inverse matrix can be computed using Gauss-Jordan elimination. Let us see how this works.

Let 1_n be the n x n identity matrix. Start with [A | 1_n] and perform Gauss-Jordan elimination. Then

rref([A | 1_n]) = [1_n | A^{-1}].

Proof. The elimination process solves the systems A x = e_i simultaneously. This leads to solutions v_i which are the columns of the inverse matrix A^{-1}, because A^{-1} e_i = v_i.

[ 2 6 | 1 0 ]      [A | 1_2]
[ 1 4 | 0 1 ]

[ 1 3 | 1/2 0 ]    (divide the first row by 2)
[ 1 4 |  0  1 ]

[ 1 3 | 1/2  0 ]   (subtract the first row from the second)
[ 0 1 | -1/2 1 ]

[ 1 0 |  2  -3 ]   [1_2 | A^{-1}]
[ 0 1 | -1/2 1 ]

The inverse is

A^{-1} = [  2   -3 ]
         [ -1/2  1 ].

If ad - bc != 0, the inverse of a linear transformation x -> Ax with

A = [ a b ]
    [ c d ]

is given by the matrix

A^{-1} = [  d -b ] / (ad - bc).
         [ -c  a ]

Shear:

A = [ 1 0 ]    A^{-1} = [ 1 0 ]
    [-1 1 ]             [ 1 1 ]

Diagonal:

A = [ 2 0 ]    A^{-1} = [ 1/2  0  ]
    [ 0 3 ]             [  0  1/3 ]

Reflection:

A = [ cos(2a)  sin(2a) ]    A^{-1} = A = [ cos(2a)  sin(2a) ]
    [ sin(2a) -cos(2a) ]                 [ sin(2a) -cos(2a) ]

Rotation:

A = [ cos(a)  sin(a) ]    A^{-1} = [ cos(a) -sin(a) ]
    [-sin(a)  cos(a) ]             [ sin(a)  cos(a) ]

Rotation-Dilation:

A = [ a -b ]    A^{-1} = [  a/r^2  b/r^2 ],   r^2 = a^2 + b^2
    [ b  a ]             [ -b/r^2  a/r^2 ]

Homework due February 16, 2011

1) Find the inverse of the following matrix:

A = [ 1 1 1 1 1 ]
    [ 1 1 1 1 0 ]
    [ 1 1 1 0 0 ]
    [ 1 1 0 0 0 ]
    [ 1 0 0 0 0 ].

2) The probability density of a multivariate normal distribution centered at the origin is a multiple of

f(x) = exp(-x . A^{-1} x).

We will see the covariance matrix later. It encodes how the different coordinates of a random vector are correlated. Assume

A = [ 1 1 1 ]
    [ 0 1 1 ]
    [ 1 1 0 ].

Find f([1, 2, -3]).

3) This is a system we will analyze in more detail later. For now we are only interested in the algebra.
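The recipe rref([A | 1_n]) = [1_n | A^{-1}] can be sketched in a few lines of plain Python. This is an illustration of the elimination idea, not a robust numerical routine; the function name `invert` is my own, and partial pivoting is added so the sketch does not divide by zero when rows need reordering:

```python
def invert(A):
    """Invert an n x n matrix by row-reducing the augmented matrix [A | I_n]."""
    n = len(A)
    # Build [A | I_n] with float entries.
    M = [list(map(float, A[i])) + [1.0 if j == i else 0.0 for j in range(n)]
         for i in range(n)]
    for col in range(n):
        # Partial pivoting: pick the row with the largest entry in this column.
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        if abs(M[piv][col]) < 1e-12:
            raise ValueError("matrix is not invertible")
        M[col], M[piv] = M[piv], M[col]
        # Scale the pivot row to make the pivot 1 ...
        pivot = M[col][col]
        M[col] = [x / pivot for x in M[col]]
        # ... then eliminate the column in every other row.
        for r in range(n):
            if r != col and M[r][col] != 0.0:
                factor = M[r][col]
                M[r] = [x - factor * y for x, y in zip(M[r], M[col])]
    # The right half of [1_n | A^{-1}] is the inverse.
    return [row[n:] for row in M]

print(invert([[2, 6], [1, 4]]))    # [[2.0, -3.0], [-0.5, 1.0]]
print(invert([[1, 0], [-1, 1]]))   # shear example: [[1.0, 0.0], [1.0, 1.0]]
```

The first call reproduces the worked 2 x 2 elimination above, and the second recovers the inverse of the shear matrix.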
Tom the cat moves each minute randomly on the spots 1, 5, 4, jumping to neighboring sites only. At the same time Jerry, the mouse, moves on the spots 1, 2, 3, also jumping to neighboring sites. The possible position combinations (2,5), (3,4), (3,1), (1,4), (1,1) and the transitions between them are encoded in the matrix

A = [  0   1  1  1  0 ]
    [ 1/4  0  0  0  0 ]
    [ 1/4  0  0  0  0 ]
    [ 1/4  0  0  0  0 ]
    [ 1/4  0  0  0  1 ].

This means, for example, that we can go from state (2,5) with equal probability to all the other states. In state (3,4) or (3,1), the pair (Tom, Jerry) moves back to (2,5). If state (1,1) is reached, then Jerry's life ends and Tom goes to sleep there. We can read off that the probability that Jerry gets eaten in one step is 1/4. Compute A^4. The first column of this matrix gives the probability distribution after four steps. The last entry of the first column gives the probability that Jerry got swallowed after 4 steps.
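The power A^4 asked for in problem 3 can be computed exactly with plain Python, using the `fractions` module so the 1/4 entries stay exact rational numbers (the helper name `matmul` is my own):

```python
from fractions import Fraction as F

# Transition matrix over the states (2,5), (3,4), (3,1), (1,4), (1,1),
# with all entries converted to exact fractions.
A = [[F(v) for v in row] for row in
     [[0,       1, 1, 1, 0],
      [F(1, 4), 0, 0, 0, 0],
      [F(1, 4), 0, 0, 0, 0],
      [F(1, 4), 0, 0, 0, 0],
      [F(1, 4), 0, 0, 0, 1]]]

def matmul(B, C):
    n, m, p = len(B), len(C), len(C[0])
    return [[sum(B[i][k] * C[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

A2 = matmul(A, A)
A4 = matmul(A2, A2)  # A^4

# First column = probability distribution after four steps,
# starting in state (2,5); its last entry is Jerry's doom.
print(A4[4][0])  # 7/16
```

Starting from state (2,5), the first column of A^4 comes out as (9/16, 0, 0, 0, 7/16): after four minutes the pair is back at (2,5) with probability 9/16, and Jerry has been swallowed with probability 7/16.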