Review

• We model a surfer randomly clicking webpages. Let $\mathrm{PR}_n(A)$ be the probability that he is at page $A$ after $n$ steps. For instance,
\[
\mathrm{PR}_n(A) = \mathrm{PR}_{n-1}(B)\cdot\tfrac{1}{2} + \mathrm{PR}_{n-1}(C)\cdot\tfrac{1}{1} + \mathrm{PR}_{n-1}(D)\cdot\tfrac{0}{1},
\]
and collecting all four such equations:
\[
\begin{pmatrix} \mathrm{PR}_n(A)\\ \mathrm{PR}_n(B)\\ \mathrm{PR}_n(C)\\ \mathrm{PR}_n(D) \end{pmatrix}
= \underbrace{\begin{pmatrix} 0 & \frac12 & 1 & 0\\[2pt] \frac13 & 0 & 0 & 0\\[2pt] \frac13 & 0 & 0 & 1\\[2pt] \frac13 & \frac12 & 0 & 0 \end{pmatrix}}_{=\,T}
\begin{pmatrix} \mathrm{PR}_{n-1}(A)\\ \mathrm{PR}_{n-1}(B)\\ \mathrm{PR}_{n-1}(C)\\ \mathrm{PR}_{n-1}(D) \end{pmatrix}
\]
[Diagram: the link graph on the pages A, B, C, D]

• The transition matrix $T$ is a Markov matrix: its columns add to $1$ and it has no negative entries.

• The PageRank of page $A$ is $\mathrm{PR}(A) = \mathrm{PR}_\infty(A)$ (assuming the limit exists). It is the probability that the surfer is at page $A$ after $n$ steps, with $n \to \infty$.

• The PageRank vector $\bigl(\mathrm{PR}(A), \mathrm{PR}(B), \mathrm{PR}(C), \mathrm{PR}(D)\bigr)^T$ satisfies
\[
\begin{pmatrix} \mathrm{PR}(A)\\ \mathrm{PR}(B)\\ \mathrm{PR}(C)\\ \mathrm{PR}(D) \end{pmatrix}
= T \begin{pmatrix} \mathrm{PR}(A)\\ \mathrm{PR}(B)\\ \mathrm{PR}(C)\\ \mathrm{PR}(D) \end{pmatrix}.
\]
It is an eigenvector of the transition matrix $T$ with eigenvalue $1$.

• We compute it by row reduction:
\[
T - I = \begin{pmatrix} -1 & \frac12 & 1 & 0\\[2pt] \frac13 & -1 & 0 & 0\\[2pt] \frac13 & 0 & -1 & 1\\[2pt] \frac13 & \frac12 & 0 & -1 \end{pmatrix}
\;\xrightarrow{\text{RREF}}\;
\begin{pmatrix} 1 & 0 & 0 & -2\\ 0 & 1 & 0 & -\frac23\\ 0 & 0 & 1 & -\frac53\\ 0 & 0 & 0 & 0 \end{pmatrix}
\]
so the eigenspace of $\lambda = 1$ is spanned by $\bigl(2, \tfrac23, \tfrac53, 1\bigr)^T$. Normalizing the entries to sum to $1$:
\[
\begin{pmatrix} \mathrm{PR}(A)\\ \mathrm{PR}(B)\\ \mathrm{PR}(C)\\ \mathrm{PR}(D) \end{pmatrix}
= \frac{3}{16}\begin{pmatrix} 2\\ \frac23\\ \frac53\\ 1 \end{pmatrix}
= \begin{pmatrix} 0.375\\ 0.125\\ 0.313\\ 0.188 \end{pmatrix}
\]
This is the PageRank vector.

• The corresponding ranking of the webpages is A, C, D, B.

Remark 1. In practical situations, the system might be too large for finding the eigenvector by elimination.
• Google reports having met about 60 trillion webpages.
• Google's search index is over 100,000,000 gigabytes.
• The number of Google's servers is secret; estimated at about 2,500,000.
• There are more than 1,000,000,000 websites (i.e.
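None of the following code is part of the notes; it is a small NumPy sketch (assuming NumPy is available) that reproduces the computation above: the $\lambda = 1$ eigenvector of $T$, normalized so its entries sum to $1$, together with the power-method iterates $T^n v_0$ that are discussed later in these notes.

```python
import numpy as np

# Transition matrix T from the review; columns sum to 1 (Markov matrix).
T = np.array([[0,   1/2, 1, 0],
              [1/3, 0,   0, 0],
              [1/3, 0,   0, 1],
              [1/3, 1/2, 0, 0]])

# PageRank vector: the eigenvector of T for eigenvalue 1, scaled so its
# entries add up to 1 (so they can be read as probabilities).
eigvals, eigvecs = np.linalg.eig(T)
v = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1))])
pagerank = v / v.sum()
print(pagerank)  # approximately (0.375, 0.125, 0.3125, 0.1875)

# The same vector via the power method: repeatedly apply T to
# the uniform starting vector v0 = (1/4, 1/4, 1/4, 1/4).
w = np.full(4, 1/4)
for _ in range(100):
    w = T @ w
```

Note that the power method never forms $T - I$ or row reduces anything; it only needs matrix-vector products, which is what makes it attractive for the gigantic sparse matrices mentioned in Remark 1.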
hostnames; about 75% not active).

Armin Straub
[email protected]

• Thus we have a gigantic but very sparse matrix. An alternative to elimination is the power method:

If $T$ is an (acyclic and irreducible) Markov matrix, then for any $v_0$ the vectors $T^n v_0$ converge to an eigenvector with eigenvalue $1$.

Here:
\[
T = \begin{pmatrix} 0 & \frac12 & 1 & 0\\[2pt] \frac13 & 0 & 0 & 0\\[2pt] \frac13 & 0 & 0 & 1\\[2pt] \frac13 & \frac12 & 0 & 0 \end{pmatrix},
\qquad
\begin{pmatrix} \mathrm{PR}(A)\\ \mathrm{PR}(B)\\ \mathrm{PR}(C)\\ \mathrm{PR}(D) \end{pmatrix} = \begin{pmatrix} 0.375\\ 0.125\\ 0.313\\ 0.188 \end{pmatrix}
\]
\[
T\begin{pmatrix} 1/4\\ 1/4\\ 1/4\\ 1/4 \end{pmatrix} = \begin{pmatrix} 3/8\\ 1/12\\ 1/3\\ 5/24 \end{pmatrix} = \begin{pmatrix} 0.375\\ 0.083\\ 0.333\\ 0.208 \end{pmatrix}
\]
Note that the ranking of the webpages is already A, C, D, B if we stop here.
\[
T^2\begin{pmatrix} 1/4\\ 1/4\\ 1/4\\ 1/4 \end{pmatrix} = \begin{pmatrix} 0.375\\ 0.125\\ 0.333\\ 0.167 \end{pmatrix},
\qquad
T^3\begin{pmatrix} 1/4\\ 1/4\\ 1/4\\ 1/4 \end{pmatrix} = \begin{pmatrix} 0.396\\ 0.125\\ 0.292\\ 0.188 \end{pmatrix}
\]

Remark 2.
• If all entries of $T$ are positive, then the power method is guaranteed to work.
• In the context of PageRank, we can make sure that this is the case by replacing $T$ with
\[
(1-p)\cdot\begin{pmatrix} 0 & \frac12 & 1 & 0\\[2pt] \frac13 & 0 & 0 & 0\\[2pt] \frac13 & 0 & 0 & 1\\[2pt] \frac13 & \frac12 & 0 & 0 \end{pmatrix}
+ p\cdot\begin{pmatrix} \frac14 & \frac14 & \frac14 & \frac14\\[2pt] \frac14 & \frac14 & \frac14 & \frac14\\[2pt] \frac14 & \frac14 & \frac14 & \frac14\\[2pt] \frac14 & \frac14 & \frac14 & \frac14 \end{pmatrix}.
\]
Just to make sure: this is still a Markov matrix, now with positive entries. Google used to use $p = 0.15$.
• Why does $T^n v_0$ converge to an eigenvector with eigenvalue $1$? Under the assumptions on $T$, its other eigenvalues $\lambda$ satisfy $|\lambda| < 1$. Now, think in terms of a basis $x_1, \dots, x_n$ of eigenvectors:
\[
T^m(c_1 x_1 + \dots + c_n x_n) = c_1\lambda_1^m x_1 + \dots + c_n\lambda_n^m x_n
\]
As $m$ increases, the terms $\lambda_i^m$ with $\lambda_i \neq 1$ go to zero, and what is left over is an eigenvector with eigenvalue $1$.

Solving differential equations

Example 3. Which functions $y(t)$ satisfy the differential equation $y' = y$?
Solution: $y(t) = e^t$ and, more generally, $y(t) = Ce^t$. (And nothing else.)
Recall from Calculus the Taylor series
\[
e^t = 1 + t + \frac{t^2}{2!} + \frac{t^3}{3!} + \cdots
\]

Example 4. The differential equation $y' = ay$ with initial condition $y(0) = C$ is solved by $y(t) = Ce^{at}$. (This solution is unique.)
Why? Because $y'(t) = aCe^{at} = ay(t)$ and $y(0) = C$.

Example 5.
Our goal is to solve (systems of) differential equations like:
\[
\begin{aligned}
y_1' &= 2y_1\\
y_2' &= -y_1 + 3y_2 + y_3\\
y_3' &= -y_1 + y_2 + 3y_3
\end{aligned}
\qquad
\begin{aligned}
y_1(0) &= 1\\
y_2(0) &= 0\\
y_3(0) &= 2
\end{aligned}
\]
In matrix form:
\[
\boldsymbol{y}' = \begin{pmatrix} 2 & 0 & 0\\ -1 & 3 & 1\\ -1 & 1 & 3 \end{pmatrix}\boldsymbol{y},
\qquad
\boldsymbol{y}(0) = \begin{pmatrix} 1\\ 0\\ 2 \end{pmatrix}
\]
Key idea: to solve $\boldsymbol{y}' = A\boldsymbol{y}$, introduce the matrix exponential $e^{At}$.

Review of diagonalization

• If $Ax = \lambda x$, then $x$ is an eigenvector of $A$ with eigenvalue $\lambda$.
• Put the eigenvectors $x_1, \dots, x_n$ as columns into a matrix $P$. Then $Ax_i = \lambda_i x_i$ translates into
\[
A\begin{pmatrix} | & & |\\ x_1 & \cdots & x_n\\ | & & | \end{pmatrix}
= \begin{pmatrix} | & & |\\ \lambda_1 x_1 & \cdots & \lambda_n x_n\\ | & & | \end{pmatrix}
= \begin{pmatrix} | & & |\\ x_1 & \cdots & x_n\\ | & & | \end{pmatrix}
\begin{pmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{pmatrix}
\]
• In summary: $AP = PD$.

Let $A$ be $n \times n$ with independent eigenvectors $x_1, \dots, x_n$. Then $A$ can be diagonalized as $A = PDP^{-1}$.
• the columns of $P$ are the eigenvectors
• the diagonal matrix $D$ has the eigenvalues on the diagonal

Example 6. Diagonalize the following matrix, if possible.
\[
A = \begin{pmatrix} 2 & 0 & 0\\ -1 & 3 & 1\\ -1 & 1 & 3 \end{pmatrix}
\]
Solution.
• $A$ has eigenvalues $2$ and $4$. (We did that in an earlier class!)
  ◦ $\lambda = 2$:
\[
A - 2I = \begin{pmatrix} 0 & 0 & 0\\ -1 & 1 & 1\\ -1 & 1 & 1 \end{pmatrix},
\qquad
\text{eigenspace } \operatorname{span}\left(\begin{pmatrix} 1\\ 1\\ 0 \end{pmatrix}, \begin{pmatrix} 1\\ 0\\ 1 \end{pmatrix}\right)
\]
  ◦ $\lambda = 4$:
\[
A - 4I = \begin{pmatrix} -2 & 0 & 0\\ -1 & -1 & 1\\ -1 & 1 & -1 \end{pmatrix},
\qquad
\text{eigenspace } \operatorname{span}\left(\begin{pmatrix} 0\\ 1\\ 1 \end{pmatrix}\right)
\]
• $P = \begin{pmatrix} 1 & 1 & 0\\ 1 & 0 & 1\\ 0 & 1 & 1 \end{pmatrix}$ and $D = \begin{pmatrix} 2 & & \\ & 2 & \\ & & 4 \end{pmatrix}$
• $A = PDP^{-1}$. For many applications, it is not needed to compute $P^{-1}$ explicitly.
• We can check this by verifying $AP = PD$:
\[
\begin{pmatrix} 2 & 0 & 0\\ -1 & 3 & 1\\ -1 & 1 & 3 \end{pmatrix}
\begin{pmatrix} 1 & 1 & 0\\ 1 & 0 & 1\\ 0 & 1 & 1 \end{pmatrix}
= \begin{pmatrix} 1 & 1 & 0\\ 1 & 0 & 1\\ 0 & 1 & 1 \end{pmatrix}
\begin{pmatrix} 2 & & \\ & 2 & \\ & & 4 \end{pmatrix}
\]
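As an aside (not part of the notes), Example 6 can be double-checked numerically. The NumPy sketch below verifies $AP = PD$ and previews the key idea above: once $A = PDP^{-1}$ is known, the solution of the initial value problem from Example 5 can be assembled as $\boldsymbol{y}(t) = P\,e^{Dt}\,P^{-1}\boldsymbol{y}(0)$, where $e^{Dt}$ is diagonal with entries $e^{\lambda_i t}$.

```python
import numpy as np

# Matrix from Example 6 (the same A as in the ODE system of Example 5).
A = np.array([[ 2., 0., 0.],
              [-1., 3., 1.],
              [-1., 1., 3.]])

# Eigenvectors as columns of P, eigenvalues on the diagonal of D.
P = np.array([[1., 1., 0.],
              [1., 0., 1.],
              [0., 1., 1.]])
D = np.diag([2., 2., 4.])

# Verify the diagonalization by checking AP = PD (no P^{-1} needed here).
assert np.allclose(A @ P, P @ D)

# Solve y' = Ay, y(0) = (1, 0, 2) via y(t) = P e^{Dt} P^{-1} y(0).
y0 = np.array([1., 0., 2.])

def y_at(t):
    # np.linalg.solve(P, y0) computes P^{-1} y0 without forming P^{-1}.
    return P @ np.diag(np.exp(np.diag(D) * t)) @ np.linalg.solve(P, y0)

print(y_at(0.0))  # recovers the initial condition, approximately (1, 0, 2)
```

Evaluating `y_at` at $t = 0$ recovers $\boldsymbol{y}(0)$, and one can check (e.g. by a finite-difference quotient) that $\boldsymbol{y}'(t) \approx A\boldsymbol{y}(t)$ at other times.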