UCSD CSE 252C - Krylov Subspace Methods

Krylov Subspace Methods for the Eigenvalue Problem
Presented by: Sanjeev Kumar

Applications

We typically need only a few eigen (singular) pairs, and the matrices can be large and sparse. Examples:
- Solving a homogeneous system of linear equations A x = 0: the solution is the right singular vector of A corresponding to the smallest singular value.
- Principal component analysis: here we are interested in the eigenpairs corresponding to the few largest eigenvalues.
- Discretization of partial differential equations.
- Spectral image segmentation.

References

- Bai, Z., Demmel, J., Dongarra, J., Ruhe, A., and van der Vorst, H. (eds.), Templates for the Solution of Algebraic Eigenvalue Problems: A Practical Guide, SIAM, Philadelphia, PA, 2000. http://www.cs.utk.edu/~dongarra/etemplates/book.html
- Sorensen, D. C., Implicitly Restarted Arnoldi/Lanczos Methods for Large Scale Eigenvalue Calculations. http://techreports.larc.nasa.gov/icase/1996/icase-1996-40.pdf
- Meyer, C. D., Matrix Analysis and Applied Linear Algebra, SIAM. http://www.matrixanalysis.com/DownloadChapters.html
- Latini, M., The QR Algorithm: Past and Present. http://www.acm.caltech.edu/~mlatini/research/presentation-qr-feb04.pdf
- Shi, J. and Malik, J., "Normalized Cuts and Image Segmentation", IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 22, No. 8, August 2000.

Review: Eigenvalue and Eigenvector

If A x = λ x, where A ∈ R^(n×n), x is a nonzero vector, and λ is a scalar, then λ is an eigenvalue, x is an eigenvector, and (λ, x) is an eigenpair.

Review: Eigenvalue Decomposition (EVD)

A V = V D, where V = [x1, x2, …, xn] (the columns xi are eigenvectors) and D = diag(λ1, λ2, …, λn) (the λi are eigenvalues). In MATLAB: [V, D] = eig(A).

Review: Characteristic Equation

Eigenvalues are the roots of the polynomial equation

    det(A − λ I) = 0

where I is the n × n identity matrix and det(.) is the determinant of a matrix. This is a polynomial equation of degree n.

Review: Companion Matrix

The roots of a polynomial equation

    x^n + α_(n−1) x^(n−1) + … + α_1 x + α_0 = 0

are the eigenvalues of its companion matrix: the n × n matrix with ones on the subdiagonal and −α_0, …, −α_(n−1) down the last column (a MATLAB sketch appears after this section).

Review: Abel-Ruffini Theorem

Theorem: there is no algebraic formula for the roots of a general polynomial of degree greater than 4.
Consequence: unlike solving a linear system of equations, iteration is the only way to compute the eigenvalues of a general matrix.

Review: Eigenvector Expansion

Let (λi, xi) be the eigenpairs of a matrix A, and express any vector v as a linear combination of the eigenvectors:

    v = c1 x1 + … + cn xn

Successive multiplication by A then gives

    A v = λ1 c1 x1 + … + λn cn xn
    A^j v = λ1^j c1 x1 + … + λn^j cn xn

This will be useful later.

Problem Statement

Given a matrix A ∈ R^(n×n), find k eigenpairs whose eigenvalues have a property such as:
- largest (or smallest) absolute value
- largest (or smallest) real part
- nearest to a given scalar µ

Power Iteration

Basic iteration step:

    z_j = A q_(j−1),   q_j = z_j / ||z_j||

Analysis: let (λi, xi) be the eigenpairs of A, arranged in decreasing order of |λi|. If |λ1| > |λ2| and c1 ≠ 0, the eigenvector expansion above shows that q_j converges to the direction of x1.

Eigenvalue estimate: the Rayleigh quotient, λ1 ≈ q_j' A q_j.
Convergence rate (eigenvector): |λ2 / λ1| per iteration.

Disadvantages:
- very slow convergence if |λ1| ≈ |λ2|
- cannot find complex eigenvalues
- only finds the largest eigenvalue

(A MATLAB sketch appears after this section.)

Spectral Transformation

Let A ∈ C^(n×n) have eigenpair (λ, x), and let p(τ) and q(τ) be polynomials in τ.
- Polynomial transformation: p(A) has eigenpair (p(λ), x).
- Rational transformation: [q(A)]^(−1) p(A) has eigenpair (p(λ)/q(λ), x).
- Shift-invert: (A − µ I)^(−1) has eigenpair (1/(λ − µ), x).

Inverse Iteration

Use a prior estimate µ of the eigenvalue as the shift in the shift-invert transformation. Due to ill-conditioning, a linear solver is preferred to forming the inverse explicitly. Pre-factorize A − µ I to keep the per-iteration complexity low (see the sketch below).

Rayleigh Quotient Iteration

Use the current estimate of the eigenvalue as the shift.

Advantages: faster convergence, quadratic in general and cubic for Hermitian problems.
Disadvantages: per-iteration complexity is high, because the shift changes at every step and the shifted matrix must be re-factorized each time (see the sketch below).
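The next few blocks are minimal MATLAB sketches of the ideas above; the test matrices, iteration counts, and variable names are illustrative choices, not from the slides. First, the companion matrix: recovering the roots of p(x) = x^3 − 2x^2 − 5x + 6 = (x − 1)(x + 2)(x − 3) as eigenvalues.

    % Companion matrix of a monic cubic; the coefficients are illustrative.
    alpha = [-2, -5, 6];          % [alpha_2, alpha_1, alpha_0]
    n = numel(alpha);
    C = zeros(n);
    C(2:n+1:end) = 1;             % ones on the subdiagonal
    C(:, n) = -flip(alpha)';      % last column: -alpha_0, ..., -alpha_(n-1)
    disp(sort(eig(C))')           % prints -2  1  3

MATLAB's built-in roots() works the same way, computing the eigenvalues of a companion matrix.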
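Next, a sketch of the power iteration, assuming a random symmetric test matrix (which almost surely has a dominant eigenvalue) and an arbitrary fixed iteration count.

    % Power iteration: repeatedly multiply by A and renormalize.
    A = randn(50); A = (A + A') / 2;    % symmetric test matrix, real eigenvalues
    q = randn(50, 1); q = q / norm(q);
    for j = 1:200
        z = A * q;                      % basic step: multiply by A
        q = z / norm(z);                % renormalize
    end
    lambda = q' * A * q                 % Rayleigh quotient estimate of lambda_1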
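A sketch of inverse iteration with a fixed shift mu (an assumed prior estimate). The benefit of pre-factorization is visible in the loop: each pass costs only a permutation and two triangular solves.

    % Inverse iteration: shift-invert power iteration with a fixed shift.
    A = randn(50); A = (A + A') / 2;
    mu = 1.5;                             % prior estimate of the wanted eigenvalue
    [L, U, P] = lu(A - mu * eye(50));     % factorize A - mu*I once, outside the loop
    q = randn(50, 1); q = q / norm(q);
    for j = 1:50
        z = U \ (L \ (P * q));            % solve (A - mu*I) z = q; never form the inverse
        q = z / norm(z);
    end
    lambda = q' * A * q                   % eigenvalue of A nearest to mu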
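Finally for this part, a sketch of Rayleigh quotient iteration. Contrast it with the previous block: the shift rho changes at every step, so the solve cannot be pre-factorized, which is exactly the high per-iteration cost noted above. Near convergence the shifted matrix is close to singular and MATLAB may warn, but the computed direction is still the wanted eigenvector; which eigenpair it converges to depends on the starting vector.

    % Rayleigh quotient iteration: the shift is the current eigenvalue estimate.
    A = randn(6); A = (A + A') / 2;
    q = randn(6, 1); q = q / norm(q);
    rho = q' * A * q;                     % current eigenvalue estimate
    for j = 1:10
        z = (A - rho * eye(6)) \ q;       % fresh factorization at every step
        q = z / norm(z);
        rho = q' * A * q;                 % update the Rayleigh quotient
    end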
Subspace Iteration

Used for finding multiple eigenvalues simultaneously; a generalization of power iteration to multiple vectors. It needs better normalization than normalizing each vector individually, otherwise every vector converges to the dominant eigenvector x1 (see the sketch at the end of this document).

Start with Q_0 ∈ C^(n×k) whose columns are orthonormal. Iteration steps:
- Z_j = A Q_(j−1)
- orthonormalize: Z_j = X_j R_j, where the columns of X_j are orthonormal and R_j is upper triangular
- Q_j = X_j
- test for convergence

Convergence rate: |λ_(k+1) / λ_k|.

Upper Hessenberg Matrix

An upper Hessenberg matrix satisfies H(i, j) = 0 for i > j + 1; a Hermitian upper Hessenberg matrix is tridiagonal. The reduction A → H can be done by Householder reduction or by Givens rotations; both are O(n^3) in general.

Schur's Triangularization Theorem

For every A ∈ C^(n×n) there exist Q and R such that Q is a unitary matrix (not unique), R is an upper triangular matrix (not unique), and A Q = Q R. The diagonal elements of R are the eigenvalues of A.

In the strategy A → R → {λ}, the 2nd step is trivial but the 1st step is very difficult. In practice, we first reduce A to an upper Hessenberg matrix H: A → H → {λ}.

QR Iteration

Iteration steps:

    A_k = Q_k R_k
    A_(k+1) = R_k Q_k

Every iteration is a similarity transformation and hence preserves the eigenvalues:

    A_(k+1) = Q_k^H A_k Q_k

If the iteration converges, it converges to an upper triangular matrix A_∞ = R. It does not converge for matrices with distinct eigenvalues of equal modulus, e.g. a complex-conjugate pair or a ±λ pair of real eigenvalues (see the sketch at the end of this document).

Explicitly Shifted QR Iteration

Convert A to an upper Hessenberg matrix H using a similarity transformation, then iterate:

    H_k − α_k I = Q_k R_k
    H_(k+1) = R_k Q_k + α_k I

Efficiency considerations:
- Householder reduction is efficient for A → H
- the QR factorization of an upper Hessenberg matrix can be done efficiently using Givens rotations

The method is related to inverse power iteration with shift α_k.

Implicitly Shifted QR Iteration

Combines two complex-conjugate shifts. It can handle complex eigenvalues and eigenvectors using only real arithmetic, which increases efficiency.

Definitions

For A ∈ C^(n×n) and 0 ≠ b ∈ C^(n×1):
- { b, Ab, A^2 b, …, A^(j−1) b } is the Krylov sequence
- K_j = span{ b, Ab, …, A^(j−1) b } is the Krylov subspace
- K_(n×j) = ( b | Ab | … | A^(j−1) b ) is the Krylov matrix
- K_j = Range( K_(n×j) )

Krylov Subspace

Let A have n distinct eigenvalues λ1, …, λn with orthonormal eigenvectors x1, …, xn, which form an orthonormal basis for R^n. Any vector b ∈ R^n can then be written b = c1 x1 + … + cn xn. Let us analyze the structure of the Krylov matrix K_(n×j) = ( b | Ab | … | A^(j−1) b ):
- if ci ≠ 0 ∀ i, then Rank(K_(n×j)) = min(j, n)
- if the number of nonzero ci is m, then Rank(K_(n×j)) = min(j, m)

Basis for Krylov Subspace

The Krylov sequence forms a basis for the Krylov subspace, but it is an ill-conditioned one: by the eigenvector-expansion result above, the vectors A^j b increasingly line up with the dominant eigenvector. It is better to work with an orthonormal basis. The Lanczos algorithm builds an orthonormal basis for the Krylov subspace of a Hermitian matrix; the Arnoldi algorithm generalizes this to non-Hermitian matrices (see the sketch at the end of this document).
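As before, the closing blocks are minimal MATLAB sketches with illustrative test matrices and iteration counts. Subspace iteration first: the economy-size QR factorization is the "better normalization" mentioned above, and is what keeps the k columns from all collapsing onto x1.

    % Subspace iteration for the k dominant eigenvalues.
    A = randn(50); A = (A + A') / 2;
    k = 4;
    [Q, ~] = qr(randn(50, k), 0);         % Q_0 with orthonormal columns
    for j = 1:300
        Z = A * Q;                        % Z_j = A Q_(j-1)
        [Q, R] = qr(Z, 0);                % Z_j = X_j R_j; take Q_j = X_j
    end
    disp(sort(diag(Q' * A * Q))')         % Q'*A*Q is nearly diagonal at convergence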
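A sketch of unshifted QR iteration, applied to a symmetric matrix so that the equal-modulus failure mode above generically does not arise; the A → H reduction uses MATLAB's hess().

    % Unshifted QR iteration on an upper Hessenberg (here tridiagonal) matrix.
    A = randn(6); A = (A + A') / 2;       % symmetric: real eigenvalues
    H = hess(A);                          % similarity reduction A -> H
    for k = 1:500
        [Q, R] = qr(H);                   % H_k = Q_k R_k
        H = R * Q;                        % H_(k+1) = R_k Q_k, similar to H_k
    end
    disp(sort(diag(H))')                  % diagonal converges to the eigenvalues of A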
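And a sketch of the Arnoldi process: it builds an orthonormal basis Q for the Krylov subspace by modified Gram-Schmidt rather than using the ill-conditioned vectors b, Ab, …, A^(j−1) b directly. The subspace dimension j and the test matrix are assumptions for illustration.

    % Arnoldi: orthonormal Krylov basis plus the projected Hessenberg matrix H.
    A = randn(50); b = randn(50, 1);
    j = 8;                                % dimension of the Krylov subspace
    Q = zeros(50, j); H = zeros(j, j-1);
    Q(:, 1) = b / norm(b);
    for m = 1:j-1
        v = A * Q(:, m);                  % next Krylov direction
        for i = 1:m
            H(i, m) = Q(:, i)' * v;       % component along an existing basis vector
            v = v - H(i, m) * Q(:, i);    % remove that component
        end
        H(m+1, m) = norm(v);              % zero here would mean K_m is A-invariant
        Q(:, m+1) = v / H(m+1, m);        % new orthonormal basis vector
    end
    ritz = eig(H(1:j-1, 1:j-1))           % Ritz values: estimates of eigenvalues of A

For a Hermitian matrix the inner loop reduces to a three-term recurrence, which is the Lanczos algorithm.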

