

FINDING RIGHTMOST EIGENVALUES OF LARGE SPARSE NONSYMMETRIC PARAMETERIZED EIGENVALUE PROBLEMS

AMSC 663 & 664 Final Report

Minghao Wu
Department of Mathematics, University of Maryland, College Park

Advisor: Dr. Howard Elman
Department of Computer Science, University of Maryland, College Park

Abstract

The goal of this report is to explore numerical algorithms for finding the rightmost eigenvalues of large sparse nonsymmetric parameterized eigenvalue problems, and to study how the rightmost eigenvalues vary as a parameter changes. We implemented the Arnoldi algorithm (both exact and inexact) and the Implicitly Restarted Arnoldi algorithm with the shift-invert transformation to reproduce computational results for several test problems from the literature. We also applied the algorithms to the study of the stability of dynamical equilibria.

Introduction

Consider the eigenvalue problem

    A x = λ B x,                                          (1)

where A and B are large sparse nonsymmetric real N×N matrices that, in applications, usually depend on one or several parameters. We are interested in computing the rightmost eigenvalues of (1), namely the eigenvalues with the largest real parts. The motivation lies in determining the stability of steady-state solutions of nonlinear systems of the form

    B du/dt = f(u),    f: R^N → R^N,    u ∈ R^N,          (2)

with large N, where u represents a state variable (velocity, pressure, temperature, etc.) and B is often called the mass matrix. Define the Jacobian matrix at the steady state u* by A = df/du(u*); then u* is stable if all the eigenvalues of (1) have negative real parts. Typically, f arises from the spatial discretization of a PDE (partial differential equation). When finite differences are used to discretize the PDE, then often B = I and (1) is called a standard eigenvalue problem. If the equations are discretized by finite elements, then the mass matrix B ≠ I and (1) is called a generalized eigenvalue problem. For problems arising from fluid mechanics, B is often singular [1].

Besides the numerical algorithms for computing the rightmost eigenvalues of (1), we are also interested in the effect of parameters on the stability of steady-state solutions. In particular, we want to find the critical parameter value at which the rightmost eigenvalues cross the imaginary axis, indicating that the steady-state solution becomes unstable. This is an important part of the study of bifurcations of dynamical equilibria.

The plan of this report is as follows: the first part describes and explains the algorithms we implemented; the second part presents computational results, together with some technical details, for several test problems.

Methodology

Eigenvalue Solvers

Since both A and B are large and sparse, direct methods such as the QZ algorithm for the generalized problem and the QR algorithm for the standard problem are not feasible. A more efficient approach is to transform (1) into a standard eigenvalue problem Tx = θx and solve it by iterative methods such as the Arnoldi algorithm, subspace iteration, and Lanczos' method [1]. Another reason iterative methods are better suited to our problem is that we only need the rightmost part of the spectrum rather than the complete spectrum. The eigenvalue solvers we implemented are the Arnoldi algorithm and its variants, such as the Implicitly Restarted Arnoldi (IRA) algorithm.
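As an illustration of the shift-invert transformation mentioned in the abstract, the following Python/SciPy sketch applies T = (A - σB)^{-1} B through a sparse LU factorization instead of forming T explicitly. This is a hedged sketch, not the code used for this report; the helper name shift_invert_operator, the shift variable sigma, and the assumption that A and B are SciPy sparse matrices are mine.

    import scipy.sparse.linalg as spla

    def shift_invert_operator(A, B, sigma):
        """Return a LinearOperator that applies T = (A - sigma*B)^{-1} B.

        If theta is an eigenvalue of T, then lambda = sigma + 1/theta solves
        A x = lambda B x, so eigenvalues of (1) near the shift sigma become
        the largest-magnitude eigenvalues of T, which Arnoldi-type methods
        find first.
        """
        lu = spla.splu((A - sigma * B).tocsc())   # factor once, reuse for every matvec
        n = A.shape[0]
        return spla.LinearOperator((n, n), matvec=lambda x: lu.solve(B @ x))

    # Possible use with an Arnoldi-type solver:
    #   T = shift_invert_operator(A, B, sigma=0.0)
    #   theta, y = spla.eigs(T, k=10, which='LM')
    #   candidates = 0.0 + 1.0 / theta            # map back to eigenvalues of (1)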
1. Basic Arnoldi Algorithm

The Arnoldi algorithm is an iterative eigensolver based on Krylov subspaces. Given a matrix A and a random vector u_1 of unit norm, the k-dimensional Krylov subspace is the subspace spanned by the columns of the matrix

    K_k(A, u_1) = [u_1  A u_1  A^2 u_1  …  A^(k-1) u_1],

provided that these columns are linearly independent. k+1 steps of the Arnoldi algorithm give the decomposition

    A U_k = U_k H_k + β_k u_{k+1} e_k^T,                           (3)

where the columns of U_k form an orthonormal basis of the k-dimensional Krylov subspace, u_{k+1} is a unit vector orthogonal to the columns of U_k, H_k is a k×k upper Hessenberg matrix, β_k is a scalar, and e_k is the k×1 vector [0 0 … 0 1]^T. We usually choose k such that k << N (recall that the matrices A and B are N×N). Left-multiplying (3) by U_k^T gives

    U_k^T A U_k = U_k^T U_k H_k + β_k U_k^T u_{k+1} e_k^T = H_k.   (4)

Therefore, H_k is the projection of A onto the k-dimensional Krylov subspace. We hope that the eigenvalues of H_k are good approximations of those of A, so that we can solve a much smaller eigenvalue problem instead. Suppose (λ, z) is an eigenpair of H_k. Right-multiplying (4) by z gives

    U_k^T A (U_k z) = H_k z = λ z = λ (U_k^T U_k) z = U_k^T λ (U_k z).   (5)

We can view (λ, U_k z) as an approximate eigenpair of A with residual A(U_k z) − λ(U_k z), and (5) implies

    U_k^T (A(U_k z) − λ(U_k z)) = 0,

so the residual is always orthogonal to the k-dimensional Krylov subspace. Fig. 1 illustrates this relation.

Fig. 1. The relation among A(U_k z), λ(U_k z), and the residual between them.

As k increases, the residual decreases, and in the limiting case k = N we have A(U_k z) = λ(U_k z), so (λ, U_k z) is an exact eigenpair of A. However, applying the basic algorithm naively to our problem is not a good idea, for two main reasons: (1) since we are very often dealing with large matrices, increasing k to improve the performance of the Arnoldi algorithm is not practical; for example, if A is of size 10,000×10,000 and k = 100, we need 10 megabytes to store the Krylov basis in double precision [2]; (2) in problems arising from fluid mechanics, the mass matrix B is singular, which gives rise to spurious eigenvalues unless something clever is done. Many variants of the Arnoldi algorithm therefore restart the basic iteration with a more carefully chosen starting vector u_1. We implemented one of them, the Implicitly Restarted Arnoldi (IRA) algorithm.

2. Implicitly Restarted Arnoldi (IRA) Algorithm

The basic idea of IRA is to filter unwanted eigendirections out of the original starting vector u_1, using the most recent spectral information together with a clever filtering polynomial, so that the restarted iteration concentrates on the wanted part of the spectrum.
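For concreteness, here is a minimal Python sketch of the k-step Arnoldi iteration behind decomposition (3) and the Ritz pairs in (5). It is an illustrative reimplementation under stated assumptions, not the code used for the experiments in this report: A_matvec stands for multiplication by A (or by a shift-invert operator T), u1 is the starting vector, and the helper name arnoldi is hypothetical.

    import numpy as np

    def arnoldi(A_matvec, u1, k):
        """Arnoldi iteration with modified Gram-Schmidt orthogonalization.

        Returns U of size N x (k+1) with orthonormal columns and the
        (k+1) x k upper Hessenberg matrix H, so that H_k = H[:k, :k],
        beta_k = H[k, k-1], and u_{k+1} = U[:, k] in decomposition (3).
        """
        N = u1.shape[0]
        U = np.zeros((N, k + 1), dtype=complex)
        H = np.zeros((k + 1, k), dtype=complex)
        U[:, 0] = u1 / np.linalg.norm(u1)
        for j in range(k):
            w = A_matvec(U[:, j])
            for i in range(j + 1):              # orthogonalize against previous basis vectors
                H[i, j] = np.vdot(U[:, i], w)
                w = w - H[i, j] * U[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] == 0.0:              # "happy breakdown": an invariant subspace was found
                break
            U[:, j + 1] = w / H[j + 1, j]
        return U, H

    # Ritz pairs, as in (5): an eigenpair (theta, z) of H_k gives the
    # approximate eigenpair (theta, U_k z) of A, whose residual is
    # orthogonal to the Krylov subspace.
    #   theta, Z = np.linalg.eig(H[:k, :k])
    #   ritz_vectors = U[:, :k] @ Z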

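In practice, IRA with the shift-invert transformation is available through ARPACK, the library wrapped by scipy.sparse.linalg.eigs, which implements the Implicitly Restarted Arnoldi method. The sketch below is a usage illustration only, not the report's own driver: the helper name rightmost_eigenvalues, the default shift, and the value of k are assumptions, and ARPACK's generalized shift-invert mode additionally expects B to be symmetric positive semi-definite.

    import scipy.sparse.linalg as spla

    def rightmost_eigenvalues(A, B, sigma=0.0, k=10):
        """Approximate the k eigenvalues of A x = lambda B x closest to sigma.

        With sigma given, eigs runs ARPACK in shift-invert mode, and
        which='LM' then selects the eigenvalues of (1) nearest the shift.
        Placing sigma near the imaginary axis typically captures the
        rightmost eigenvalues that govern stability of the steady state.
        """
        vals, vecs = spla.eigs(A, k=k, M=B, sigma=sigma, which='LM')
        return vals, vecs

    # Stability check for the steady state u* (see the Introduction): u* is
    # judged stable if every computed eigenvalue has negative real part.
    #   vals, _ = rightmost_eigenvalues(A, B, sigma=0.0)
    #   stable = bool((vals.real < 0).all())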
