ITPACK 2C: A FORTRAN Package for Solving Large Sparse Linear Systems by Adaptive Accelerated Iterative Methods*

David R. Kincaid, John R. Respess, and David M. Young
University of Texas at Austin†

Roger G. Grimes
Boeing Computer Services Company‡

April 2, 2002

* Work on this project was supported in part by National Science Foundation Grant MCS 79-19829 at The University of Texas at Austin.
† Center for Numerical Analysis, RLM Bldg. 13.150, University of Texas, Austin, TX 78712
‡ Boeing Computer Services Company, 565 Andover Park West, Tukwila, WA 98067

Abstract

ITPACK 2C is a collection of seven FORTRAN subroutines for solving large sparse linear systems by adaptive accelerated iterative algorithms. Basic iterative procedures, such as the Jacobi method, the Successive Overrelaxation method, the Symmetric Successive Overrelaxation method, and the RS method for the reduced system, are combined, where possible, with acceleration procedures such as Chebyshev (Semi-Iteration) and Conjugate Gradient for rapid convergence. Automatic selection of the acceleration parameters and the use of accurate stopping criteria are major features of this software package. While the ITPACK routines can be called with any linear system containing positive diagonal elements, they are most successful in solving systems with symmetric positive definite or mildly nonsymmetric coefficient matrices.

1 Introduction

For several years, we have been involved with the development and use of research-oriented programs using iterative algorithms for solving large sparse linear systems

    Au = b

with positive diagonal elements. One solves for the N-component unknown vector u given the N × N nonsingular coefficient matrix A and the N-component right-hand side vector b. The current ITPACK software package of subroutines, version 2C, provides for the use of seven alternative iterative procedures. While these subroutines are not designed as production software, they should successfully handle industrial problems of moderate size, that is, ones that fit in high-speed memory. This package is written in standard FORTRAN-66 code. It has been tested on a wide variety of computer systems using various FORTRAN compilers, including one which is FORTRAN-77 compatible (see Acknowledgements).

The seven iterative solution modules are based on several basic iterative procedures, such as the Jacobi method, the Successive Overrelaxation (SOR) method, the Symmetric SOR (SSOR) method, and the RS method for the reduced system. With the exception of SOR, the convergence of these basic methods is accelerated by Chebyshev (Semi-Iteration, SI) or Conjugate Gradient (CG) acceleration. All methods are available with adaptive parameter estimation and automatic stopping tests. When using the RS method, the linear system must be reordered into a "red-black"(1) system [6, 12]. A switch is provided to compute, if possible, the red-black indexing, permute the linear system, and permute associated vectors.

The successful convergence of iterative methods may depend on conditions that are difficult to determine in advance. For example, determining whether the coefficient matrix is positive definite can be as costly as solving the system itself. On the other hand, some conditions affecting convergence, such as positive diagonal elements, diagonal dominance, and symmetry, are relatively easy to verify. For some applications, the theory may not exist to guarantee the convergence of an iterative method.
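The inexpensive checks just mentioned (positive diagonal entries, diagonal dominance, symmetry) are simple to code. The following is a minimal sketch, not part of ITPACK; the routine name DIAGDM and its dense-storage interface are assumed here purely for illustration. It tests an N × N matrix for positive diagonal entries and weak diagonal dominance.

      LOGICAL FUNCTION DIAGDM(N, LDA, A)
C     Illustrative sketch only, not part of ITPACK: returns .TRUE.
C     when the dense N x N matrix A has positive diagonal entries
C     and is weakly diagonally dominant in every row.
      INTEGER N, LDA
      DOUBLE PRECISION A(LDA,N)
      INTEGER I, J
      DOUBLE PRECISION S
      DIAGDM = .FALSE.
      DO 20 I = 1, N
C        Each diagonal entry must be positive.
         IF (A(I,I) .LE. 0.0D0) RETURN
C        Sum the magnitudes of the off-diagonal entries in row I.
         S = 0.0D0
         DO 10 J = 1, N
            IF (J .NE. I) S = S + DABS(A(I,J))
   10    CONTINUE
C        Row I must satisfy A(I,I) >= S for weak dominance.
         IF (A(I,I) .LT. S) RETURN
   20 CONTINUE
      DIAGDM = .TRUE.
      END

Verifying positive definiteness, by contrast, would require something on the order of a factorization, which is comparable in cost to solving the system.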
The algorithms in ITPACK have been tested most extensively for linear systems arising from elliptic partial differential equations. The routines can be applied, formally, to any linear system which fits in high-speed memory. However, rapid convergence, and indeed convergence itself, cannot be guaranteed unless the matrix of the system is symmetric and positive definite. Success can be expected, though not guaranteed, for mildly nonsymmetric systems. In other words, iterative methods may not converge when applied to systems whose coefficient matrices are completely general, with no special properties.

This article discusses the usage of ITPACK and gives a few test results. The description of the iterative methods is given in [4]. The underlying theory on which the iterative algorithms are based is described in [6]. A survey of the iterative methods in ITPACK is presented in [11].

Throughout this paper, we adopt notation such as SOR() when referring to a subroutine and A(*) for a single-dimensioned array. The residual vector is b − Au^(n) for the linear system Au = b, and the pseudo-residual vector is Gu^(n) + k − u^(n) for a basic iterative method of the form u^(n+1) = Gu^(n) + k. The smallest and largest eigenvalues of the iteration matrix G are denoted m(G) and M(G), respectively.

(1) In this ordering, the components of the unknown vector u are considered as either "red" or "black". A "red-black ordering" is any ordering such that every black unknown follows all of the red unknowns. This ordering of unknowns leads to a 2 × 2 "red-black partitioning" of the coefficient matrix, that is, a matrix of the form

    [ D_R   H  ]
    [ K    D_B ]

with diagonal submatrices D_R and D_B. The original linear system may require rearrangement in order to arrive at this form.

2 Sparse Matrix Storage

The sparse storage scheme used in ITPACK is a common one. It is a row-wise representation of the nonzero entries in the coefficient matrix of the linear system. For a nonsymmetric coefficient matrix, all of the nonzero values in each row are stored in a contiguous block of data in a real-valued array A(*). If the matrix is symmetric, computer memory can be saved by storing only the nonzero entries in each row on and above the main diagonal. For either nonsymmetric or symmetric sparse storage, associated column numbers are stored in an integer-valued array JA(*) such that JA(K) is the column number for the value A(K). A mapping vector IA(*) is used to denote the starting locations of each of the contiguous blocks. The beginning of the block for row I is given by IA(I), the end by IA(I + 1) − 1, and its length by IA(I + 1) − IA(I). Thus, IA(*) will contain N + 1 elements to accommodate a linear system of order N. The entries for each row may be stored in any order within the contiguous block for that row.

For example, the coefficient matrix

    [ 11.   0.   0.  14.  15. ]
    [  0.  22.   0.   0.   0. ]
    [  0.   0.  33.   0.   0. ]
    [ 14.   0.   0.  44.  45. ]
    [ 15.   0.   0.  45.  55. ]

would be represented in nonsymmetric sparse storage as

    A(*) = (11., 14., 15., 22., 33., 14., 44., 45., 15., 45., 55.)
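The preview of the report stops at this point. As a minimal sketch worked out from the storage rules above (the program name SPDEMO, the test vector X, and the JA(*) and IA(*) values shown in the DATA statements are derived here for illustration, not quoted from the paper), the following program holds this example matrix in nonsymmetric sparse storage and multiplies it by a vector of ones:

      PROGRAM SPDEMO
C     Illustrative sketch only, not part of ITPACK: the 5 x 5 example
C     matrix above, held in row-wise nonsymmetric sparse storage, is
C     multiplied by a vector X of ones.
      INTEGER N, NZ
      PARAMETER (N = 5, NZ = 11)
      DOUBLE PRECISION A(NZ), X(N), Y(N)
      INTEGER JA(NZ), IA(N+1)
      INTEGER I, K
C     Nonzero values, stored row by row.
      DATA A  /11.0D0, 14.0D0, 15.0D0, 22.0D0, 33.0D0, 14.0D0,
     &         44.0D0, 45.0D0, 15.0D0, 45.0D0, 55.0D0/
C     JA(K) is the column number of the value A(K).
      DATA JA /1, 4, 5, 2, 3, 1, 4, 5, 1, 4, 5/
C     Row I occupies positions IA(I), ..., IA(I+1)-1 of A and JA.
      DATA IA /1, 4, 5, 6, 9, 12/
      DATA X  /5*1.0D0/
C     Form Y = A*X one row at a time using the sparse structure.
      DO 20 I = 1, N
         Y(I) = 0.0D0
         DO 10 K = IA(I), IA(I+1) - 1
            Y(I) = Y(I) + A(K)*X(JA(K))
   10    CONTINUE
   20 CONTINUE
      WRITE (*,*) (Y(I), I = 1, N)
      END

Under the symmetric storage variant described above, only the entries on and above the main diagonal would be kept, with correspondingly shorter A(*), JA(*), and IA(*) arrays.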

