
Outline:
Solving Scalar Linear Systems: Iterative Approach
The Sparsity Pattern of a Loop Circuit Matrix for a Random Circuit (with 1000 closed loops)
gridcircuit.m
Matrix Due To Random Grid Circuit
The Limits of Factorization
Alternative Approach
Jacobi Iteration
Cleaning The Scheme Up
Couple of Iterations
Pseudo-Code For Jacobi Method
Matlab To The Rescue
Running The Jacobi Iteration Script
Try The Larger Systems
Run Time!
Convergence History
Increasing System Size
Ok – so I kind of broke the rules
Going Back To The Circuit System
Slight Modification of the Circuit
Gauss-Seidel
Cleaning The Scheme Up
First Iteration
Theorem First This Time!
Gauss-Seidel Algorithm
Comparing Jacobi and Gauss-Seidel
The Catch
Volunteer To Design A Parallel Version
Project 3 (serial part)

Solving Scalar Linear Systems: Iterative Approach
Lecture 15
MA/CS 471, Fall 2003

Some matlab scripts to construct various types of random circuit loop matrices are available at the class website.

The Sparsity Pattern of a Loop Circuit Matrix for a Random Circuit (with 1000 closed loops)
(Figure: spy plot of the loop circuit matrix produced by gridcircuit.m. Caption: an array of current loops with random resistors; resistors on the circuit boundary not shown.)

Matrix Due To Random Grid Circuit
(Figure: sparsity pattern of the grid circuit loop matrix.) Note the large amount of structure in the loop circuit matrix.

The Limits of Factorization
• In the last class/lab we began to see that there are limits to the size of linear system solvable with matrix factorization based methods.
• The storage cost for the loop current matrix built on a Cartesian circuit, stored as a sparse NxN matrix, is ~5*N.
• However, using LU (or Cholesky) with symmetric RCM reordering, the storage requirement is ~b*N (where b is the bandwidth of the reordered matrix), which is typically at least an order of magnitude larger than the storage required for the loop matrix itself.
• Also – memory spent on storing the matrices is memory we could have used for extra cells…

Alternative Approach
• We are going to pursue iterative methods which satisfy the equation Ax = b in an approximate way without an excessive amount of extra storage.
• There are a number of different classes of iterative methods; today we will discuss an example from the class of stationary methods.
• See: http://www.netlib.org/linalg/html_templates/node9.html

Jacobi Iteration
• Example system:
    5x + y  + 3z = 2
    3x + 6y + 2z = 3
    2x + 3y + 6z = 1
• Initial guess: x^0 = y^0 = z^0 = 0
• Algorithm:
    5x^{n+1} + y^n       + 3z^n       = 2
    3x^n     + 6y^{n+1}  + 2z^n       = 3
    2x^n     + 3y^n      + 6z^{n+1}   = 1
• i.e. for the i-th equation, compute the i-th degree of freedom using the values computed in the previous iteration.

Cleaning The Scheme Up
    x^{n+1} = (2 - y^n  - 3z^n)/5
    y^{n+1} = (3 - 3x^n - 2z^n)/6
    z^{n+1} = (1 - 2x^n - 3y^n)/6

Couple of Iterations
1st iteration:
    x^1 = (2 - y^0  - 3z^0)/5 = 2/5
    y^1 = (3 - 3x^0 - 2z^0)/6 = 1/2
    z^1 = (1 - 2x^0 - 3y^0)/6 = 1/6
2nd iteration:
    x^2 = (2 - y^1  - 3z^1)/5 = (2 - 1/2 - 1/2)/5 = 1/5
    y^2 = (3 - 3x^1 - 2z^1)/6 = (3 - 6/5 - 1/3)/6 = 22/90
    z^2 = (1 - 2x^1 - 3y^1)/6 = (1 - 4/5 - 3/2)/6 = -13/60

Pseudo-Code For Jacobi Method
1) Build A, b
2) Build modified A with the diagonal zeroed → Q
3) Set initial guess x = 0
4) do {
     a) compute:       xnew_i = ( b_i - Σ_{j=1}^{N} Q_{ij} x_j ) / A_{ii},  for i = 1,...,N
     b) compute error: err = max_{i=1,...,N} |xnew_i - x_i| / max_{i=1,...,N} |b_i|
     c) update x:      x_i = xnew_i for all i
   } while err > tol

Matlab To The Rescue
(The slide shows the MATLAB script for the Jacobi iteration.)

Running The Jacobi Iteration Script
• I ran the script with the stopping tolerance (tol) set to 1e-8.
• Note that the error is of the order of 1e-8!
• i.e. the Jacobi iterative method computes an approximate solution to the system of equations.

Try The Larger Systems
• First I made the script into a function which takes the rhs vector, system matrix and stopping tolerance as input.
• It returns the approximate solution and the residual history.
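The MATLAB source itself is not reproduced in this text, so the following is only a minimal sketch of such a function, reconstructed from the pseudo-code above and written to match the call [x,residuals] = jacobisolve(mat,b,1e-4) used on the Run Time! slide below. The iteration cap and the choice to record the pseudo-code's error measure as the "residual history" are my own assumptions, not taken from the slides.

    % Sketch only: a Jacobi solver following the pseudo-code above.
    % This is a reconstruction, not the course's jacobisolve.m.
    function [x, residuals] = jacobisolve(mat, b, tol)
      N = length(b);
      d = full(diag(mat));            % diagonal entries A_ii
      Q = mat;
      Q(1:N+1:end) = 0;               % Q = A with its diagonal zeroed
      x = zeros(N, 1);                % initial guess x = 0
      residuals = [];
      err = 2*tol;
      maxit = 100000;                 % safety cap (assumption, not from the slides)
      it = 0;
      while err > tol && it < maxit
        xnew = (b - Q*x)./d;          % xnew_i = (b_i - sum_j Q_ij x_j)/A_ii
        err = max(abs(xnew - x))/max(abs(b));
        residuals = [residuals; err]; % record the error history
        x = xnew;
        it = it + 1;
      end
    end

The update is written in vectorized form (Q*x) rather than as an explicit loop over i; both express the same formula, but the vectorized form is the natural MATLAB idiom.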
Run Time!
• First I built a circuit matrix using the gridcircuit.m script (with N=100).
• Then I built a random source vector for the right hand side.
• Then I called the jacobisolve routine with:
    [x,residuals] = jacobisolve(mat,b,1e-4);

Convergence History
(Figure: convergence history of the Jacobi run; the stopping criterion is marked.)

Increasing System Size
Notice how the number of iterations required grew with N.

Ok – so I kind of broke the rules
• We set up the Jacobi iteration but did not ask the question "when will the Jacobi iteration converge, and how fast?"
Definition: A matrix A is diagonally dominant if
    Σ_{j=1, j≠i}^{N} |A_{ij}| < |A_{ii}|   for each i = 1,...,N
Theorem: If the matrix A is diagonally dominant, then Ax = b has a unique solution x and the Jacobi iteration produces a sequence {x^n} which converges to x for any initial guess x^0.
Informally: The "more diagonally dominant" a matrix is, the faster it will converge… this holds some of the time.

Going Back To The Circuit System
• For the gridcircuit code I set up the resistors so that all the internal circuit loops shared all their resistors with other loops.
• The current balance law => for internal cells the total cell resistance = sum of resistances shared with neighboring cells…
• i.e. the rows of the loop matrix corresponding to internal cells sum to zero:
    Σ_{j=1, j≠i}^{N} |A_{ij}| = |A_{ii}|   for internal cells
• i.e. the matrix is only weakly diagonally dominant – and does not exactly satisfy the convergence criterion for the Jacobi iterative scheme.

Slight Modification of the Circuit
• I added an additional random resistor to each cell (i.e. increased the diagonal entry and did not change the off-diagonal entries).
• This modification ensures that the matrix is now strictly diagonally dominant.
(Figure: convergence history for the diagonally dominant system – notice the dramatic reduction in iteration count.)

Gauss-Seidel
• Example system (as before):
    5x + y  + 3z = 2
    3x + 6y + 2z = 3
    2x + 3y + 6z = 1
• Initial guess: x^0 = y^0 = z^0 = 0
• Algorithm:
    5x^{n+1} + y^n       + 3z^n       = 2
    3x^{n+1} + 6y^{n+1}  + 2z^n       = 3
    2x^{n+1} + 3y^{n+1}  + 6z^{n+1}   = 1
• i.e. for the i-th equation, compute the i-th degree of freedom using the values computed in the previous iteration and the new values just computed.

Cleaning The Scheme Up
    x^{n+1} = (2 - y^n      - 3z^n)/5
    y^{n+1} = (3 - 3x^{n+1} - 2z^n)/6
    z^{n+1} = (1 - 2x^{n+1} - 3y^{n+1})/6

First Iteration
1st iteration:
    x^1 = (2 - y^0  - 3z^0)/5
    y^1 = (3 - 3x^1 - 2z^0)/6
    z^1 = (1 - 2x^1 - 3y^1)/6
As soon as the new (n+1)-level values are computed, we use them in the following equations.

Theorem First This Time!
• So we should first ask the questions:
  1) When will the Gauss-Seidel iteration converge?
  2) How fast will it converge?
Definition: A matrix A is said to be positive definite if
    x^T A x > 0   for all nonzero x ∈ R^N
Theorem: If A is symmetric and positive definite, then the Gauss-Seidel iteration converges for any initial guess for x.
Unofficially: in some cases Gauss-Seidel will converge twice as fast as Jacobi.

Gauss-Seidel Algorithm
• We
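The rest of this slide is not included above. As a hedged illustration only (not the course's code), a minimal sketch of a Gauss-Seidel solver with the same interface as the jacobisolve sketch, implementing the scheme just described, might look like this; the function name gaussseidelsolve and the iteration cap are my own assumptions.

    % Sketch only: a Gauss-Seidel solver analogous to the jacobisolve sketch above.
    % The name gaussseidelsolve is assumed; it is not from the course material.
    function [x, residuals] = gaussseidelsolve(mat, b, tol)
      N = length(b);
      x = zeros(N, 1);                % initial guess x = 0
      residuals = [];
      err = 2*tol;
      maxit = 100000;                 % safety cap (assumption)
      it = 0;
      while err > tol && it < maxit
        xold = x;
        for i = 1:N
          % the newest values x(1:i-1) are used as soon as they are available
          x(i) = ( b(i) - mat(i,1:i-1)*x(1:i-1) - mat(i,i+1:N)*x(i+1:N) ) / mat(i,i);
        end
        err = max(abs(x - xold))/max(abs(b));
        residuals = [residuals; err];
        it = it + 1;
      end
    end

Note that the sweep over i cannot be collapsed into a single matrix-vector product the way the Jacobi update can, because each x(i) depends on entries updated earlier in the same sweep; this data dependence is also what makes a parallel version harder to design (see "The Catch" and "Volunteer To Design A Parallel Version" in the outline).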


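As a small usage example built only on the two sketches above (the comparison slides themselves are not reproduced here), one could compare iteration counts on the 3x3 example system:

    % Usage example (sketch): compare the two solvers on the example system.
    A = [5 1 3; 3 6 2; 2 3 6];
    b = [2; 3; 1];
    [xj, rj] = jacobisolve(A, b, 1e-8);
    [xg, rg] = gaussseidelsolve(A, b, 1e-8);
    fprintf('Jacobi: %d iterations, Gauss-Seidel: %d iterations\n', ...
            length(rj), length(rg));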