
AMSC/CMSC 460 Computational Methods, Fall 2007
UNIT 4, Part 2: Linear Systems of Equations
Dianne P. O'Leary
© 2001, 2002, 2007

Solution of Linear Systems

Read: Chapter 6. Skip: 6.2.

The plan
• Motivation
• Strategy
• Algorithms
• Analysis

Motivation

When are linear systems used?

Notation:
\[ A x = b, \qquad A : n \times n, \quad x : n \times 1, \quad b : n \times 1. \]
The elements are real or complex numbers. The problem is to find x, given A and b.

Why study this problem?
• Mathematical models frequently yield linear systems.
• Linear systems are subproblems in
  – determining weights in numerical integration.
  – solving nonlinear equations (see Chapter 8).
  – solving partial differential equations.
  – etc.

One large government lab reported that 75% of the calls to numerical software libraries were calls to linear system solvers!

Clearly, it is important to be able to solve linear systems quickly and accurately!

Strategy
• Exploit the structure of the problem.
  Example: If A has lots of zeros, we don't want to replace them by nonzeros.
• Reduce the problem to one that has an obvious solution.
• Choose algorithms that have reasonable costs.
• Be able to evaluate the quality of our answer.

One algorithm: Cramer's rule

Suppose that we have 2 equations and 2 variables. Then
\[
x_1 = \frac{\begin{vmatrix} b_1 & a_{12} \\ b_2 & a_{22} \end{vmatrix}}
           {\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}}, \qquad
x_2 = \frac{\begin{vmatrix} a_{11} & b_1 \\ a_{21} & b_2 \end{vmatrix}}
           {\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}}.
\]
If we apply this algorithm to a problem with n = 20, it would take about 3 × 10^8 years to solve it on a computer.

Problems with n = several million are important in airline scheduling, stock market analysis, weather prediction, power plant safety, traffic analysis, etc. And Google searches make use of a matrix with size n ≈ 3.5 billion.

Perhaps we should look for another algorithm....

A second algorithm: use the matrix inverse

We can solve Ax = b by multiplying both sides of the equation by A^{-1}:
\[ A^{-1} A x = x = A^{-1} b. \]
Therefore, we can solve linear systems by multiplying the right-hand side b by A^{-1}.

This is a BAD idea.
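For concreteness, the 2 × 2 Cramer's rule displayed above can be written out directly. The following Python sketch is my addition, not part of the notes (the function name is made up); it works fine at n = 2, but the cost of expanding determinants this way explodes as n grows, which is why the rule is hopeless at n = 20:

```python
# Illustrative sketch only (not from the notes): Cramer's rule for a 2x2 system
#   [a11 a12] [x1]   [b1]
#   [a21 a22] [x2] = [b2]
def cramer_2x2(a11, a12, a21, a22, b1, b2):
    """Solve a 2x2 linear system by Cramer's rule (hypothetical helper)."""
    det = a11 * a22 - a12 * a21          # determinant of A; must be nonzero
    x1 = (b1 * a22 - a12 * b2) / det     # numerator: replace column 1 by b
    x2 = (a11 * b2 - b1 * a21) / det     # numerator: replace column 2 by b
    return x1, x2

# e.g. 2*x1 + x2 = 5 and x1 + 3*x2 = 10 has solution x1 = 1, x2 = 3
print(cramer_2x2(2, 1, 1, 3, 5, 10))
```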
It is more expensive than the algorithms we will discuss (the LU factorization), and it generally computes an answer that has larger error.

Whenever you see a matrix inverse in a formula, think "LU factorization".

Computing the inverse of a matrix is almost ALWAYS a bad idea!!!

What problems are easy? 2 examples.

Example 1: Suppose A is diagonal:
\[
\begin{bmatrix} 2 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 5 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} =
\begin{bmatrix} 5 \\ 6 \\ 7 \end{bmatrix}.
\]
Then
\[ x = \begin{bmatrix} 5/2 \\ 6/3 \\ 7/5 \end{bmatrix}. \]

Example 2: Suppose A is triangular:
\[
\begin{bmatrix} 2 & 0 & 0 \\ 1 & 3 & 0 \\ 2 & 1 & 5 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} =
\begin{bmatrix} 2 \\ 7 \\ -1 \end{bmatrix}.
\]
Then the first equation tells us that 2 x_1 = 2, so x_1 = 1.

The second equation says x_1 + 3 x_2 = 7. Knowing that x_1 = 1, we see that 3 x_2 = 7 − 1, so x_2 = 2.

Now that we know x_1 and x_2, the third equation tells us that 5 x_3 = −1 − 2 x_1 − x_2, so x_3 = −1.

So our solution is
\[ x = \begin{bmatrix} 1 \\ 2 \\ -1 \end{bmatrix}. \]

Unquiz: Write two versions of the forward substitution algorithm for solving a lower triangular system Lx = b.
• In the first, use sdot.
• In the second, use saxpy.

Note that it is just as easy to solve upper triangular systems:
\[ A = \begin{bmatrix} \times & \times & \times \\ 0 & \times & \times \\ 0 & 0 & \times \end{bmatrix}. \]

Algorithms

The LU factorization and Gauss elimination

If we could always reduce our problem to triangular form, we would be finished! Our tool for this reduction is Gauss elimination and the related LU factorization. See powerpoint example.

Can we always reduce a matrix to triangular form?

Example:
\[
\begin{bmatrix} 0 & 4 & 4 \\ 4 & 0 & 2 \\ 2 & 0 & 2 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} =
\begin{bmatrix} 4 \\ 2 \\ 0 \end{bmatrix},
\]
with solution x = [1, 2, −1]^T.

Unquiz: Apply Gauss elimination to this problem.

A complication

We see that Gauss elimination breaks down if the pivot element (the main diagonal element a_{jj} at stage j) is zero. What happens if it is nearly zero?

Example:
\[
\begin{bmatrix} 10^{-6} & 4 & 4 \\ 4 & 0 & 2 \\ 2 & 0 & 2 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} =
\begin{bmatrix} 4 + 10^{-6} \\ 2 \\ 0 \end{bmatrix},
\]
with solution x = [1, 2, −1]^T.

Unquiz: Apply Gauss elimination to this problem.

The fix

We have trouble with Gauss elimination if the pivot element is close to zero. How can we guarantee that this never happens?

Note that we have one extra tool available to us: if we interchange two rows of the problem, we don't change the answer:
\[
\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} =
\begin{bmatrix} 5 \\ 6 \end{bmatrix}
\quad \leftrightarrow \quad
\begin{bmatrix} 3 & 4 \\ 1 & 2 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} =
\begin{bmatrix} 6 \\ 5 \end{bmatrix}.
\]
So, to keep out of trouble, we choose the pivot element at stage j to be the largest magnitude
element among those remaining in column j on or below the main diagonal.

This algorithm is called Gauss elimination with (column) pivoting.

Unquiz: Try it on our example:
\[
\begin{bmatrix} 0 & 4 & 4 \\ 4 & 0 & 2 \\ 2 & 0 & 2 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} =
\begin{bmatrix} 4 \\ 2 \\ 0 \end{bmatrix},
\]
with solution x = [1, 2, −1]^T.

Implementation

We can implement this algorithm by
• overwriting A with L and U.
• keeping track of the pivot interchanges in an index array.

Some things to know about Gauss elimination
• Operations count: n^3/3 multiplications.
• Matlab's backslash operator solves linear systems, using LU, without forming the inverse:
      x = A \ b;
• Matlab also has an lu function that returns the L and U factors:
      [L,U] = lu(A);
      y = L \ b;
      x = U \ y;
  The matrix U is upper triangular, and L is a permutation of a lower triangular matrix.
• If you have k right-hand sides involving the same matrix, store them as columns in a matrix B of size n × k and then solve using, for example,
      X = A \ B;
  or
      [L,U] = lu(A);
      for i=1:k,
        y = L \ B(:,i);
        X(:,i) = U \ y;
      end

What about sparsity?

If A has lots of zeros, we would like our algorithms to take advantage of this, and not to ruin the structure by introducing many nonzeros. If A is initialized as a sparse matrix in Matlab, then backslash and the lu algorithm both try to preserve sparsity.

Analysis

How good are our answers?

Suppose we solve a linear system on the computer. How do we measure how good our answer is?

Due to round-off error, we probably do not compute the exact answer x_true. We would like our computed x to be close to x_true, but since x_true is unknown, we can't measure the distance. Instead, we settle for computing a bound on the distance.

A motivating example

Example 1: Suppose $\delta < 0.5\,\epsilon_{\rm mach}$ and
\[
\begin{bmatrix} \delta & 1 \\ 1 & 1 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} =
\begin{bmatrix} 1 \\ 0 \end{bmatrix}.
\]
If we solve this system without pivoting, we'll get
\[
\begin{bmatrix} \delta & 1 \\ 0 & -1/\delta \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} =
\begin{bmatrix} 1 \\ -1/\delta \end{bmatrix}
\]
(in floating point, the exact entry $1 - 1/\delta$ rounds to $-1/\delta$ because $\delta < 0.5\,\epsilon_{\rm mach}$), so x_2 = 1, x_1 = 0.

The true solution is
\[ x = \begin{bmatrix} -\frac{1}{1-\delta} \\[4pt] \frac{1}{1-\delta} \end{bmatrix}, \]
so our answer is very bad.
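The catastrophe is easy to reproduce. The following Python sketch is my addition, not part of the notes; it simply mimics the elimination arithmetic above in IEEE double precision, step by step:

```python
# Illustrative sketch (not from the notes): Gauss elimination WITHOUT pivoting
# on the 2x2 system  [delta 1; 1 1] x = [1; 0],  with delta < 0.5*eps_mach.
delta = 1e-17            # smaller than 0.5 * machine epsilon (~1.1e-16)

# eliminate: row2 <- row2 - (1/delta) * row1
m = 1.0 / delta          # huge multiplier caused by the tiny pivot
a22 = 1.0 - m * 1.0      # exact value is 1 - 1/delta; the 1 is lost to rounding
b2 = 0.0 - m * 1.0

# back substitution on the (rounded) triangular system
x2 = b2 / a22
x1 = (1.0 - 1.0 * x2) / delta

print(x1, x2)            # computed x1 = 0.0, x2 = 1.0, but x1 should be near -1
```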
This is because we used an unstable algorithm: Gauss elimination without pivoting.

If we use pivoting, our answer improves: the linear system
\[
\begin{bmatrix} \delta & 1 \\ 1 & 1 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} =
\begin{bmatrix} 1 \\ 0 \end{bmatrix}
\]
is rewritten as
\[
\begin{bmatrix} 1 & 1 \\ \delta & 1 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} =
\begin{bmatrix} 0 \\ 1 \end{bmatrix},
\]
so the elimination gives us
\[
\begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} =
\begin{bmatrix} 0 \\ 1 \end{bmatrix},
\]
so x_2 = 1, x_1 = −1.

This is quite close to the true solution!

In fact, we can calculate the residual, the remainder that we get when we substitute our computed solution back into the equations.
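The pivoted elimination just described, together with the residual, can be mimicked in a few lines. The following Python sketch is my addition, not part of the notes; it solves the row-interchanged system in double precision and then substitutes the computed x back into the original equations:

```python
# Illustrative sketch (not from the notes): the same 2x2 system solved WITH
# pivoting, i.e. rows interchanged so the pivot is 1 rather than delta.
delta = 1e-17            # smaller than 0.5 * machine epsilon (~1.1e-16)

# pivoted system: [1 1; delta 1] x = [0; 1]
# eliminate: row2 <- row2 - delta * row1  (tiny, harmless multiplier)
a22 = 1.0 - delta * 1.0  # rounds to 1.0, losing only the negligible delta
b2 = 1.0 - delta * 0.0

# back substitution
x2 = b2 / a22
x1 = 0.0 - 1.0 * x2      # from row 1: x1 + x2 = 0

# residual r = b - A*x for the ORIGINAL system [delta 1; 1 1] x = [1; 0]
r1 = 1.0 - (delta * x1 + 1.0 * x2)
r2 = 0.0 - (1.0 * x1 + 1.0 * x2)
print(x1, x2, r1, r2)    # x = (-1, 1), close to the true solution; residual 0
```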
