Engineering Analysis ENG 3420, Fall 2009
Dan C. Marinescu
Office: HEC 439 B
Office hours: Tu, Th 11:00-12:00

Lecture 13
- Last time: problem solving in preparation for the quiz; linear algebra concepts: vector spaces, linear independence, orthogonal vectors, bases, matrices
- Today: solving systems of linear equations (Chapter 9); graphical methods
- Next time: Gauss elimination

Solving systems of linear equations
Matrices provide a concise notation for representing and solving simultaneous linear equations:
\[
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + a_{13}x_3 &= b_1 \\
a_{21}x_1 + a_{22}x_2 + a_{23}x_3 &= b_2 \\
a_{31}x_1 + a_{32}x_2 + a_{33}x_3 &= b_3
\end{aligned}
\qquad\Longleftrightarrow\qquad
\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}
=
\begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix},
\qquad [A]\{x\} = \{b\}
\]

Solving systems of linear equations in Matlab
Two ways to solve systems of linear algebraic equations [A]{x} = {b}:
- Left division: x = A\b
- Matrix inversion: x = inv(A)*b
Matrix inversion only works for square, non-singular systems, and it is less efficient than left division.

Solving graphically systems of linear equations
For small sets of simultaneous equations, graphing them and determining the location of the intersection of the straight line representing each equation provides a solution. There is no guarantee that one can find the solution of a system of linear equations:
(a) no solution exists;
(b) infinitely many solutions exist;
(c) the system is ill-conditioned.

Determinant of the square matrix A = [a_ij]
\[
A = [a_{ij}] =
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1,n-1} & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2,n-1} & a_{2n} \\
\vdots & \vdots &        & \vdots    & \vdots \\
a_{n-1,1} & a_{n-1,2} & \cdots & a_{n-1,n-1} & a_{n-1,n} \\
a_{n1} & a_{n2} & \cdots & a_{n,n-1} & a_{nn}
\end{bmatrix},
\qquad
|A| = \det A = A_{i1}a_{i1} + A_{i2}a_{i2} + \cdots + A_{in}a_{in}
\]
Here the coefficient \(A_{ij}\) of \(a_{ij}\) is called the cofactor of A. A cofactor is a polynomial in the remaining rows of A and can be described as the partial derivative of \(\det A\) with respect to \(a_{ij}\). The cofactor polynomial contains only entries from an (n-1) x (n-1) matrix \(M_{ij}\), called a minor, obtained from A by eliminating row i and column j.

Determinants of several matrices
Determinants for 1x1, 2x2, and 3x3 matrices are:
\[
1\times 1:\quad |a_{11}| = a_{11}
\]
\[
2\times 2:\quad \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = a_{11}a_{22} - a_{12}a_{21}
\]
\[
3\times 3:\quad
\begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix}
= a_{11}\begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix}
- a_{12}\begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix}
+ a_{13}\begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix}
\]
Determinants of square matrices larger than 3 x 3 are more complicated to evaluate.

Properties of the determinants
- If we permute two rows of the square matrix A, the sign of the determinant det A changes.
- The determinant of the transpose of a matrix A is equal to the determinant of the original matrix.
- If two rows of A are identical, then |A| = 0.

Cramer's Rule
Consider the system of linear equations [A]{x} = {b}. Each unknown may be expressed as a fraction of two determinants: the denominator is D = det A, and the numerator is obtained from D by replacing the column of coefficients of the unknown in question with the vector of constants \(b_1, b_2, \ldots, b_n\).

Example of Cramer's Rule
Find \(x_2\) in the following system of equations:
\[
\begin{aligned}
0.3x_1 + 0.52x_2 + x_3 &= -0.01 \\
0.5x_1 + x_2 + 1.9x_3 &= 0.67 \\
0.1x_1 + 0.3x_2 + 0.5x_3 &= -0.44
\end{aligned}
\]
Find the determinant D:
\[
D = \begin{vmatrix} 0.3 & 0.52 & 1 \\ 0.5 & 1 & 1.9 \\ 0.1 & 0.3 & 0.5 \end{vmatrix}
= 0.3\begin{vmatrix} 1 & 1.9 \\ 0.3 & 0.5 \end{vmatrix}
- 0.52\begin{vmatrix} 0.5 & 1.9 \\ 0.1 & 0.5 \end{vmatrix}
+ 1\begin{vmatrix} 0.5 & 1 \\ 0.1 & 0.3 \end{vmatrix}
= -0.0022
\]
Find the determinant \(D_2\) by replacing D's second column with {b}:
\[
D_2 = \begin{vmatrix} 0.3 & -0.01 & 1 \\ 0.5 & 0.67 & 1.9 \\ 0.1 & -0.44 & 0.5 \end{vmatrix}
= 0.3\begin{vmatrix} 0.67 & 1.9 \\ -0.44 & 0.5 \end{vmatrix}
- (-0.01)\begin{vmatrix} 0.5 & 1.9 \\ 0.1 & 0.5 \end{vmatrix}
+ 1\begin{vmatrix} 0.5 & 0.67 \\ 0.1 & -0.44 \end{vmatrix}
= 0.0649
\]
Divide:
\[
x_2 = \frac{D_2}{D} = \frac{0.0649}{-0.0022} = -29.5
\]
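This example can be checked in a few lines of MATLAB; the sketch below is not part of the original slides, it simply uses the built-in det to form D and D2 as defined above.

% Sketch (not from the slides): verify the Cramer's rule example with det().
A = [0.3 0.52 1; 0.5 1 1.9; 0.1 0.3 0.5];
b = [-0.01; 0.67; -0.44];
D  = det(A);              % denominator, approximately -0.0022
A2 = A;  A2(:,2) = b;     % replace the second column of A with b
D2 = det(A2);             % numerator, approximately 0.0649
x2 = D2/D                 % approximately -29.5

Left division, x = A\b, returns all three unknowns at once and is the preferred way to solve the full system.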
Gauss Elimination
Gauss elimination: a sequential process of removing unknowns from equations using forward elimination followed by back substitution.
Naïve Gauss elimination: the process does not check for potential problems resulting from division by zero.

Naïve Gauss Elimination (cont.)
Forward elimination: Starting with the first row, add or subtract multiples of that row to eliminate the first coefficient from the second row and beyond. Continue this process with the second row to remove the second coefficient from the third row and beyond. Stop when an upper triangular matrix remains.
Back substitution: Starting with the last row, solve for the unknown, then substitute that value into the next highest row. Because of the upper triangular nature of the matrix, each row will contain only one more unknown.

function x = GaussNaive(A,b)
% GaussNaive: naive Gauss elimination (no pivoting)
ExA = [A b];                       % augmented matrix [A | b]
[m,n] = size(A);
q = length(b);                     % length gives a scalar for the comparison below
if m ~= n
    fprintf('Error: input matrix is not square, n = %3.0f, m = %3.0f\n', n, m);
end
if n ~= q
    fprintf('Error: vector b has a different dimension than n, q = %2.0f\n', q);
end
n1 = n + 1;
% forward elimination
for k = 1:n-1
    for i = k+1:n
        factor = ExA(i,k)/ExA(k,k);
        ExA(i,k:n1) = ExA(i,k:n1) - factor*ExA(k,k:n1);
    end
end
% back substitution
x = zeros(n,1);
x(n) = ExA(n,n1)/ExA(n,n);
for i = n-1:-1:1
    x(i) = (ExA(i,n1) - ExA(i,i+1:n)*x(i+1:n))/ExA(i,i);
end

Example:
>> C = [150 -100 0; -100 150 -50; 0 -50 50]
C =
   150  -100     0
  -100   150   -50
     0   -50    50
>> d = [588.6; 686.7; 784.8]
d =
  588.6000
  686.7000
  784.8000
>> x = GaussNaive(C,d)
x =
   41.2020
   55.9170
   71.6130

A second example, where naive elimination breaks down:
>> A = [1 1 1 0 0 0; 0 -1 0 1 -1 0; 0 0 -1 0 0 1; 0 0 0 0 1 -1; 0 10 -10 0 -15 -5; 5 -10 0 -20 0 0]
A =
     1     1     1     0     0     0
     0    -1     0     1    -1     0
     0     0    -1     0     0     1
     0     0     0     0     1    -1
     0    10   -10     0   -15    -5
     5   -10     0   -20     0     0
>> b = [0 0 0 0 0 200]; b = b'
b =
     0
     0
     0
     0
     0
   200
>> x = GaussNaive(A,b)
x =
   NaN
   NaN
   NaN
   NaN
   NaN
   NaN
>> x = A\b
x =
    6.1538
   -4.6154
   -1.5385
   -6.1538
   -1.5385
   -1.5385
>> x = inv(A)*b
x =
    6.1538
   -4.6154
   -1.5385
   -6.1538
   -1.5385
   -1.5385

Naive Gauss elimination returns NaN here because a zero pivot appears during forward elimination; left division and inv use pivoting and solve the system without difficulty.

Complexity of Gauss elimination
To solve an n x n system of linear equations by Gauss elimination we carry out the following number of operations:
- Forward elimination: \(\tfrac{2n^3}{3} + O(n^2)\)
- Back substitution: \(n^2 + O(n)\)
- Total: \(\tfrac{2n^3}{3} + O(n^2)\)
Flops: floating point operations. Mflops/sec: millions of floating point operations executed by a processor per second.

Conclusions
As the system gets larger, the computation time increases greatly. Most of the effort is incurred in …
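The cubic growth can be observed directly by timing left division for increasing system sizes. The sketch below is my addition, not part of the slides; the sizes and the random, diagonally dominant test matrices are arbitrary illustrative choices.

% Sketch: dense solve time grows roughly as n^3,
% so doubling n should multiply the time by about 8.
for n = [500 1000 2000]                % arbitrary illustrative sizes
    A = rand(n) + n*eye(n);            % random, diagonally dominant test matrix
    b = rand(n,1);
    tic; x = A\b; t = toc;
    fprintf('n = %4d   solve time = %.3f s\n', n, t);
end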