
# UW-Madison CS 513 - Lecture 17: LU-factorization, continued


Lecture 17: LU-factorization, continued
Amos Ron, University of Wisconsin - Madison
March 19, 2021 (CS513, remote learning S21)

## Outline

1. Application of LU-factorization
   - Computing determinants
   - Determining positive definiteness
2. Stability of LU
   - Pivoting

## Application of LU-factorization

### Computing determinants

**Determinant of triangular matrices, and general matrices.** Assume $A$ is triangular (upper or lower). Then:

- The eigenvalues of $A$ are its diagonal entries.
- $\det(A) = \prod_{i=1}^{m} A(i,i)$.

**Computing $\det(A)$ for a general square matrix.**

- Option 1: directly from the definition. Complexity: $O(m!)$, hopeless.
- Option 2: using the multiplication theorem: $\det(BC) = \det(B)\det(C)$.
**Algorithm for computing $\det(A)$.**

- LU-factor $A$.
- $\det(A) = \prod_{i=1}^{m} U(i,i)$ (since $\det(L) = 1$).
- Complexity: $O(m^3)$.

### Determining positive definiteness

**Theorem (the three characterizations of positive definiteness).** Let $A$ be square, symmetric, and invertible, with an LDU factorization. The following conditions are equivalent:

1. $A$ is SPD.
2. $\sigma(A) > 0$, i.e., every eigenvalue of $A$ is positive.
3. All the leading principal minors of $A$ are positive.
4. The diagonal entries of $D$ in $A = LDU$ are all positive.

**Proof.** We only prove that the last condition is equivalent to the rest. Since $A$ is symmetric,
$$A = U' D U.$$
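Both applications above can be sketched in a few lines of numpy: the determinant is the product of the pivots, and for a symmetric $A$ the diagonal of $U$ equals the diagonal of $D$ in $A = LDU$, so its signs decide positive definiteness. The helper names are illustrative, not the course's code, and the factorization below omits pivoting, so it assumes every leading pivot is nonzero:

```python
import numpy as np

def lu_no_pivot(A):
    """LU-factor A without pivoting: returns unit-lower-triangular L and
    upper-triangular U with A = L @ U. Assumes all pivots are nonzero."""
    m = A.shape[0]
    L = np.eye(m)
    U = A.astype(float).copy()
    for j in range(m - 1):
        L[j+1:, j] = U[j+1:, j] / U[j, j]            # multipliers below pivot
        U[j+1:, :] -= np.outer(L[j+1:, j], U[j, :])  # eliminate column j
    return L, U

def det_via_lu(A):
    """det(A) = prod of U's diagonal, since det(L) = 1."""
    _, U = lu_no_pivot(A)
    return np.prod(np.diag(U))

def is_spd(A):
    """For symmetric invertible A with an LDU factorization: A is SPD
    iff all pivots (= diagonal of D) are positive."""
    _, U = lu_no_pivot(A)
    return bool(np.all(np.diag(U) > 0))

A = np.array([[4.0, 2.0], [2.0, 3.0]])  # symmetric, SPD
print(det_via_lu(A))   # 8.0, matches np.linalg.det(A)
print(is_spd(A))       # True
```

Note that this costs one $O(m^3)$ factorization per question, instead of $O(m!)$ work from the definition or a separate eigenvalue computation.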
So, let $v \neq 0$. Then
$$(Av, v) = ((U'DU)v, v) = (DUv, Uv) = (Dw, w),$$
with $w = Uv$. Since $U$ is invertible and $v \neq 0$, we have $w \neq 0$.

- If $D(i,i) > 0$ for every $i$, then $(Dw, w) > 0$ (easy, and we already argued that before), so $A$ is SPD.
- If $D(i,i) \leq 0$ for some $i$, then we can choose $w := e_i$ (i.e., we choose $v := U^{-1} e_i$). Then $(De_i, e_i) = D(i,i) \leq 0$, and therefore $A$ is not SPD.

## Stability of LU

### Pivoting

**A 2×2 example.** Consider the matrix
$$A = \begin{pmatrix} \epsilon & -1 \\ 1 & 0 \end{pmatrix},$$
where $\epsilon$ is small. When $\epsilon = 0$, $A$ is orthogonal, hence $\operatorname{cond}(A) = 1$. Therefore, for small $\epsilon$, we have $\operatorname{cond}(A) \approx 1$.

However, the LU-factorization of $A$ is
$$L = \begin{pmatrix} 1 & 0 \\ \epsilon^{-1} & 1 \end{pmatrix}, \qquad U = \begin{pmatrix} \epsilon & -1 \\ 0 & \epsilon^{-1} \end{pmatrix}.$$
Then
$$\operatorname{cond}(L) \approx \epsilon^{-1}, \qquad \operatorname{cond}(U) \approx \epsilon^{-2},$$
so the factorization is unstable.

**What to do?** The problem is that in the algorithm we define
$$v_1(2{:}m) = A(2{:}m, 1)/A(1,1),$$
and the pivot $A(1,1)$ is small compared to the other entries of $A$, hence $v_1$ contains large entries.
This automatically makes $L$ ill-conditioned.

**Key:** do not use pivots that are small relative to the other entries.

**Solution:** assuming that we are allowed to shuffle the order of the rows/columns, we do so:

- shuffling both rows and columns: full pivoting;
- shuffling only rows: partial pivoting.

**How to perform the pivoting?** At each step, the input matrix $A_{j-1}$ has already undergone some pivoting, i.e., some rows and columns were shuffled. In the $j$th step, we need to create the vector
$$v_j(j{:}m) = A_{j-1}(j{:}m, j)/A_{j-1}(j, j),$$
so we want to bring a large entry to the $(j,j)$ location.

- We can choose a row from among rows $j, \dots, m$ of $A_{j-1}$.
- If we do full pivoting, we can also choose a column from among columns $j, \dots, m$ of $A_{j-1}$.
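Partial pivoting as described above can be sketched as follows. This is a minimal numpy illustration, not the course's reference implementation, applied to the 2×2 example with a small pivot; after the row exchange, every entry of $L$ and $U$ is at most 1 in magnitude, so both factors stay well conditioned:

```python
import numpy as np

def lu_partial_pivot(A):
    """LU with partial pivoting: returns P, L, U with P @ A = L @ U.
    At step j, the row (among rows j..m-1) with the largest |entry| in
    column j is swapped up to serve as the pivot row."""
    m = A.shape[0]
    U = A.astype(float).copy()
    L = np.eye(m)
    P = np.eye(m)
    for j in range(m - 1):
        k = j + np.argmax(np.abs(U[j:, j]))   # row with the largest pivot candidate
        for M in (U, P):
            M[[j, k], :] = M[[k, j], :]       # swap rows j and k
        L[[j, k], :j] = L[[k, j], :j]         # carry along the multipliers computed so far
        L[j+1:, j] = U[j+1:, j] / U[j, j]     # multipliers are now at most 1 in magnitude
        U[j+1:, :] -= np.outer(L[j+1:, j], U[j, :])
    return P, L, U

eps = 1e-8
A = np.array([[eps, -1.0], [1.0, 0.0]])
P, L, U = lu_partial_pivot(A)
print(L)   # [[1, 0], [eps, 1]]  -- no eps**-1 entries anymore
print(U)   # [[1, 0], [0, -1]]
```

Compare with the factorization on the slide: without the row swap, $L$ and $U$ contain entries of size $\epsilon^{-1}$ even though $\operatorname{cond}(A) \approx 1$; with it, the large multiplier never appears.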
