
Lecture 3: Diagonalizability and Norms
Amos Ron
University of Wisconsin - Madison
January 29, 2021
CS513, remote learning S21

Outline
1. Diagonalizability
   - General square case
   - The symmetric case, the Schur decomposition
2. Norms
   - Vector norms
   - Matrix norms


Diagonalizability: the general square case

Definition: Diagonalizability
An m x m matrix A is diagonalizable if there exists a basis for C^m made of eigenvectors of A.

Theorem
Let A be square. The following conditions are equivalent:
1. A is diagonalizable.
2. There exist an invertible matrix P and a diagonal matrix D such that A = P D P^{-1}.
3. (A third equivalent condition is deferred.)

Proof of (2) => (1): We have AP = PD. We prove that each column of P is an eigenvector of A. This proves (1), since the columns of any m x m invertible matrix form a basis for C^m. The jth column of P is P e_j. Now:

  A(P e_j) = (AP) e_j = (PD) e_j = P(D e_j) = P(D(j,j) e_j) = D(j,j) (P e_j).

So (D(j,j), P e_j) is an eigenpair of A.

Proof of (1) => (2): We are given m eigenpairs (lambda_j, v_j), with (v_1, ..., v_m) a basis for C^m. Let P be the matrix whose columns are v_1, ..., v_m (P is invertible since its columns form a basis), and let D be the diagonal matrix whose diagonal is lambda_1, ..., lambda_m. We show that A = P D P^{-1} by showing that AP = PD, i.e., by showing that, for every j, (AP) e_j = (PD) e_j. Now,

  (AP) e_j = A(P e_j) = A v_j = lambda_j v_j = P(lambda_j e_j) = P(D e_j) = (PD) e_j.


The symmetric case, the Schur decomposition

Reminder: A is symmetric whenever A = A'.

Theorem: Spectral rudiments of a symmetric matrix
Assume A = A'. Then:
- sigma(A) is a subset of R.
- A is diagonalizable.
- There is an A-eigenbasis which is also an orthonormal basis.
- The Schur decomposition: A is orthogonally diagonalizable,
    A = Q D Q' = Q D Q^{-1},
  with Q orthogonal and D diagonal.

Demo #1.


Norms

Vector norms

Definition: Norm
Let ||.|| be an assignment from R^m to R_+ := {c in R : c >= 0}:
  R^m, v |-> ||v|| in R_+.
This assignment is a norm if the following conditions hold:
- ||v|| = 0 if and only if v = 0.
- For c in R and v in R^m, ||c v|| = |c| ||v||.
- For v, w in R^m, ||v + w|| <= ||v|| + ||w|| (the triangle inequality).

Example: The 1-norm (mean-norm, l_1-norm):
  ||v||_1 := sum_{i=1}^m |v(i)|.

Example: The 2-norm (Euclidean norm, l_2-norm, the least-squares norm):
  ||v||_2 := ( sum_{i=1}^m |v(i)|^2 )^{1/2}.

Example: The infinity-norm (max-norm, l_infinity-norm, uniform norm):
  ||v||_inf := max_{1<=i<=m} |v(i)|.

Example: The p-norm (l_p-norm), 1 <= p < infinity:
  ||v||_p := ( sum_{i=1}^m |v(i)|^p )^{1/p}.


Matrix norms

A is m x n, and thus maps R^n to R^m. We choose a norm ||.|| for the domain, and a norm ||.||' for the range.

Definition: Matrix norm
  ||A|| := max{ ||A v||' / ||v|| : v != 0 } = max{ ||A v||' : ||v|| = 1 }.

If the norms ||.|| and ||.||' are both p-norms for the same p, we denote the matrix norm by ||A||_p.

The 1-norm of a matrix

Theorem: computing ||A||_1
Let A be m x n with columns a_1, ..., a_n. Then
  ||A||_1 = max_{1<=i<=n} ||a_i||_1 =: X.

Proof: We need to show that ||A||_1 <= X and ||A||_1 >= X.

First, for any 1 <= j <= n, ||e_j||_1 = 1, therefore
  ||a_j||_1 = ||A e_j||_1 <= ||A||_1.
Therefore X <= ||A||_1.

Now, let v in R^n with ||v||_1 = 1. Then
  ||A v||_1 = || sum_{i=1}^n v(i) a_i ||_1
           <= sum_{i=1}^n ||v(i) a_i||_1
            = sum_{i=1}^n |v(i)| ||a_i||_1
           <= sum_{i=1}^n |v(i)| X
            = X sum_{i=1}^n |v(i)|
            = X ||v||_1 = X.
Therefore ||A||_1 <= X, which completes the proof.
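The diagonalization theorem can be checked numerically. Below is a minimal sketch using numpy's `eig`; the 2 x 2 matrix is a hypothetical example, not one from the lecture.

```python
import numpy as np

# A hypothetical diagonalizable matrix (chosen for illustration only).
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# np.linalg.eig returns the eigenvalues and a matrix P whose columns are eigenvectors.
eigvals, P = np.linalg.eig(A)
D = np.diag(eigvals)

# The factorization A = P D P^{-1} from condition (2) of the theorem.
assert np.allclose(A, P @ D @ np.linalg.inv(P))

# Columnwise check from the proof: A (P e_j) = D(j, j) (P e_j) for each j.
for j in range(A.shape[0]):
    assert np.allclose(A @ P[:, j], eigvals[j] * P[:, j])
```

Each assertion mirrors one step of the proof: the first is the factorization itself, the loop is the statement that (D(j,j), P e_j) is an eigenpair of A.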
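For the symmetric case, numpy's symmetric eigensolver `eigh` produces exactly the orthogonal diagonalization A = Q D Q' of the theorem. Again the matrix is a made-up example:

```python
import numpy as np

# A hypothetical symmetric matrix: A = A'.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigh is the symmetric eigensolver: eigenvalues are real,
# and the columns of Q form an orthonormal eigenbasis.
lam, Q = np.linalg.eigh(A)
D = np.diag(lam)

assert np.all(np.isreal(lam))                        # sigma(A) is real
assert np.allclose(Q.T @ Q, np.eye(2))               # Q is orthogonal: Q' Q = I
assert np.allclose(A, Q @ D @ Q.T)                   # A = Q D Q' = Q D Q^{-1}
```

Because Q is orthogonal, Q^{-1} = Q', so the inverse in A = Q D Q^{-1} costs only a transpose.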
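The four vector norms defined above can be computed directly from their formulas and cross-checked against numpy's built-in `norm` (the vector `v` is an arbitrary illustrative choice):

```python
import numpy as np

v = np.array([3.0, -4.0, 0.0])

one_norm = np.sum(np.abs(v))                 # ||v||_1 = sum |v(i)|
two_norm = np.sqrt(np.sum(np.abs(v) ** 2))   # ||v||_2 = (sum |v(i)|^2)^{1/2}
inf_norm = np.max(np.abs(v))                 # ||v||_inf = max |v(i)|

p = 3
p_norm = np.sum(np.abs(v) ** p) ** (1.0 / p)  # ||v||_p = (sum |v(i)|^p)^{1/p}

# Cross-check against numpy's built-in norms.
assert np.isclose(one_norm, np.linalg.norm(v, 1))
assert np.isclose(two_norm, np.linalg.norm(v, 2))
assert np.isclose(inf_norm, np.linalg.norm(v, np.inf))
assert np.isclose(p_norm, np.linalg.norm(v, p))
```

For this v, ||v||_1 = 7, ||v||_2 = 5, and ||v||_inf = 4, illustrating that the three norms generally disagree on the same vector.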
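The theorem on ||A||_1 says the induced 1-norm is just the largest column 1-norm. A quick numerical sketch (with a random matrix, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))

# The theorem: ||A||_1 = max_i ||a_i||_1, the largest column 1-norm.
col_norms = np.abs(A).sum(axis=0)   # ||a_i||_1 for each column a_i
X = col_norms.max()

# numpy's induced 1-norm agrees with the column-sum formula.
assert np.isclose(X, np.linalg.norm(A, 1))

# The lower-bound step of the proof: ||A e_j||_1 = ||a_j||_1 <= ||A||_1.
for j in range(A.shape[1]):
    e_j = np.zeros(A.shape[1])
    e_j[j] = 1.0
    assert np.abs(A @ e_j).sum() <= X + 1e-12
```

Note that `np.linalg.norm(A, 1)` computes this induced (operator) 1-norm, not the entrywise sum of |A(i, j)|.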
