18.03(4) at ESG Spring, 2003

A Brief Guide to H&S Chapter 5

(Notational note: H&S use "Det" for the determinant function, while most others, myself included, use "det". The distinction is subtle and not worth worrying about. Also, "det" is easier to typeset.)

Our story thus far . . .

Our current goal is to solve the system

\[ x' = A x, \qquad \clubsuit \]

where A is a constant n × n matrix and x ∈ R^n. We will assume the results of the EUT (Chapter 8). For the purposes of Chapters 3, 4 and 5, we'll consider n = 2 for specific examples, although it should be kept in mind that the results can be extended to higher n.

Whatever technique we use to solve ♣, we will at some point have to find the eigenvalues and eigenvectors of A, and keep track of them. [Disclaimer: It may often be possible to solve ♣ without explicit calculation of eigenvectors, but the resulting algebra is identical.] So, we'll assume that we have obtained and solved the characteristic polynomial

\[ \det(A - \lambda I) = p(\lambda) = 0, \]

and identified as many eigenvectors as we can. For n = 2, solving p(λ) = 0 is high-school algebra, but we know the results of Appendix II, the Fundamental Theorem of Algebra, so we know that once we've used our high-school algebra, we've found all of the roots of the characteristic polynomial. Now, we use our solutions.

Chapter 3: p(λ₁) = p(λ₂) = 0, λ₁, λ₂ ∈ R, λ₁ ≠ λ₂.

It's easy to show that the corresponding eigenvectors f₁ and f₂ are linearly independent (Theorem 1, Pages 45-46), and so there exists a similarity transform B = Q A Q⁻¹ such that B is diagonal, with the eigenvalues as the diagonal elements. Theorem 2 on Page 46 gives a constructive proof of this, namely showing that if the columns of P⁻¹ are the eigenvectors of A, then B as given above has the desired form. Recall, however, that the proof of this theorem has a killer typo; see the notes Diagonalization for Matrices with Real, Distinct Eigenvalues.

So, what's the big whoop? Introduce a coordinate transformation y = Q x, and note that

\[ x' = A Q^{-1} Q x, \qquad Q x' = Q A Q^{-1} Q x, \qquad \text{so that} \quad y' = B y. \]

But, since B is diagonal, the last of the above string of equations can be solved for

\[ y = \begin{pmatrix} e^{\lambda_1 t} & 0 & \\ 0 & e^{\lambda_2 t} & \\ & & \ddots \end{pmatrix} y_0 = \operatorname{diag}\left\{ e^{\lambda_1 t}, e^{\lambda_2 t}, \ldots \right\} y_0 = e^{Bt} y_0. \]

In the above, all off-diagonal elements in the displayed matrix are to be taken as zeros, and in the last step, the use of e^{Bt} is suggestive; the precise notation is a large part of what is done in Chapter 5. Our overall result is then

\[ x = Q^{-1} e^{Bt} Q \, x_0. \]
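To make the recipe concrete, here is a minimal numerical sketch (not from H&S: the matrix A, initial condition x₀, and time t are invented for illustration, with NumPy/SciPy standing in for the hand calculation):

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical example: A has real, distinct eigenvalues 3 and -1.
A = np.array([[1.0, 2.0],
              [2.0, 1.0]])
x0 = np.array([1.0, 0.0])    # initial condition x(0)
t = 0.5                      # evaluation time

# Columns of Qinv are the eigenvectors of A.
lam, Qinv = np.linalg.eig(A)
Q = np.linalg.inv(Qinv)

# B = Q A Q^{-1} is diagonal, with the eigenvalues on the diagonal,
# so e^{Bt} = diag{e^{lambda_1 t}, e^{lambda_2 t}}.
B = Q @ A @ Qinv
assert np.allclose(B, np.diag(lam))
eBt = np.diag(np.exp(lam * t))

# x = Q^{-1} e^{Bt} Q x_0
x = Qinv @ eBt @ Q @ x0

# Cross-check against exponentiating A t directly.
assert np.allclose(x, expm(A * t) @ x0)
print(x)
```

The final check confirms that the similarity-transform route agrees with exponentiating At directly, which is exactly where Chapter 5 is headed.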
Chapter 4: p(λ₁) = p(λ₂) = 0, λ₁, λ₂ ∈ C, λ₁ ≠ λ₂.

The fact is, the same techniques used in Chapter 3 for solving ♣ will work here, and much of the chapter is devoted to showing that everything we know about a real vector space can be extended to a complex vector space. However, there are other ways, and the text emphasizes the fact that we can solve real systems without resort to complex similar matrices. My opinion is that the crucial parts of the chapter are summarized in the figure on Page 56 (Chapter 3, of all places) and Theorem 3 of Section 3.2. For detailed comments and an example regarding the latter, see the notes Similarity Transformations with Complex Eigenvalues. So, given a 2 × 2 matrix A with distinct complex eigenvalues, we can either:

(1) Use a similarity transform as in Chapter 3, using the complex eigenvectors for the columns of Q⁻¹, resulting in a diagonal matrix, and recognizing that the matrix Q⁻¹ diag{λ₁, λ₂} Q will be real. We did this explicitly only for the special case

\[ G = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}. \]

For a more involved case, see the Addendum, the last page of these notes.

(2) Use a similarity transform as given in the Corollary on Page 68, and outlined in the notes Similarity Transformations with Complex Eigenvalues, resulting in a matrix of the form

\[ \begin{pmatrix} a & -b \\ b & a \end{pmatrix}, \]

where the eigenvalues are a ± ib, a and b real.
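Both options can be tried numerically. In the sketch below the matrix A is invented for illustration, and the real similarity in option (2) uses the common (Im v, Re v) column construction, which may or may not match the sign conventions of the text's Corollary:

```python
import numpy as np

# Hypothetical real 2x2 matrix with distinct complex eigenvalues 1 +/- 2i.
A = np.array([[0.0, -5.0],
              [1.0,  2.0]])

lam, V = np.linalg.eig(A)      # complex eigenvectors as columns of V
Qinv = V
Q = np.linalg.inv(Qinv)

# Option (1): Q A Q^{-1} is diagonal with the complex eigenvalues,
# and Q^{-1} diag{lam_1, lam_2} Q comes back real (it is just A).
print(np.round(Q @ A @ Qinv, 10))               # diag(1+2j, 1-2j)
print(np.allclose(Qinv @ np.diag(lam) @ Q, A))  # True

# Option (2): a real similarity bringing A to the form [[a, -b], [b, a]].
# Take the eigenvector v belonging to the eigenvalue with positive
# imaginary part, and use Im(v), Re(v) as the columns of T.
i = 0 if lam[0].imag > 0 else 1
v = V[:, i]
T = np.column_stack([v.imag, v.real])
print(np.round(np.linalg.inv(T) @ A @ T, 10))   # [[1, -2], [2, 1]]
```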
Chapter 5: Any possible solutions of p(λ) = 0.

We have already seen that the series formulation of the exponential function, e^x = Σ x^n/n! (the sum running from n = 0 to ∞), gives consistent results when the variable x is replaced by the purely imaginary iθ or by the matrix Gt, with G the "generator of rotations" defined above (and elsewhere). The results were found to be

\[ e^{i\theta} = \cos\theta + i\sin\theta, \qquad \exp(Gt) = \begin{pmatrix} \cos t & -\sin t \\ \sin t & \cos t \end{pmatrix} \equiv R(t). \]

It has been hinted in passing, and could be shown with little difficulty, that for a diagonal matrix

\[ B = \begin{pmatrix} \lambda_1 & 0 & \\ 0 & \lambda_2 & \\ & & \ddots \end{pmatrix}, \qquad \exp(Bt) = \begin{pmatrix} e^{\lambda_1 t} & 0 & \\ 0 & e^{\lambda_2 t} & \\ & & \ddots \end{pmatrix}. \]

We have also shown from the sum that for a and b real scalars,

\[ \exp\left( \begin{pmatrix} a & -b \\ b & a \end{pmatrix} t \right) = e^{at} R(bt). \]
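The matrix identities above can be checked against an independent matrix exponential; here is a small sketch (the test values of t, the diagonal entries, a, and b are arbitrary, and SciPy's expm serves as the reference):

```python
import numpy as np
from scipy.linalg import expm

# The generator of rotations G, as defined above.
G = np.array([[0.0, -1.0],
              [1.0,  0.0]])

def R(t):
    """The rotation matrix R(t)."""
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

t = 0.7                                    # arbitrary test value
print(np.allclose(expm(G * t), R(t)))      # exp(Gt) = R(t): True

# exp(Bt) for diagonal B is the diagonal matrix of e^{lambda_i t}.
lam = np.array([3.0, -1.0])                # arbitrary eigenvalues
print(np.allclose(expm(np.diag(lam) * t), np.diag(np.exp(lam * t))))  # True

# exp([[a, -b], [b, a]] t) = e^{at} R(bt) for real scalars a, b.
a, b = 0.3, 1.2                            # arbitrary test values
M = np.array([[a, -b],
              [b,  a]])
print(np.allclose(expm(M * t), np.exp(a * t) * R(b * t)))             # True
```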
So Chapter 5 begins by formalizing this procedure when the argument of the exponential function is a matrix. To regard the infinite sum of matrices as anything other than a formal expression (B&C would call this a "formal sum"), we need to prove that the limit exists, and we only know how to find limits of scalars. So, we need to somehow decide on the "size" of a matrix.

We have already gone over Sections 1 & 2, where norms are "reviewed" and alternate norms exhibited for special cases. Section 2 is often not done, but it should be; any norm used for matrices might be suspect, but knowing that the choice of norm doesn't matter for the purpose of showing convergence should give some comfort. For all of the Propositions in this section, the propositions should be read and understood, and the proofs should be at least skimmed, but the details of the proofs need not be memorized.
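Since convergence of the partial sums is exactly what the norm machinery buys us, a quick numerical sketch may be reassuring (the matrix, time, and choice of norms below are arbitrary); the error of the partial sums of Σ (At)^n/n! shrinks in the operator norm and in the Frobenius norm alike:

```python
import numpy as np
from scipy.linalg import expm

# Arbitrary matrix and time, for illustration.
A = np.array([[1.0, 2.0],
              [2.0, 1.0]])
t = 0.5

# Partial sum S_N = sum_{n=0}^{N} (At)^n / n!  with N = 19.
S = np.zeros_like(A)
term = np.eye(2)                     # the n = 0 term, (At)^0 / 0!
for n in range(20):
    S = S + term
    term = term @ (A * t) / (n + 1)  # next term of the series

# The error is tiny in either norm (it doesn't matter which we use).
err = S - expm(A * t)
print(np.linalg.norm(err, 2))        # operator norm
print(np.linalg.norm(err, 'fro'))    # Frobenius norm
```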
Section 5.3 should be read as carefully as possible. The uniform norm, defined at the bottom of Page 82, is used for many linear operators other than matrices. There are some operators (but not on finite-dimensional spaces) for which this norm is not bounded, and such operators need to be watched carefully. I would say that this is one circumstance where the proofs to Lemma 1 should be read carefully and understood. Note that the previous work in Section 1 means that the proof of the Theorem at the bottom of Page 83 comes swiftly.

Lemma 2 is crucial, but the proof is a bit opaque. See the following page for a possibly helpful interpretation.

The Proposition on Pages 84-85 is crucial, but the proof of (d) is really wimpy. We've already done a better one in class, by explicit calculation. You should read and understand the remainder of the section.

Section 5.4 is more or less a recapitulation of everything done so far, with figures, and it should all be understood. If you see a similarity between what's done in this section and what's done in