Study guide for the second midterm
Math 5485, Fall 2008

1. Basic ideas (Chapter 1)
   (a) Order of convergence
   (b) Floating point number systems, arithmetic, and roundoff error
   (c) Well-conditioned versus ill-conditioned problems

2. Rootfinding of scalar equations (Chapter 2)
   (a) Basic ideas
       • Multiplicity of roots
   (b) Bisection method, false position, Newton's method, secant method
       i. Requirements to guarantee convergence
       ii. Order of convergence (including the requirements to achieve it)
       iii. Compute a few iterations and check convergence
       iv. Given a word problem, formulate the root problem and find the root to a given tolerance
   (c) Fixed point iteration in general
       i. Requirements for the existence of a fixed point
       ii. Requirements to guarantee convergence of a fixed point iteration scheme
       iii. Conditions that determine the order of convergence
       iv. Compute a few iterations and check convergence
       v. Appropriate stopping conditions
   (d) Accelerating convergence
       i. Aitken's ∆²-method and Steffensen's method
          A. When they apply
          B. How well they accelerate
       ii. Restoring quadratic convergence to Newton's method
   (e) Roots of polynomials
       i. Polynomial deflation
       ii. Laguerre's method

3. Systems of equations (Chapter 3)
   (a) Basic linear algebra (such as in Section 3.0)
   (b) Gaussian elimination
       i. Row operations
       ii. Operation count (and why it is better than Gauss-Jordan elimination or multiplying by the inverse)
       iii. Partial pivoting and scaled partial pivoting
   (c) LU decomposition
       i. Via Gaussian elimination
       ii. Via direct factorization
          • Note that we did not cover how to do pivoting here, but in general it is necessary.
       iii. Know which special matrices don't require pivoting strategies.
       iv. Cholesky decomposition
          • A special case of direct factorization for symmetric positive definite matrices
       v. Factorization of tridiagonal matrices
   (d) Norms, error estimates, and condition numbers
       i. Understand and be able to calculate l2 and l∞ vector and matrix norms.
       ii. Predict error estimates from the condition number.
   (e) Iterative methods
       i. Condition on the iteration matrix for convergence.
       ii. Understand when iterative methods may outperform direct methods.
       iii. Basic ideas of the Jacobi, Gauss-Seidel, and SOR methods
          • Don't worry about their convergence properties.
   (f) Newton's method for nonlinear systems of equations
       i. How to use it
       ii. Why it's slow

4. Eigenvalues and eigenvectors
   (a) Gerschgorin Circle Theorem
   (b) Power method
       i. Why it works in general
       ii. How to calculate it (nonsymmetric and symmetric cases)
       iii. Don't worry about the detailed conditions under which it works
   (c) Inverse power method
       i. How it follows from the power method
       ii. Use it with the Gerschgorin Circle Theorem, or to find the smallest eigenvalue.
   (d) Deflation
       i. How to transform the matrix to remove an eigenvalue
          • Effect of this transformation on the eigenvectors and the other eigenvalues
       ii. Wielandt deflation and Hotelling deflation
       iii. Problems with using deflation to compute all eigenvalues
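As a concrete illustration of the rootfinding methods in item 2(b), here is a minimal Python sketch of Newton's method. The test function f(x) = x² − 2, the starting point, and the tolerance are illustrative choices, not prescribed by the guide:

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method: x_{k+1} = x_k - f(x_k) / f'(x_k).

    Converges quadratically to a simple root when started close
    enough and f'(x) is nonzero near the root (see items 2(b)i-ii).
    """
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:  # stopping condition on the step size
            return x
    raise RuntimeError("Newton's method did not converge")

# Example: the positive root of f(x) = x^2 - 2, i.e. sqrt(2).
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
```

Quadratic convergence here means the number of correct digits roughly doubles per iteration, which is easy to observe by printing the successive steps.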
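Items 2(c) and 2(d)i can be sketched together: a plain fixed point iteration, and one step of Aitken's ∆² extrapolation built from three successive iterates. The example map g(x) = cos(x) and the tolerances are illustrative assumptions:

```python
import math

def fixed_point(g, x0, tol=1e-10, max_iter=100):
    """Iterate x_{k+1} = g(x_k); converges linearly to a fixed point
    p when |g'(x)| < 1 on an interval around p that g maps into itself."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:  # stopping condition on successive iterates
            return x_new
        x = x_new
    raise RuntimeError("fixed point iteration did not converge")

def aitken(g, x0):
    """One Aitken Delta^2 extrapolation from x0, g(x0), g(g(x0)).

    Accelerates a linearly convergent sequence; Steffensen's method
    applies this repeatedly, restarting from the extrapolated value.
    """
    x1 = g(x0)
    x2 = g(x1)
    d1 = x1 - x0
    d2 = x2 - 2.0 * x1 + x0
    return x0 - d1 * d1 / d2

# g(x) = cos(x) has a unique fixed point near 0.739.
p = fixed_point(math.cos, 1.0)
```

A quick check of the acceleration: the single Aitken value `aitken(math.cos, 1.0)` is already closer to p than the plain second iterate cos(cos(1)).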
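For item 3(c)ii, a minimal sketch of direct LU factorization (Doolittle form, no pivoting) in plain Python. The 3x3 test matrix is an illustrative choice; it is strictly diagonally dominant, one of the special classes from item 3(c)iii for which skipping pivoting is safe:

```python
def lu_decompose(A):
    """Doolittle direct factorization A = L U, with L unit lower
    triangular and U upper triangular. No pivoting is performed,
    so this can fail (divide by zero / be unstable) on general
    matrices; it is safe e.g. for strictly diagonally dominant
    or symmetric positive definite matrices."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):       # fill row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):   # fill column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

# Example: a strictly diagonally dominant (in fact tridiagonal) matrix.
A = [[4.0, 1.0, 0.0],
     [1.0, 4.0, 1.0],
     [0.0, 1.0, 4.0]]
L, U = lu_decompose(A)
```

Once L and U are known, each new right-hand side b costs only one forward and one backward substitution, which is the practical payoff of the factorization viewpoint.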
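Finally, a sketch of the scaled power method from item 4(b): repeatedly apply A, rescale by the entry of largest magnitude, and read off that entry as the eigenvalue estimate. The 2x2 symmetric test matrix and the iteration count are illustrative assumptions:

```python
def power_method(A, x, iters=100):
    """Power method with infinity-norm scaling.

    When A has a single eigenvalue of largest magnitude and x has a
    component along its eigenvector, the scaled iterates converge to
    that eigenvector and lam converges to the dominant eigenvalue.
    """
    n = len(A)
    lam = 0.0
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        p = max(range(n), key=lambda i: abs(y[i]))  # entry of largest magnitude
        lam = y[p]                    # current eigenvalue estimate
        x = [v / lam for v in y]      # rescale so x[p] = 1
    return lam, x

# Example: A has eigenvalues 3 and 1; the dominant eigenvector is (1, 1).
A = [[2.0, 1.0],
     [1.0, 2.0]]
lam, v = power_method(A, [1.0, 0.0])
```

The inverse power method of item 4(c) is the same iteration applied to (A − qI)⁻¹, i.e. each step solves a linear system instead of doing a matrix-vector product, which steers convergence toward the eigenvalue nearest the shift q.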