MIT 10.34 - Review for Exam 2

Review for Exam 2
Ben Wang and Mark Styczynski

This is a rough approximation of what we went over in the review session. It is actually more detailed in portions than what we covered. Also, please note that this has not been fully proofread, so it may have typos or other things slightly wrong with it, but it's a good guide to what you should know. Finally, be familiar with what is in your book, so that if there's something on the exam that you don't know well (e.g., LU decomposition from the last exam), you at least know where to look for it.

Chapter 4: IVPs

Objective: develop iterative rules for updating a trajectory, with the goal of making the numerical solution match the integral of the exact solution.

- Converting higher-order differential equations into systems of coupled first-order differential equations.

What is quadrature? Numerical integration. It involves "integrating" a polynomial → that is why polynomial interpolation is important.

Polynomial interpolation:
Polynomial interpolation is accomplished by defining support points at which the polynomial will equal the function. We define the polynomial
  p(x) = a_0 + a_1 x + a_2 x^2 + ... + a_N x^N
such that
  p(x_j) = a_0 + a_1 x_j + a_2 x_j^2 + ... + a_N x_j^N = f(x_j)   for j = 0, 1, 2, ..., N.
You can solve this as a linear system, like way back in the day. But let's find a better way of doing this.

Lagrange interpolation:
For N support points, the Lagrange interpolant is the sum of N Lagrange polynomials, each of order N-1, each designed to match the function exactly at one specified support point. Now that we have a single polynomial, we can integrate it numerically, using the trapezoid rule, Simpson's rule, etc.

Integration:
Newton-Cotes → uniformly spaced points. Different methods: trapezoid rule, Simpson's rule:
  ∫[a,b] f(x) dx ≈ (b-a)/2 · [f(a) + f(b)]                    (trapezoid)
  ∫[a,b] f(x) dx ≈ (b-a)/6 · [f(a) + 4 f((a+b)/2) + f(b)]     (Simpson)
with errors O((b-a)^3) and O((b-a)^5), respectively.
How to get more accurate?
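As a sanity check on the Newton-Cotes formulas, here is a minimal pure-Python sketch (my own illustration, not from the course materials) of the composite trapezoid and Simpson rules applied to ∫[0,π] sin x dx = 2:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n uniform segments."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return h * s

def simpson(f, a, b, n):
    """Composite Simpson's rule; n must be even."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return h * s / 3

# Exact value of the integral is 2; Simpson is far more accurate
# for the same number of function evaluations.
t = trapezoid(math.sin, 0.0, math.pi, 16)
s = simpson(math.sin, 0.0, math.pi, 16)
```

With only 16 segments, Simpson's rule is already several orders of magnitude closer to the exact answer than the trapezoid rule, which is the higher-order error in action.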
Break the interval into smaller segments rather than adding ever more points to one interval.

How to integrate in two dimensions? Make a rectangle containing all points, and multiply the function you are integrating by an "indicator" function that says whether you are in the area to be integrated (indicator = 1) or not (indicator = 0).

This all relates to time integrals.

Linear ODE systems and dynamic stability:
For a linear system, begin with a known ODE problem, ẋ = Ax. You can expand the (matrix) exponential solution to analyze the results. Looking at a linear system gives rise to a discussion of the stability and behavior of the solution.

Nonlinear ODE systems:
Same type of analysis, but now we use a Taylor expansion, invoking the Jacobian.

Time-marching ODE-IVP solvers:

Explicit methods: generate a new state by taking a time step based on the function value at the old time step.

Explicit Euler method: uses the Taylor approximation; gives first-order error globally (second-order local error, but O(1/Δt) time steps, so O(Δt) global error):
  x[k+1] = x[k] + Δt · f(x[k])

Runge-Kutta methods: still explicit, but use midpoints.
Second-order RK:
  k1 = Δt · f(x[k])
  k2 = Δt · f(x[k] + k1/2)
  x[k+1] = x[k] + k2 + O(Δt^3)
where the global error is O(Δt^2). You can get higher-order ones; 4th-order RK is the basis of ode45.
4th-order RK → you just have to know the value of the function at the current state and a time step, and you can generate the new state.

How does ode45 get an estimate of the error? Its function evaluations yield both a 5th-order-accurate and a 4th-order-accurate answer; comparing the two tells you how accurate you are → then you know whether Δt needs to be smaller or bigger.

Implicit methods: the update rule depends on the value of x at the new time, not only values in the immediate past. Use interpolation to approximate an update.
Why is it difficult to use x[k+1] to get your derivative? Because that makes the update equation nonlinear in general, requiring an iterative solution.
Is this more accurate than explicit in all cases? No, it's the same order of accuracy for explicit Euler vs.
implicit Euler.
Are these methods used? Yes, quite frequently. Why? Because they are more stable at larger time steps for stiff equations... they won't "blow up".

Basic implicit Euler:
  x[k+1] = x[k] + Δt · f(x[k+1])
Others, like Crank-Nicolson:
  x[k+1] = x[k] + (Δt/2) · [f(x[k]) + f(x[k+1])]

Predictor/corrector method:
- Use an explicit method to generate an initial guess.
- Then use Newton's method to iteratively find the real solution to the implicit equation.
Ought the predictor/corrector method to converge well under Newton's method? Yes, because the guess should be relatively near the correct value.

DAE systems: What are they?
  ẏ = F(y, z)
  0 = G(y, z)
where y are the variables appearing in ODEs and z are the other variables.
Mass matrix: indicates where the differential parts are.
Is the mass matrix singular? If it's a DAE, yes. Otherwise, it's just an ODE system.
What conditions do we need to solve this using an ODE solver? The DAE system must be of index one: the Jacobian of the nonlinear (algebraic) equalities with respect to z must be nonsingular. Differentiating the constraint leaves us with:
  (∂G/∂y)ᵀ ẏ + (∂G/∂z)ᵀ ż = 0

Stability of systems' steady states:
- Take the Jacobian.
- Evaluate it at the steady state.
- Find the eigenvalues.
- If the real parts of all eigenvalues are < 0: stable. If ≤ 0: quasi-stable (indifferent to perturbations in some direction). If any > 0: unstable.
Don't confuse the stability of integrators with the stability of a system's steady state. The integrator creates a "system" based on its solution method, and we evaluate that system's stability.

Global vs. local error.

Which ODE integration routines are more stable? Explicit RK vs. implicit Euler.
Which will have less error in a non-stiff system? Explicit RK vs. implicit Euler.

Stiff/not stiff systems:
- Definition: eigenvalues of the Jacobian.
- Conceptual: widely different time scales.
- Condition number: an indication of the number of steps needed to integrate properly.
When to use ode15s vs. ode45? Condition number >> 1 or not.
When will you know things will be stiff/not stiff?
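To see why implicit methods "won't blow up" on stiff problems, here is a small sketch (my own example, not from the notes): explicit vs. implicit Euler on the linear test equation ẋ = -λx, with a step size well outside the explicit stability limit (λΔt > 2). For this linear equation the implicit update can be solved in closed form; for a general nonlinear f you would need Newton's method at each step.

```python
lam = 50.0   # decay rate of dx/dt = -lam * x; large lam makes the problem stiff
dt = 0.1     # deliberately large step: lam * dt = 5, outside explicit stability

x_exp = 1.0  # explicit Euler trajectory
x_imp = 1.0  # implicit Euler trajectory

for _ in range(20):
    # Explicit Euler: x[k+1] = x[k] + dt * f(x[k]) -> multiplier (1 - lam*dt) = -4
    x_exp = x_exp + dt * (-lam * x_exp)
    # Implicit Euler: x[k+1] = x[k] + dt * f(x[k+1]); linear, so solve directly:
    # x[k+1] * (1 + lam*dt) = x[k]  ->  multiplier 1/6
    x_imp = x_imp / (1.0 + lam * dt)
```

The true solution decays to zero, and implicit Euler tracks that qualitatively even at this large step, while explicit Euler oscillates with exponentially growing magnitude.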
Single ODE equation: not stiff. Discretized PDEs: stiff.
What to do if you're not sure? Try ode45; if it fails, use ode15s.
Ways to speed things up in Matlab? Return the Jacobian of the system of ODEs at the supplied point.

Chapter 5: Optimization

Use the gradient of the cost function to find the downhill direction you may want to look in. Use the Hessian of the cost function to figure out how closely you want to follow that gradient direction, and how far to
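step. A minimal 1-D sketch of this idea (illustrative only; the test function is my own choice): a Newton step uses the gradient for the direction and the Hessian, here just the second derivative, to scale the step.

```python
import math

# Minimize f(x) = x**2 - cos(x), whose unique minimum is at x = 0.

def fprime(x):
    """Gradient (first derivative) of f."""
    return 2.0 * x + math.sin(x)

def fsecond(x):
    """Second derivative of f (the 1-D analogue of the Hessian);
    it lies in [1, 3], so it is always safely positive here."""
    return 2.0 + math.cos(x)

x = 1.5
for _ in range(8):
    # Newton step: gradient sets the direction, curvature sets the length.
    x -= fprime(x) / fsecond(x)
```

Because the curvature information rescales the raw gradient, the iterates converge to the minimizer in just a few steps, much faster than fixed-step gradient descent would.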

