MIT 9.29 - Introduction to Dynamical Systems


Introduction to Dynamical Systems
Justin Werfel
9.29 Optional Lecture #7, 4/10/03

1 Linear dynamical systems

The term dynamical system just refers to a system that changes in time. We describe the system as being in some state x, which is in general a vector; and we're interested in how that state changes as time goes on. (If x has n dimensions, we call it an n-dimensional or nth-order system.) The simplest case is the following:

    ẋ = Ax

So we can think of x as the position in what may be a high-dimensional vector space; then ẋ, which describes how the system will move around in that vector space, is given by some linear combination (given by A) of the coordinates of that position.

Let's take, as an example, the two-dimensional system x = [x1 x2]^T. We can draw, for every point in the plane, an arrow originating from that point whose magnitude and direction correspond to ẋ. (We can do that in MATLAB with the quiver command.) That will give us an idea of how the system will behave. Suppose A is

    [ 0   1
     -1   0 ].

(Code; note that A must match the matrix above:

    >> [x,y] = meshgrid(-2.5:.25:2.5);
    >> A = [0 1; -1 0];
    >> u = A(1,1)*x + A(1,2)*y;  v = A(2,1)*x + A(2,2)*y;
    >> quiver(x,y,u,v)
    >> axis([-2 2 -2 2])
    >> axis square)

Then you can see from the figure that starting at any point in the plane, the system will tend to circle clockwise; and the speed |ẋ| will be greater if the distance from the center is greater, which makes sense because the velocity is just a linear transformation of the position.

The most general case for a linear dynamical system (LDS) has the following form:

    dx/dt = A(t)x(t) + B(t)u(t)
    dy/dt = C(t)x(t) + D(t)u(t)

[Figure: quiver plot of the vector field ẋ = Ax in the (x1, x2) plane, axes from -2 to 2, showing clockwise circulation around the origin.]

Here t, of course, is time; x, again, is the state of the system; u is a control variable, some input you have into the system that can affect its behavior; y is the output of the system, some kind of readout which is a transformation of the state x and input u but doesn't affect the time evolution of the system.
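For readers without MATLAB, the circular flow can also be checked numerically. The following is a Python sketch (using NumPy and SciPy, not part of the original notes) that evolves ẋ = Ax for the same A via the exact solution x(t) = e^(At) x(0), confirming that trajectories rotate clockwise at constant distance from the origin:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])    # same A as in the quiver example

x0 = np.array([1.0, 0.0])      # start on the positive x1-axis

# The exact solution of xdot = A x is x(t) = expm(A t) x0.
for t in [0.0, np.pi / 2, np.pi, 2 * np.pi]:
    x = expm(A * t) @ x0
    print(f"t={t:.3f}  x={x}  |x|={np.linalg.norm(x):.6f}")
```

At t = π/2 the state has rotated from (1, 0) to (0, -1), i.e. clockwise, and |x| stays equal to 1 for all t, matching the circular flow in the figure.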
Again, x, u, and y are in general vectors, so A, B, C, D (which all have names in control theory) are in general matrices. This is the continuous-time case; in the discrete-time case, rather than these being differential equations, the left-hand sides of the equations are x(t + 1) and y(t + 1). In most cases, A, B, C, D are time-invariant. Often there's no control input u, in which case the system is called autonomous. And if there's no particular output of the system that we care about distinct from its state, we get the case we started with, a continuous-time autonomous time-invariant LDS:

    ẋ = Ax

Now, the time derivatives here only go up to first order. What if we wanted to study a case with higher-order time derivatives? Let's take everyone's favorite example, the undamped pendulum. The equation of motion is

    θ̈ + (g/L) sin(θ) = 0.

This is actually a more complicated case for two reasons: the second derivative with respect to time, and the nonlinearity of the sine function. We'll get to the nonlinear part later; for now, let's get rid of that complication by making the usual small-angle approximation, sin(θ) ≈ θ for θ ≪ 1. Now what we do is define a new state variable x ≡ [θ θ̇]^T. Then ẋ = [θ̇ θ̈]^T, and we can write

    ẋ = [ 0    1
         -g/L  0 ] x.

We can always get rid of higher-order time derivatives by increasing the dimensionality of the system. Notice also that except for a scale factor g/L, the matrix here is the same as the one in the example above; the pendulum swinging back and forth corresponds to a circular motion in the phase plane. Along the positive x-axis, the angle is at a maximum, the rate of change of position is 0, and the rate of change of velocity is greatest in the direction of negative values; later, when the system reaches the negative y-axis, the angle is 0, the velocity is maximally negative, the rate of change of angle is greatest and negative, and the rate of change of velocity is zero; etc.
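The first-order reduction can be verified numerically. Here is a Python sketch (NumPy/SciPy; the values of g and L are arbitrary illustrative choices, not from the notes) that evolves the linearized system ẋ = [0 1; -g/L 0]x and compares it against the analytic small-angle solution θ(t) = θ0 cos(ωt) with ω = √(g/L):

```python
import numpy as np
from scipy.linalg import expm

g, L = 9.81, 1.0               # arbitrary illustrative values
A = np.array([[0.0, 1.0],
              [-g / L, 0.0]])  # linearized pendulum in first-order state form

theta0 = 0.1                   # small initial angle (radians)
x0 = np.array([theta0, 0.0])   # state x = [theta, theta_dot], released from rest

omega = np.sqrt(g / L)
for t in np.linspace(0.0, 2 * np.pi / omega, 5):
    theta, theta_dot = expm(A * t) @ x0
    # analytic small-angle solution: theta(t) = theta0 * cos(omega * t)
    print(f"t={t:.3f}  theta={theta:+.6f}  expected={theta0 * np.cos(omega * t):+.6f}")
```

The two columns agree, and the state traces an ellipse (a circle after rescaling by ω) in the (θ, θ̇) phase plane, as described above.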
All this can be read directly off the graph (in this case, which is sufficiently low-dimensional that we can graph it).

One key feature of the system we can look at is its fixed points: those points x* in state space where ẋ = 0. If the system is at a fixed point, it won't move from there; hence the name. Since ẋ = Ax, that means Ax* = 0, so the fixed points are those in the nullspace of A. With linear systems, you'll either have a single fixed point at x = 0 (if A is nonsingular), or an infinity of fixed points along a hyperplane passing through the origin (if A is singular).

An important question is that of the stability of fixed points. If the system is exactly at a fixed point, it won't change; but what happens if it's very close to a fixed point: where will it go? The standard framework for approaching this question comes from the Russian mathematician Aleksandr Mikhailovich Lyapunov, who investigated nonlinear stability analysis in the late 1800s; the following definitions are due to him.

1.1 Lyapunov stability

The following all assume the fixed point x* under consideration is at 0. For arbitrary x*, |x| should be replaced by |x - x*| in the definitions below.

• A fixed point x* is Lyapunov stable if, when the system starts sufficiently close to x*, it will stay arbitrarily close to it for all time thereafter. The formal definition is: ∀ R > 0 ∃ r > 0 s.t. |x(0)| < r ⇒ |x(t)| < R ∀ t ≥ 0.

• If a fixed point is not stable in this sense, it is unstable. That is, there is at least one spherical neighborhood around the fixed point such that you can't get the system to stay within it forever, no matter how close you start it.

• A fixed point is attracting if, when you start sufficiently close to it, the system will converge to the fixed point as t → ∞. Formally, ∃ r > 0 s.t. |x(0)| < r ⇒ lim_{t→∞} x(t) = x*. Note that in nonlinear systems, a fixed point can be attracting without being stable in the Lyapunov sense.
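The nullspace characterization of fixed points is easy to illustrate. A Python sketch (using SciPy's null_space; the matrices here are made-up examples, not from the notes):

```python
import numpy as np
from scipy.linalg import null_space

A_nonsing = np.array([[0.0, 1.0],
                      [-1.0, 0.0]])   # nonsingular: only fixed point is x = 0
A_sing = np.array([[1.0, 2.0],
                   [2.0, 4.0]])       # singular (rank 1): a whole line of fixed points

print(null_space(A_nonsing).shape[1])  # 0 -> nullspace contains only the origin
N = null_space(A_sing)                 # orthonormal basis for the fixed-point line
print(N.shape[1])                      # 1 -> one-dimensional nullspace
print(np.allclose(A_sing @ N, 0))      # True: every point on that line satisfies Ax = 0
```

The nonsingular matrix has a trivial nullspace (the origin is the unique fixed point), while the singular one has a one-dimensional nullspace: a line of fixed points through the origin, as stated above.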
For instance, in the system θ̇ = 1 - cos θ, the system will always go to θ = 0 (identifying θ with θ + 2π) at infinite time, but you can start it with θ as small and positive as you want and it'll go on an extended excursion first, going to the fixed point the long way around; it won't stay within a small ball around 0.

• If a fixed point is Lyapunov stable but not attracting, it is called marginally or neutrally stable. For instance, a ball sitting on a table: it's at a fixed point, you can displace it a tiny bit, and it won't return to the original fixed point, nor will it move away from it.

• If a fixed point is both Lyapunov stable and attracting, it is called asymptotically stable.
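The long-way-around excursion in the θ̇ = 1 - cos θ example can be seen numerically. A simple forward-Euler sketch in Python (the step size, initial condition, and integration time are arbitrary choices, not from the notes):

```python
import math

theta, dt = 0.1, 1e-3        # start arbitrarily close to the fixed point at 0
for _ in range(2_000_000):   # integrate theta_dot = 1 - cos(theta) forward in time
    theta += dt * (1.0 - math.cos(theta))

# The state did not stay near 0: it travelled the long way around the circle,
# and is now approaching the fixed point from just below 2*pi.
print(theta)
```

Because θ̇ = 1 - cos θ is nonnegative, θ increases monotonically from 0.1 all the way around toward 2π; no matter how small the starting angle, the trajectory leaves any small ball around 0 before returning, which is exactly why this fixed point is attracting but not Lyapunov stable.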

