CDS110a - Wednesday 27 October 2004

Note: there was indeed a unit problem with the reachability matrix in the last lecture; see the FAQ.

1. Quick review of reachability and pole placement
2. State feedback vs. output feedback; note that state feedback is often not the natural design paradigm
   a. Note that stabilization often does not require full knowledge of the state, but state steering and output tracking generally will
   b. Example - stabilize the origin for a simple harmonic oscillator - clearly sufficient just to know the velocity for damping
   c. Example - steering in the predator-prey system, e.g., would seem to require full state knowledge
3. Can the state be determined from the output signal?
   a. Multiple outputs with C a square matrix - invertible?
   b. Multiple or single output with C rectangular - sometimes still okay if K has low "support" (just a remark)
   c. Look at an example with C not invertible, but where intuitively you should be okay
4. State estimation by derivatives, as in the readings
   a. Observability matrix and observability
   b. Note problems with this approach in finite precision
5. State observer with innovations
   a. Derive
   b. Show that this reduces to a pole-placement problem
   c. Same observability criterion as above...
   d. Mention duality with the controllability scenario
6. Output feedback; theorem on pole placement with an observer

A quick review of reachability

Last time we saw that a linear system
    ẋ = Ax + Bu
is reachable if the reachability matrix
    Wr = [B  AB  A²B  ...  A^(n-1)B]
has full rank. This means that for any initial state x(0) = x0, desired final state xf, and 'target time' T, it is possible to find a control input u(t), t ∈ [0, T], that steers the system to reach x(T) = xf.
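The rank test can be sketched numerically. Below is a minimal Python illustration using plain nested lists (no libraries); the double-integrator A and B are an illustrative choice, not a system from the lecture, and for a 2x2 reachability matrix the rank check reduces to a determinant test:

```python
# Sketch: build the reachability matrix W_r = [B, AB] for a two-state,
# single-input system and test full rank via the determinant (valid
# here because W_r is 2x2). Matrices are plain nested lists of rows.

def mat_vec(A, v):
    """Multiply an n x n matrix (list of rows) by a column vector."""
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

def reachability_matrix(A, B):
    """Columns B, AB, A^2 B, ..., A^(n-1) B for a single-input system."""
    n = len(A)
    cols = [B]
    for _ in range(n - 1):
        cols.append(mat_vec(A, cols[-1]))
    # Assemble the column vectors into a matrix (row-major).
    return [[c[i] for c in cols] for i in range(n)]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

# Illustrative reachable example: double integrator x1' = x2, x2' = u.
A = [[0.0, 1.0], [0.0, 0.0]]
B = [0.0, 1.0]
Wr = reachability_matrix(A, B)
print(Wr)        # [[0.0, 1.0], [1.0, 0.0]]
print(det2(Wr))  # -1.0, nonzero => full rank => reachable
```

For n > 2 one would replace `det2` with a proper rank computation, but the construction of Wr is the same.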
If a system is reachable, it is furthermore possible to solve the pole-placement problem, in which we want to design a state feedback law
    u = Kx
such that we can pick any eigenvalues we want for the controlled dynamics
    ẋ = Ax + Bu = (A + BK)x.
Here A and B are given, and we must find a K to achieve the desired eigenvalues for A + BK.

Before moving on, let's look at a simple example (from Åström and Murray, Ex. 5.3) of a system that is not reachable:
    d/dt [x1; x2] = [-1 0; 0 -1][x1; x2] + [1; 1]u.
Here we can easily compute
    Wr = [1 -1; 1 -1],
which clearly has determinant zero. This can be understood by noting the complete 'symmetry' of the way that u modifies the evolution of x1 and x2. For example, if x1(0) = x2(0), there is no way to use u to achieve x1(T) ≠ x2(T) at any later time.

State feedback versus output feedback

Note that in our discussion of stabilization and pole placement so far, we have assumed that it makes sense to design a control law of the form
    u = Kx.
This is called a 'state feedback' law since, in order to determine the control input u(t) at time t, we generally need to have full knowledge of the state x(t). In practice this is often not possible, and thus we usually specify the available output signals when defining a control design problem:
    ẋ = Ax + Bu,
    y = Cx.
Here the output signal y(t), which can in principle be a vector of any dimension, represents the information about the evolving system state that is made available to the controller via sensors. An 'output feedback' law must take the form
    u(t) = f(y(τ), τ ≤ t),
where, in general, we can allow u(t) to depend on the entire history of y(τ) with τ ≤ t (more on this below and later in the course). Output feedback is a natural setting for practical applications. For example, if we are talking about cruise control for an automobile, x may represent a complex set of variables having to do with the internal state of the engine, wheels and chassis, while y is only a readout from the speedometer.
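The pole-placement step can be made concrete on a small example. The sketch below (the double integrator and the pole locations are illustrative choices, not from the lecture) follows the convention used in these notes, u = Kx with closed loop A + BK, and finds K by matching characteristic-polynomial coefficients, which is straightforward because this A is in companion form:

```python
# Sketch: pole placement for the double integrator x1' = x2, x2' = u,
# under the feedback convention u = Kx, so the closed loop is A + BK.

def place_double_integrator(p1, p2):
    """Return K = [k1, k2] so that A + BK has eigenvalues p1, p2,
    where A = [[0, 1], [0, 0]] and B = [0, 1]^T.
    A + BK = [[0, 1], [k1, k2]] has char. poly s^2 - k2*s - k1,
    while the target is (s - p1)(s - p2) = s^2 - (p1 + p2)*s + p1*p2."""
    k2 = p1 + p2
    k1 = -p1 * p2
    return [k1, k2]

K = place_double_integrator(-1.0, -2.0)
print(K)  # [-2.0, -3.0]

# Check: the eigenvalues of A + BK = [[0, 1], [-2, -3]] come from its
# characteristic polynomial s^2 + 3s + 2 = (s + 1)(s + 2).
k1, k2 = K
trace, det = k2, -k1                      # trace and det of A + BK
disc = (trace**2 - 4 * det) ** 0.5
eigs = sorted([(trace - disc) / 2, (trace + disc) / 2])
print(eigs)  # [-2.0, -1.0]
```

For general (A, B) pairs one would use a library routine (e.g. a `place`-style function) rather than hand-matching coefficients, but the principle is the same.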
Hopefully it will seem natural that it is usually prohibitively difficult to install a sensor to monitor every coordinate of the system's state space, and also that it will often be unnecessary to do so (cruise control electronics can function quite well with just the car's speed).

One simple example of a system in which full state knowledge is clearly not necessary is stabilization of a simple harmonic oscillator. If the natural dynamics of the plant is
    m ẍ = -kx,
and our actuation mechanism is to apply forces directly on the mass, then the control system looks like
    d/dt [x1; x2] = [0 1; -k/m 0][x1; x2] + [0; 1]u
(where x1 is now the position and x2 the velocity). We can clearly stabilize the equilibrium point at the origin by the feedback law
    u = -b x2 = (0  -b)[x1; x2],
which makes the overall equation of motion
    ẍ1 = -(k/m) x1 - b ẋ1,
which we recognize as a damped harmonic oscillator. Thus it is clear that the controller only needs to know the velocity of the oscillator in order to implement a successful feedback strategy. So even if we go to a SISO output feedback formulation of this problem,
    ẋ = Ax + Bu,
    y = Cx,
we are obviously fine for any C of the form
    C = (0  γ),  γ ≠ 0,
since x2 = y/γ and we can implement an output-feedback law of the form
    u = -b x2 = -(b/γ) y.

In contrast to this, imagine a steering problem for the predator-prey system that we talked about in the last lecture. Suppose for instance we want to design a controller that will take the fox and rabbit populations from an arbitrary initial state at t = 0 to some specific final state such as (36, 51) at time t = T. Even if we restrict our attention to the immediate vicinity of the natural equilibrium point, and assume that a linearized model is sufficient for the design, it seems quite unlikely that we could succeed without requiring knowledge of both the fox and rabbit populations at time t = 0.

Clearly, if C is a square matrix and y has the same dimension as x, everything will be easy if C is invertible.
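The velocity-only feedback law above is easy to check in simulation. A minimal sketch with forward Euler integration (the numerical values of k/m, b, γ, and the step size are illustrative assumptions):

```python
# Sketch: output feedback for the oscillator using only y = gamma * x2.
# The controller recovers x2 = y / gamma and applies u = -b * x2,
# i.e. u = -(b / gamma) * y, which damps the motion toward the origin.

k_over_m = 1.0   # spring constant over mass (illustrative)
b = 0.5          # damping gain (illustrative)
gamma = 2.0      # output scaling: C = (0, gamma)
dt = 0.001       # Euler step size

x1, x2 = 1.0, 0.0                              # initial position, velocity
e0 = 0.5 * x2**2 + 0.5 * k_over_m * x1**2      # initial "energy"
for _ in range(20000):                         # simulate 20 seconds
    y = gamma * x2                             # sensor reads scaled velocity only
    u = -(b / gamma) * y                       # output feedback, equals -b * x2
    x1, x2 = x1 + dt * x2, x2 + dt * (-k_over_m * x1 + u)
e_final = 0.5 * x2**2 + 0.5 * k_over_m * x1**2
print(e_final < 0.01 * e0)  # energy has decayed: the origin is stabilized
```

The point of the sketch is only that the controller never touches x1 directly, yet the full state converges to the origin.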
As a generalization of what we did for the simple harmonic oscillator above, we could just design a state feedback controller K, set
    x = C⁻¹y,
and apply feedback
    u = Kx = KC⁻¹y.
However, this is a special case and not the sort of convenience we want to count on!

State estimation

At this point it might seem like we would need completely new theorems about reachability and pole placement for output-feedback laws, when u(t) is only allowed to depend on y(τ), τ ≤ t. However, it turns out that we can build naturally on our previous results by appealing to a separation method. The basic idea is that we will try to construct a procedure for processing the data y(τ), τ ≤ t to obtain an estimate x̂(t) of the true system state x(t), and then apply a state feedback law u = Kx̂ as if the estimate were the true state.
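As a hedged preview of the observer with innovations listed in the outline (item 5), one standard way to build such an estimate is to run a model copy of the plant driven by the "innovation" y - Cx̂,
    x̂' = Ax̂ + Bu + L(y - Cx̂),
so that the error e = x - x̂ obeys e' = (A - LC)e and L can be chosen by pole placement. The sketch below uses the undamped oscillator from above with k/m = 1, velocity measurement C = (0, 1), and u = 0; the gain L and the pole locations are illustrative assumptions, not values from the lecture:

```python
# Sketch: Luenberger-style observer for the oscillator x1' = x2,
# x2' = -x1, measuring only y = x2. With L = [-3, 4], the error
# dynamics A - L*C = [[0, 4], [-1, -4]] has both eigenvalues at -2.

dt = 0.001
L = [-3.0, 4.0]     # observer gain (illustrative pole placement at -2, -2)

x = [1.0, 0.0]      # true state (unknown to the observer)
xhat = [0.0, 0.0]   # estimate starts off wrong
e0 = abs(x[0] - xhat[0]) + abs(x[1] - xhat[1])
for _ in range(10000):                        # simulate 10 seconds
    y = x[1]                                  # measured output: velocity
    innov = y - xhat[1]                       # innovation y - C*xhat
    # Euler step for the plant (x1' = x2, x2' = -x1) ...
    x = [x[0] + dt * x[1], x[1] + dt * (-x[0])]
    # ... and for the observer copy, corrected by L times the innovation.
    xhat = [xhat[0] + dt * (xhat[1] + L[0] * innov),
            xhat[1] + dt * (-xhat[0] + L[1] * innov)]
e_final = abs(x[0] - xhat[0]) + abs(x[1] - xhat[1])
print(e_final < 0.01 * e0)  # estimate has converged to the true state
```

Note that choosing L here is itself a pole-placement problem for A - LC, which is where the duality with the controllability scenario mentioned in the outline comes in.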