Berkeley COMPSCI 287 - Control 1: Feedforward, feedback

CS 287: Advanced Robotics, Fall 2009
Lecture 2: Control 1: Feedforward, feedback, PID, Lyapunov direct method
Pieter Abbeel, UC Berkeley EECS

Announcements
- Office hours: Thursdays 2-3pm + by email arrangement, 746 SDH. The SDH 7th floor should be unlocked during office hours on Thursdays.
- Questions about last lecture?

CS 287 Advanced Robotics
- Control
- Estimation
- Manipulation/Grasping
- Reinforcement Learning
- Misc. Topics
- Case Studies

Control in CS 287
- Overarching goal: understand what makes control problems hard, and what techniques we have available to tackle the hard (and the easy) problems.
- Any applicability of control outside robotics? Yes, many! Process industry, feedback in nature, networks and computing systems, economics, ...
  [See, e.g., Chapter 1 of Astrom and Murray, http://www.cds.caltech.edu/~murray/amwiki/Main_Page, for more details --- optional reading. Fwiw: Astrom and Murray is a great read on mostly classical feedback control and is freely available at the above link.]
- We will not have time to study these application areas within CS 287 [except perhaps in your final project!].

Today's lecture
- Feedforward vs. feedback
- PID (Proportional Integral Derivative) control
- Lyapunov direct method --- a method that can be helpful in proving guarantees about controllers
- Reading materials: Astrom and Murray, 10.3; Tedrake, 1.2. Optional: Slotine and Li, Example 3.21.
- "Based on a survey of over eleven thousand controllers in the refining, chemicals and pulp and paper industries, 97% of regulatory controllers utilize PID feedback." --- L. Desborough and R. Miller, 2002 [DM02]. [Quote from Astrom and Murray, 2009]
- Practical result: we can build a trajectory controller for a fully actuated robot arm.
- Our abstraction: the control input is a torque sent to the motor, and we read out the joint angle. [In practice: voltages and encoder values.]

Intermezzo: Unconventional (?) robot arm use

Single link manipulator (aka the simple pendulum)
[Figure: a pendulum with joint angle θ, link length l, mass m under gravity g, and torque input u at the pivot.]

    I \ddot{\theta}(t) + b \dot{\theta}(t) + m g l \sin\theta(t) = u(t),    with I = m l^2.

Single link manipulator
- How do we hold the arm at θ = 45 degrees?
- Simulation results for the constant feedforward torque u = m g l \sin(\pi/4), starting from
      \theta(0) = \pi/4, \dot{\theta}(0) = 0    and from    \theta(0) = 0, \dot{\theta}(0) = 0.
- Can we do better than this?
- The Matlab code that generated all discussed simulations will be posted on the course website.

Feedforward control
- How do we make the arm follow a trajectory θ*(t), starting from \theta(0) = 0, \dot{\theta}(0) = 0?
- Feedforward control: apply the torque that the model says is needed along the desired trajectory,
      u(t) = I \ddot{\theta}^*(t) + b \dot{\theta}^*(t) + m g l \sin\theta^*(t).
- Simulation results for I \ddot{\theta}(t) + b \dot{\theta}(t) + m g l \sin\theta(t) = u(t) with \theta(0) = 0, \dot{\theta}(0) = 0 under this feedforward law (a stand-in code sketch follows below).
- Can we do better than this?
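The Matlab simulation code referenced above is not included in this excerpt. As a stand-in, here is a minimal Python sketch of the same kind of experiment; the parameter values, the particular desired trajectory θ*(t), and the use of SciPy's solve_ivp are all illustrative choices, not taken from the lecture. It integrates the single link manipulator dynamics under the pure feedforward law u(t) = I θ̈*(t) + b θ̇*(t) + m g l sin θ*(t).

```python
# Minimal sketch (assumed parameters; NOT the course's Matlab code):
# simulate  I*theta_ddot + b*theta_dot + m*g*l*sin(theta) = u
# under the pure feedforward law
#   u(t) = I*theta_ddot_des(t) + b*theta_dot_des(t) + m*g*l*sin(theta_des(t)).
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameter values (not from the slides)
m, l, b, g = 1.0, 1.0, 0.5, 9.81
I = m * l ** 2

# An arbitrary smooth desired trajectory theta*(t) that starts at rest at 0
A, w = np.pi / 4, 1.0
theta_star      = lambda t: A * (1.0 - np.cos(w * t))
theta_star_dot  = lambda t: A * w * np.sin(w * t)
theta_star_ddot = lambda t: A * w ** 2 * np.cos(w * t)

def u_feedforward(t):
    """Torque the model says is needed along the desired trajectory."""
    return (I * theta_star_ddot(t)
            + b * theta_star_dot(t)
            + m * g * l * np.sin(theta_star(t)))

def dynamics(t, x):
    theta, theta_dot = x
    u = u_feedforward(t)
    theta_ddot = (u - b * theta_dot - m * g * l * np.sin(theta)) / I
    return [theta_dot, theta_ddot]

# Start at rest at theta(0) = 0, as in the slides
sol = solve_ivp(dynamics, (0.0, 10.0), [0.0, 0.0], max_step=0.01)
print("max |theta - theta*| =", np.max(np.abs(sol.y[0] - theta_star(sol.t))))
```

With an exact model and matched initial conditions the tracking error stays near zero, which is exactly why the next slides ask what happens when the model is wrong.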
n DOF (degrees of freedom) manipulator?
- Thus far: the single link manipulator, I \ddot{\theta}(t) + b \dot{\theta}(t) + m g l \sin\theta(t) = u(t).
- n DOF manipulator: the standard manipulator equations
      H(q) \ddot{q} + C(q, \dot{q}) + G(q) = B(q) u
- H: the "inertia matrix," full rank.
- B: the identity matrix if every joint is actuated.
- Given a trajectory q(t), we can readily solve for the feedforward controls u(t) for all times t.

Fully-Actuated vs. Underactuated
- A system is fully actuated in a certain state (q, \dot{q}, t) if, when in that state, it can be controlled to instantaneously accelerate in any direction.
- Many systems of interest are of the form
      \ddot{q} = f_1(q, \dot{q}, t) + f_2(q, \dot{q}, t) u.    (1)
- Defn. Fully actuated: A control system described by Eqn. (1) is fully actuated in state (q, \dot{q}, t) if it is able to command an instantaneous acceleration in an arbitrary direction in q:
      rank f_2(q, \dot{q}, t) = dim q.
- Defn. Underactuated: A control system described by Eqn. (1) is underactuated in configuration (q, \dot{q}, t) if it is not able to command an instantaneous acceleration in an arbitrary direction in q:
      rank f_2(q, \dot{q}, t) < dim q.
- [See also Tedrake, Section 1.2.]
- Hence, for any fully actuated system, we can follow a trajectory by simply solving for u(t):
      u(t) = f_2^{-1}(q, \dot{q}, t) (\ddot{q} - f_1(q, \dot{q}, t)).
- [We can also transform the system into a linear one through a change of variables from u to v:
      u(t) = f_2^{-1}(q, \dot{q}, t) (v(t) - f_1(q, \dot{q}, t))    gives    \ddot{q}(t) = v(t).
  The literature on control for linear systems is very extensive, so this can be useful. This is an example of feedback linearization. More on this in future lectures.]
- n DOF manipulator: H(q) \ddot{q} + C(q, \dot{q}) + G(q) = B(q) u, so f_2 = H^{-1} B; since H is full rank, rank(H^{-1} B) = rank(B).
- All joints actuated: B = I, rank(B) = n  =>  fully actuated.
- Only p < n joints actuated: rank(B) = p  =>  underactuated.

Example underactuated systems
- Car
- Cart-pole
- Acrobot
- Helicopter

Fully actuated systems: is our feedforward control solution sufficient in practice?
- Single link manipulator: I \ddot{\theta}(t) + b \dot{\theta}(t) + m g l \sin\theta(t) = u(t).
- Task: hold the arm at 45 degrees. What if the model parameters are off --- by 5%, 10%, 20%, ...? What is the effect of perturbations?
- Task: hold the arm at 45 degrees. Mass off by 10%: steady-state error.
- Task: swing the arm up to 180 degrees and hold it there. Perturbation after 1 sec: the arm does **not** recover. [θ = 180 degrees is an "unstable" equilibrium point.]

Proportional control
- Add proportional feedback on the angle error to the feedforward torque:
      u(t) = u_{feedforward}(t) + K_p (q_{desired}(t) - q(t)).
- Task: hold the arm at 45 degrees. The simulation compares u(t) = u_{feedforward}(t) with u(t) = u_{feedforward}(t) + K_p (q_{desired}(t) - q(t)).
- Task: swing the arm up to 180 degrees and hold it there. Same comparison.

Current status
- Feedback can provide:
  - Robustness to model errors
  - Stabilization around states which are unstable in open loop
- However, still:
  - Overshoot issues --- ignoring momentum/velocity!
  - Steady-state error --- simply crank up the gain?

PD control
- Add derivative (velocity-error) feedback (see the sketch after this section):
      u(t) = K_p (q_{desired}(t) - q(t)) + K_d (\dot{q}_{desired}(t) - \dot{q}(t)).
- Eliminate steady-state error by adding an integral term (the "I" in PID).
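To make the feedback discussion concrete, here is a companion Python sketch using the same illustrative parameters as the earlier snippet; the 10% mass error matches the experiment described above, but the gains K_p and K_d are made-up values, not taken from the slides. It compares pure feedforward against feedforward plus PD feedback for holding the arm at 45 degrees when the controller's mass estimate is wrong.

```python
# Sketch only (illustrative parameters and gains): hold the arm at 45 degrees
# when the controller's mass estimate is off by 10%, comparing
#   (a) pure feedforward:   u = m_hat*g*l*sin(theta_des)
#   (b) feedforward + PD:   u = m_hat*g*l*sin(theta_des)
#                               + Kp*(theta_des - theta) + Kd*(0 - theta_dot)
import numpy as np
from scipy.integrate import solve_ivp

# True plant parameters (illustrative)
m, l, b, g = 1.0, 1.0, 0.5, 9.81
I = m * l ** 2
m_hat = 1.1 * m                 # controller's (wrong) mass estimate

theta_des = np.pi / 4           # hold at 45 degrees
Kp, Kd = 20.0, 5.0              # made-up gains

def make_dynamics(use_feedback):
    def dynamics(t, x):
        theta, theta_dot = x
        u = m_hat * g * l * np.sin(theta_des)        # feedforward, wrong mass
        if use_feedback:
            u += Kp * (theta_des - theta) + Kd * (0.0 - theta_dot)
        theta_ddot = (u - b * theta_dot - m * g * l * np.sin(theta)) / I
        return [theta_dot, theta_ddot]
    return dynamics

for use_feedback in (False, True):
    sol = solve_ivp(make_dynamics(use_feedback), (0.0, 30.0),
                    [theta_des, 0.0], max_step=0.01)
    err_deg = np.degrees(sol.y[0][-1] - theta_des)
    print(f"PD feedback = {use_feedback}: steady-state error ~ {err_deg:.2f} deg")
```

Pure feedforward settles several degrees away from 45, while the PD terms shrink --- but do not eliminate --- the offset, consistent with the "simply crank up the gain?" remark above.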

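For reference (this is the textbook form from Astrom and Murray, Chapter 10, not an equation shown in the excerpt above), the integral term mentioned in the last bullet leads to the full PID law, written with the error e(t) = q_{desired}(t) - q(t):

    u(t) = K_p e(t) + K_i \int_0^t e(\tau)\, d\tau + K_d \dot{e}(t).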
