Inverse Kinematics (part 2)
CSE169: Computer Animation
Instructor: Steve Rotenberg
UCSD, Winter 2004

Forward Kinematics
We will use the vector
Φ = [φ1 φ2 ⋯ φM]
to represent the array of M joint DOF values. We will also use the vector
e = [e1 e2 ⋯ eN]
to represent an array of N DOFs that describe the end effector in world space. For example, if our end effector is a full joint with orientation, e would contain 6 DOFs: 3 translations and 3 rotations. If we were only concerned with the end effector position, e would just contain the 3 translations.
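As a concrete sketch of these two vectors, consider a planar 2-link arm (the link lengths and geometry here are illustrative assumptions, not from the slides): Φ holds M = 2 joint angles and e holds N = 2 end effector position DOFs.

```python
import numpy as np

def forward_kinematics(phi, L1=1.0, L2=1.0):
    """Map joint DOFs phi = [phi1, phi2] to effector DOFs e = [ex, ey].

    Hypothetical planar 2-link arm; phi2 is measured relative to link 1.
    """
    phi1, phi2 = phi
    ex = L1 * np.cos(phi1) + L2 * np.cos(phi1 + phi2)
    ey = L1 * np.sin(phi1) + L2 * np.sin(phi1 + phi2)
    return np.array([ex, ey])

# With both joint angles zero the arm is fully extended along x,
# so the end effector sits at (L1 + L2, 0).
e = forward_kinematics(np.array([0.0, 0.0]))
```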
Forward & Inverse Kinematics
The forward kinematic function f computes the world space end effector DOFs from the joint DOFs:
e = f(Φ)
The goal of inverse kinematics is to compute the vector of joint DOFs that will cause the end effector to reach some desired goal state:
Φ = f⁻¹(e)

Welman, 1993
"Inverse Kinematics and Geometric Constraints for Articulated Figure Manipulation", Chris Welman, 1993
Master's thesis on IK algorithms
Examines Jacobian methods and cyclic coordinate descent (CCD)
Please read sections 1-4 (about 40 pages)

Gradient Descent
We want to find the value of x that causes f(x) to equal some goal value g.
We will start at some value x0 and keep taking small steps:
xi+1 = xi + Δx
until we find a value xN that satisfies f(xN) = g.
For each step, we try to choose a value of Δx that will bring us closer to our goal.
We can use the derivative to approximate the function nearby, and use this information to move 'downhill' towards the goal.

Gradient Descent for f(x)=g
[Figure: plot of f(x) vs. x showing the current estimate xi, its value f(xi), the slope df/dx, the goal value g, and the next estimate xi+1]

Minimization
If f(xi) − g is not 0, the value of f(xi) − g can be thought of as an error. The goal of gradient descent is to minimize this error, and so we can refer to it as a minimization algorithm.
Each step Δx we take results in the function changing its value. We will call this change Δf.
Ideally, we could have Δf = g − f(xi).
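The iteration above can be sketched for a scalar f; the target function, goal value, and step scale in this example are illustrative assumptions (the scale factor corresponds to the safe-stepping parameter β used in this lecture):

```python
def gradient_descent(f, dfdx, g, x0, beta=0.5, tol=1e-6, max_iters=100):
    """Step x until f(x) is within tol of the goal g.

    Each step is Delta-x = beta * (g - f(x)) / (df/dx): the current error
    times the inverse of the derivative, scaled by 0 <= beta <= 1.
    """
    x = x0
    for _ in range(max_iters):
        error = g - f(x)
        if abs(error) < tol:
            break
        x += beta * error / dfdx(x)
    return x

# Example: solve x**2 = 2 starting from x0 = 1 (converges to sqrt(2)).
root = gradient_descent(lambda x: x * x, lambda x: 2 * x, g=2.0, x0=1.0)
```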
In other words, we want to take a step Δx that causes Δf to cancel out the error.
More realistically, we will just hope that each step brings us closer, and we can eventually stop when we get 'close enough'.
This iterative process of successive approximations is common to many numerical algorithms.

Taking Safe Steps
Sometimes, we are dealing with non-smooth functions with varying derivatives, so our simple linear approximation is not very reliable for large values of Δx.
There are many approaches to choosing a more appropriate (smaller) step size.
One simple modification is to add a parameter β to scale our step (0 ≤ β ≤ 1):
Δx = β (g − f(xi)) (df/dx)⁻¹

Inverse of the Derivative
By the way, for scalar derivatives:
(df/dx)⁻¹ = dx/df = 1 / (df/dx)

Gradient Descent Algorithm
x0 = initial starting value
f0 = f(x0)                      // evaluate f at starting value
while (|g − fi| > threshold)
{
    si = df/dx evaluated at xi  // compute slope
    xi+1 = xi + β(g − fi)/si    // take step along x
    fi+1 = f(xi+1)              // evaluate f at new xi+1
}

Jacobians
A Jacobian is a vector derivative with respect to another vector.
If we have a vector-valued function f(x) of a vector of variables x, the Jacobian is a matrix of partial derivatives: one partial derivative for each combination of components of the vectors.
The Jacobian matrix contains all of the information necessary to relate a change in any component of x to a change in any component of f.
The Jacobian is usually written as J(f, x), but you can really just think of it as df/dx.

Jacobians
J(f, x) = df/dx =
[ ∂f1/∂x1  ∂f1/∂x2  ...  ∂f1/∂xN ]
[ ∂f2/∂x1  ∂f2/∂x2  ...  ∂f2/∂xN ]
[   ...                    ...   ]
[ ∂fM/∂x1  ∂fM/∂x2  ...  ∂fM/∂xN ]

Jacobian Inverse Kinematics

Jacobians
Let's say we have a simple 2D robot arm with two 1-DOF rotational joints φ1 and φ2, and end effector position e = [ex ey].

Jacobians
The Jacobian matrix J(e, Φ) shows how each component of e varies with respect to each joint angle:
J(e, Φ) = [ ∂ex/∂φ1  ∂ex/∂φ2 ]
          [ ∂ey/∂φ1  ∂ey/∂φ2 ]

Jacobians
Consider what would happen if we increased φ1 by a small amount.
What would happen to e?
∂e/∂φ1 = [ ∂ex/∂φ1   ∂ey/∂φ1 ]

Jacobians
What if we increased φ2 by a small amount?
∂e/∂φ2 = [ ∂ex/∂φ2   ∂ey/∂φ2 ]

Jacobian for a 2D Robot Arm
J(e, Φ) = [ ∂ex/∂φ1  ∂ex/∂φ2 ]
          [ ∂ey/∂φ1  ∂ey/∂φ2 ]

Jacobian Matrices
Just as a scalar derivative df/dx of a function f(x) can vary over the domain of possible values for x, the Jacobian matrix J(e, Φ) varies over the domain of all possible poses for Φ.
For any given joint pose vector Φ, we can explicitly compute the individual components of the Jacobian matrix.

Incremental Change in Pose
Let's say we have a vector ΔΦ that represents a small change in joint DOF values.
We can approximate what the resulting change in e would be:
Δe ≈ (de/dΦ) ΔΦ = J(e, Φ) ΔΦ

Incremental Change in Effector
What if we wanted to move the end effector by a small amount Δe?
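As a check on the approximation Δe ≈ J ΔΦ, here is a minimal sketch for the 2D arm above (link lengths, pose, and step values are illustrative assumptions): build J(e, Φ) from the analytic partial derivatives and compare J·ΔΦ against the true change in e.

```python
import numpy as np

def end_effector(phi, L1=1.0, L2=1.0):
    """e = [ex, ey] for an assumed planar 2-link arm."""
    return np.array([
        L1 * np.cos(phi[0]) + L2 * np.cos(phi[0] + phi[1]),
        L1 * np.sin(phi[0]) + L2 * np.sin(phi[0] + phi[1]),
    ])

def jacobian(phi, L1=1.0, L2=1.0):
    """J(e, Phi): entry (i, j) is the partial derivative of e_i w.r.t. phi_j."""
    s1, c1 = np.sin(phi[0]), np.cos(phi[0])
    s12, c12 = np.sin(phi[0] + phi[1]), np.cos(phi[0] + phi[1])
    return np.array([
        [-L1 * s1 - L2 * s12, -L2 * s12],
        [ L1 * c1 + L2 * c12,  L2 * c12],
    ])

phi = np.array([0.3, 0.4])           # current pose (illustrative)
dphi = np.array([0.01, -0.02])       # small change in joint DOFs
de_approx = jacobian(phi) @ dphi     # Delta-e ~= J * Delta-Phi
de_true = end_effector(phi + dphi) - end_effector(phi)
# The two agree to first order; the gap shrinks as dphi shrinks.
```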