UT CS 395T - Landmark mapping

Lecture 8: Landmark Mapping
CS 395T: Intelligent Robotics
Benjamin Kuipers

Landmark Map
• Locations and uncertainties of n landmarks, with respect to a specific frame of reference.
– World frame: fixed origin point
– Robot frame: origin at the robot
• Problem: how to combine new information with old to update the map.

A Spatial Relationship is a Vector
• A spatial relationship holds between two poses: the position and orientation of one, in the frame of reference of the other.
• x_AB = [x_AB, y_AB, θ_AB]^T

Uncertain Spatial Relationships
• An uncertain spatial relationship is described by a probability distribution of vectors, with a mean and a covariance matrix.
• Mean: x̂ = E[x] = [x̂, ŷ, θ̂]^T
• Covariance: C(x) = E[(x − x̂)(x − x̂)^T], the 3×3 matrix
  [ σ_x²   σ_xy   σ_xθ ]
  [ σ_xy   σ_y²   σ_yθ ]
  [ σ_xθ   σ_yθ   σ_θ² ]

A Map with n Landmarks
• Concatenate n vectors into one big state vector x = [x_1, x_2, …, x_n]^T, with mean x̂ = [x̂_1, x̂_2, …, x̂_n]^T.
• And one big 3n×3n covariance matrix:
  [ C(x_1)      C(x_1,x_2)   …   C(x_1,x_n) ]
  [ C(x_2,x_1)  C(x_2)       …   C(x_2,x_n) ]
  [   ⋮            ⋮          ⋱     ⋮       ]
  [ C(x_n,x_1)  C(x_n,x_2)   …   C(x_n)     ]

Example
• The robot senses object #1.
• The robot moves.
• The robot senses a different object #2.
• Now the robot senses object #1 again.
• After each step, what does the robot know (in its landmark map) about each object, including itself?

Robot Senses Object #1 and Moves
Robot Senses Object #2
Robot Senses Object #1 Again
Updated Estimates After Constraint

Compounding
• Relationships: x_AB = [x_AB, y_AB, θ_AB]^T, x_BC = [x_BC, y_BC, θ_BC]^T, x_AC = [x_AC, y_AC, θ_AC]^T

Rotation Matrix
• [ cos θ  −sin θ ] [ x_G ]   [ x_B ]
  [ sin θ   cos θ ] [ y_G ] = [ y_B ]
  with θ = θ_BG, rotating global coordinates into the body frame.

Compounding
• Let x_AB be the pose of object B in the frame of reference of A. (Sometimes written BA.)
• Given x_AB and x_BC, calculate x_AC.
• Compute C(x_AC) from C(x_AB), C(x_BC), and C(x_AB, x_BC). (See the paper for how.)
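The two steps just listed — compounding the means, then propagating the covariance through the Jacobians of the compounding formula — can be sketched as follows. This is a minimal sketch, not the paper's implementation: for simplicity it assumes the two relationships are independent, i.e. C(x_AB, x_BC) = 0; the paper handles the correlated case.

```python
import numpy as np

def compound(x_ab, x_bc):
    """Head-to-tail compounding x_AC = x_AB (+) x_BC for planar poses (x, y, theta)."""
    x1, y1, t1 = x_ab
    x2, y2, t2 = x_bc
    c, s = np.cos(t1), np.sin(t1)
    return np.array([x2 * c - y2 * s + x1,   # x_AC
                     x2 * s + y2 * c + y1,   # y_AC
                     t1 + t2])               # theta_AC

def compound_cov(x_ab, x_bc, C_ab, C_bc):
    """First-order estimate of C(x_AC), assuming C(x_AB, x_BC) = 0
    (independent relationships); see the paper for the correlated case."""
    _, _, t1 = x_ab
    x2, y2, _ = x_bc
    c, s = np.cos(t1), np.sin(t1)
    # Jacobians of the compounding formula w.r.t. x_AB and x_BC
    J1 = np.array([[1.0, 0.0, -x2 * s - y2 * c],
                   [0.0, 1.0,  x2 * c - y2 * s],
                   [0.0, 0.0,  1.0]])
    J2 = np.array([[c,   -s,  0.0],
                   [s,    c,  0.0],
                   [0.0, 0.0, 1.0]])
    return J1 @ C_ab @ J1.T + J2 @ C_bc @ J2.T
```

For example, compounding a quarter-turn pose with a one-meter forward step, compound([1, 0, π/2], [1, 0, 0]), gives [1, 1, π/2]: the step is taken in the rotated frame.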
• x_AC = x_AB ⊕ x_BC = [ x_BC cos θ_AB − y_BC sin θ_AB + x_AB,  x_BC sin θ_AB + y_BC cos θ_AB + y_AB,  θ_AB + θ_BC ]^T

Computing Covariance
• Consider the linear mapping y = Mx + b. Then
  C(y) = C(Mx + b)
       = E[(Mx + b − (Mx̂ + b))(Mx + b − (Mx̂ + b))^T]
       = E[M(x − x̂)(M(x − x̂))^T]
       = E[M(x − x̂)(x − x̂)^T M^T]
       = M E[(x − x̂)(x − x̂)^T] M^T
       = M C(x) M^T
• Apply this to nonlinear functions by using a Taylor series: linearize about x̂ and take M to be the Jacobian.

The Inverse Relationship
• Let x_AB be the pose of object B in the frame of reference of A.
• Given x_AB, calculate x_BA.
• Compute C(x_BA) from C(x_AB).
• x_BA = ⊖x_AB = [ −x_AB cos θ_AB − y_AB sin θ_AB,  x_AB sin θ_AB − y_AB cos θ_AB,  −θ_AB ]^T

Composite Relationships
• Compounding combines relationships head-to-tail: x_AC = x_AB ⊕ x_BC
• Tail-to-tail combinations come from observing two things from the same point: x_BC = (⊖x_AB) ⊕ x_AC
• Head-to-head combinations come from two observations of the same thing: x_AC = x_AB ⊕ (⊖x_CB)
• They provide new relationships between their endpoints.

Merging Information
• An uncertain observation of a pose is combined with previous knowledge using the extended Kalman filter.
– Previous knowledge: x_k⁻, P_k⁻
– New observation: z_k, R
• Update: x = x(new) ⊗ x(old)
• Can integrate dynamics as well.

EKF Update Equations
• Predictor step:
  x̂_k⁻ = f(x̂_{k−1}, u_k)
  P_k⁻ = A P_{k−1} A^T + Q
• Kalman gain:
  K_k = P_k⁻ H^T (H P_k⁻ H^T + R)⁻¹
• Corrector step:
  x̂_k = x̂_k⁻ + K_k (z_k − h(x̂_k⁻))
  P_k = (I − K_k H) P_k⁻

Reversing and Compounding
• OBJ1_R = (⊖ROBOT_W) ⊕ OBJ1_W
•        = WORLD_R ⊕ OBJ1_W

Sensing Object #2
• OBJ2_W = ROBOT_W ⊕ OBJ2_R

Observing Object #1 Again
• OBJ1_W = ROBOT_W ⊕ OBJ1_R

Combining Observations (1)
• OBJ1_W = OBJ1_W(new) ⊗ OBJ1_W(old)
• OBJ1_R = OBJ1_R(new) ⊗ OBJ1_R(old)

Combining Observations (2)
• ROBOT_W(new) = OBJ1_W ⊕ (⊖OBJ1_R)
• ROBOT_W = ROBOT_W(new) ⊗ ROBOT_W(old)

Useful for Feature-Based Maps
• Smith & Cheeseman [1986] and Smith, Self & Cheeseman [1990] developed the methods we've just seen.
• Leonard & Durrant-Whyte [1992] demonstrated excellent landmark mapping with sonar.
• FastSLAM [Montemerlo & Thrun, 2002] combines it with particle filtering.

Landmark-Based Mapping
• Suppose the environment consists of a set of isolated landmarks:
– Trees in the forest
– Rocks in the Martian desert
• Treat a landmark as a point location (x_k, y_k).
• SLAM: the robot learns the locations of the landmarks while localizing itself.

"Martian" Rocky Desert
The Real Martian Desert

Landmarks vs. Occupancy Grids
• An occupancy grid makes no assumptions about types of features.
– Here we assume point landmarks, but walls and other types of features are also possible.
• An occupancy grid (typically) has fixed resolution.
– Feature models can be arbitrarily precise.
• An occupancy grid takes space and time proportional to the size of the environment to be mapped.
– A feature-based map takes space and time reflecting the contents of the environment.

Kalman Filters for Features
• A landmark feature has parameters (x_i, y_i).
– The robot's pose has parameters (x, y, φ).
– The robot plus K landmarks needs 2K+3 parameters.
• To estimate these with a Kalman filter:
– The means require 2K+3 parameters.
– The covariance matrix needs (2K+3)².
• FastSLAM observes that the landmarks are independent, given the robot's pose.
– A 2×2 covariance matrix for each landmark.
– Total parameters: K(2 + 2×2) + 3

Landmark Poses Are Independent Given the Robot's Pose

Bayesian Model
• Robot poses: s^t = s_1, s_2, …, s_t
– (Slightly different from our usual notation.)
• Robot actions: u^t = u_1, u_2, …, u_t
• Observations: z^t = z_1, z_2, …, z_t
• Landmarks: Θ = θ_1, …, θ_K
• Correspondences: n^t = n_1, n_2, …, n_t
– n_t = k means z_t is an observation of θ_k
– Assume (without loss of generality) that landmarks are observed one at a time.

The SLAM Problem
• Estimate p(s^t, Θ | z^t, u^t, n^t) using
– action model: p(s_t | u_t, s_{t−1})
– sensor model: p(z_t | s_t, Θ, n_t)
• Independence lets us factor trajectory estimation p(s^t | z^t, u^t, n^t) from landmark estimation p(θ_k | s^t, z^t, u^t, n^t):
  p(s^t, Θ | z^t, u^t, n^t) = p(s^t | z^t, u^t, n^t) ∏_k p(θ_k | s^t, z^t, u^t, n^t)

Factor the Uncertainty
• Rao-Blackwellized particle filters.
• Use particle filters to represent the distribution over trajectories p(s^t | z^t, u^t, n^t).
– M particles
• Within each particle, use Kalman filters to represent the distribution for each landmark pose p(θ_k | s^t, z^t, u^t, n^t).
– K Kalman filters per particle
• Each update requires O(MK) time.
– Easy to improve to O(M log K).

Balanced Tree of
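A minimal sketch of this factored representation (class and function names here are hypothetical, not FastSLAM's actual code): each particle carries one trajectory hypothesis plus an independent 2×2 EKF per landmark, so observing a landmark updates one small filter inside each particle.

```python
import numpy as np

class Particle:
    """One trajectory hypothesis with per-landmark 2x2 EKFs (sketch only)."""
    def __init__(self, pose):
        self.pose = pose    # robot pose hypothesis (x, y, phi)
        self.mu = {}        # landmark k -> 2-vector position mean
        self.sigma = {}     # landmark k -> 2x2 position covariance

def update_landmark(p, k, z, R, h, H):
    """Standard EKF measurement update for landmark k inside one particle.
    h is the predicted measurement of landmark k from p.pose, and H its
    2x2 Jacobian w.r.t. the landmark position (the pose is held fixed
    within the particle, which is what makes the landmarks independent)."""
    S = H @ p.sigma[k] @ H.T + R              # innovation covariance
    K = p.sigma[k] @ H.T @ np.linalg.inv(S)   # Kalman gain
    p.mu[k] = p.mu[k] + K @ (z - h)
    p.sigma[k] = (np.eye(2) - K @ H) @ p.sigma[k]
```

Because the landmark filters are conditionally independent given the particle's pose, the joint (2K+3)² covariance is never stored. Copying whole maps naively at resampling is what makes an update cost O(MK); sharing unchanged landmark filters in a balanced tree brings this down to O(M log K).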

