CMU 15-494 Cognitive Robotics - World Maps and Localization

World Maps and Localization
15-494 Cognitive Robotics
David S. Touretzky & Ethan Tira-Thompson
Carnegie Mellon, Spring 2008

Frames of Reference
● Camera frame: what the robot sees.
● projectToGround() = kinematics + planar world assumption.
● Local map: assembled from camera frames, each projected to ground; the robot moves its head but not its body.
● World map: assembled from local maps built at different spots in the environment.

Four Shape Spaces
● camShS = camera space
● groundShS = camera shapes projected to the ground plane
● localShS = body-centered (egocentric) space; constructed by matching and importing shapes from groundShS
● worldShS = world space (allocentric space); constructed by matching and importing shapes from localShS
● The robot is explicitly represented in worldShS

Deriving the Local Map
1) MapBuilder extracts shapes from the camera frame.
– Use a request of type MapBuilderRequest::cameraMap if you just want camera-space shapes.
2) MapBuilder does projectToGround().
– Use MapBuilderRequest::groundMap if you only want ground shapes from the current camera frame.
3) MapBuilder matches ground shapes against local shapes.
– The request type should be MapBuilderRequest::localMap.
4) MapBuilder moves to the next gaze point and repeats.
– The world is assumed not to change during this process.

Deriving the World Map
● The local map covers only what the robot can see from a single viewing position.
● The world map can cover much larger territory.
– Use MapBuilderRequest::worldMap
● The world map persists over a long time period.
– The world will change, so updates must be possible.
● We update the world map by:
– Constructing a local map.
– Aligning it with the world map (by translation and rotation).
– Importing shapes from the local map.
– Noting additions and deletions since the last local map match.

Localization
● How do we align the local map with the world map?
● This turns out to be equivalent to determining our position and orientation on the world map.
● Tricky, because:
– The local map is noisy.
– The environment can be ambiguous (e.g., multiple pink landmarks).
● Sensor model: describes the uncertainty in our sensor measurements.
– Can mix sensor types (vision, IR) and information types (bearing, distance).

SLAM
● Simultaneous Localization and Mapping.
● When is this necessary?
– When we don't know the map in advance.
– When the world is changing (landmarks can appear, disappear, or change location).
– When we're moving through the world.
● How do we localize on a map that we are still in the process of building?
● Motion model: estimates (by odometry) our motion through the environment.

Particle Filtering
● A technique for searching large, complex spaces.
● What is the hypothesis space we need to search?
– Robot's position (x, y)
– Robot's orientation θ
– Which world-space shapes have disappeared since the last update?
– What new shapes have appeared in local space?
● Each particle encodes a point in the hypothesis space.
● How can we evaluate hypotheses?
– Use the sensor and motion models to update particle weights (a sketch of such a particle follows this list).
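A minimal sketch of how one particle might encode that hypothesis space, assuming simple 2-D point shapes; the names (Point, Particle, applyHypothesis) and field layout are illustrative assumptions, not Tekkotsu's actual particle-filter classes:

    // One particle = one point in the hypothesis space listed above.
    #include <cmath>
    #include <vector>

    struct Point { double x, y; };

    struct Particle {
        double x, y, theta;        // hypothesized robot pose (position and orientation) on the world map
        std::vector<bool> added;   // one bit per local shape: is it a new addition to the world?
        std::vector<bool> deleted; // one bit per world shape: has it disappeared since the last update?
        double weight;             // score assigned by the sensor and motion models
    };

    // Map a local-map (egocentric) coordinate into world-map (allocentric)
    // coordinates under the pose hypothesis carried by the particle.
    inline Point applyHypothesis(const Particle &h, const Point &local) {
        return { h.x + local.x * std::cos(h.theta) - local.y * std::sin(h.theta),
                 h.y + local.x * std::sin(h.theta) + local.y * std::cos(h.theta) };
    }

applyHypothesis is what lets a pose hypothesis be tested against the world map: each local-map coordinate is rotated by θ and translated by (x, y) before being compared with world-map shapes.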
Ranking a Particle: 1-D Case
(Figure: a 1-D local map aligned against the world map under the hypothesis dx = 18: a poor match.)

Ranking a Particle: 1-D Case
(Figure: under the hypothesis dx = 56, the local map aligns well with the world map: a good match.)

Matching a Landmark
● Gaussian probability distribution: a sensor model.
(Figure: a world landmark and a local landmark compared under a Gaussian.)

Pick the Best Candidate
● Match each local landmark against the closest world landmark of the same type and color. Score with a Gaussian.
(Figure: the local map under the hypothesis dx = 56: a good match.)

Matching a Set of Landmarks
● Take the product of the match probabilities of the individual landmarks, scoring each match with the Gaussian sensor model:
    G(x, x_0) = \exp\left[-\frac{(x - x_0)^2}{2\sigma^2}\right]
    P(s \in L, t \in W \mid h) = G(L.s \oplus h,\ W.t)
    P(s \in L \mid W, h) = \max_{t \in W} P(s \in L, t \in W \mid h)
    P(h) = \prod_{s \in L} P(s \mid W, h)
  where L.s = coordinate of shape s in the local map, W.t = coordinate of shape t in the world map, h = the location hypothesis, and L.s ⊕ h is L.s transformed by h.
● Allow penalty terms for addition and deletion.

Addition Penalty
● A shape in the local map that isn't in the world map must be accounted for as an addition.
● Assess a penalty on P(h) for each addition, but remove that shape from the product term for P(h) so the product doesn't go to zero.

Deletion Penalty
● A shape in the world map that should be visible in the local map but isn't must be accounted for as a deletion.
● Assess a penalty on P(h) for each deletion, but remove that shape from the product term for P(h) so the product doesn't go to zero.

What Shapes Should Be Visible?
● Take the bounding box of the shapes in local space.
● All world-map shapes falling within that box should be visible in the local map.

When Objects Move
● If an object moves only a little bit, it will still match, and its position will be updated.
● If an object moves by a larger amount, we'll get:
– An object deletion at the old location.
– An object addition at the new location.
● We could watch for this and combine both changes into a single "move" penalty.
● If h is a poor hypothesis, then every object will appear to have "moved".

Importance Sampling
● For each particle h, calculate the probability P(h).
● Create a new generation of particles by resampling from the previous population:
– Particles with high probability are more likely to be sampled, and will therefore multiply.
– Particles with low probability likely won't be sampled, and will therefore probably die out.
● Each new particle's parameters are "jiggled" a little bit. This is how we search the space.
● Repeat this resampling process for several generations.

Jiggling a Particle
● Perturb the translation term (x, y).
● Perturb the orientation term θ.
● Flip the state of an "addition" bit: one bit for each local shape.
– A value of 1 means this is a new addition to the world.
● Flip the state of a "deletion" bit: one bit for each world shape that should be visible.
(A sketch of the scoring, resampling, and jiggling steps follows.)
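To make the scoring and resampling steps above concrete, here is a sketch in the same hypothetical style. It reuses Point, Particle, and applyHypothesis from the earlier sketch; the function names and the penalty and jiggle constants are arbitrary assumptions, and the bounding-box visibility test from "What Shapes Should Be Visible?" is omitted for brevity:

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <random>
    #include <vector>

    // Gaussian sensor model G(a, b): match score for observing a landmark at a
    // when the mapped landmark sits at b, with standard deviation sigma.
    double gaussianMatch(const Point &a, const Point &b, double sigma) {
        double dx = a.x - b.x, dy = a.y - b.y;
        return std::exp(-(dx * dx + dy * dy) / (2.0 * sigma * sigma));
    }

    // P(h): product of per-landmark match probabilities, with penalty factors
    // (instead of near-zero Gaussian terms) for shapes flagged as additions or
    // deletions. Assumes h.added has one bit per local shape and h.deleted has
    // one bit per world shape.
    double scoreParticle(const Particle &h,
                         const std::vector<Point> &localShapes,
                         const std::vector<Point> &worldShapes,
                         double sigma, double additionPenalty, double deletionPenalty) {
        double p = 1.0;
        for (std::size_t s = 0; s < localShapes.size(); ++s) {
            if (h.added[s]) { p *= additionPenalty; continue; }   // drop from the product
            Point ls = applyHypothesis(h, localShapes[s]);        // L.s transformed by h
            double best = 0.0;                                    // max over t in W
            for (std::size_t t = 0; t < worldShapes.size(); ++t)
                if (!h.deleted[t])
                    best = std::max(best, gaussianMatch(ls, worldShapes[t], sigma));
            p *= best;
        }
        for (std::size_t t = 0; t < worldShapes.size(); ++t)
            if (h.deleted[t]) p *= deletionPenalty;               // hypothesized disappearance
        return p;
    }

    // One generation of importance sampling: draw particles in proportion to
    // their weights, then "jiggle" each copy so the population keeps exploring.
    std::vector<Particle> resample(const std::vector<Particle> &pop, std::mt19937 &rng) {
        std::vector<double> weights;
        for (const Particle &h : pop) weights.push_back(h.weight);
        std::discrete_distribution<std::size_t> pick(weights.begin(), weights.end());
        std::normal_distribution<double> posJiggle(0.0, 10.0);    // translation noise
        std::normal_distribution<double> angJiggle(0.0, 0.05);    // orientation noise (radians)
        std::uniform_real_distribution<double> coin(0.0, 1.0);
        std::vector<Particle> next;
        for (std::size_t i = 0; i < pop.size(); ++i) {
            Particle h = pop[pick(rng)];                          // high-weight particles multiply
            h.x += posJiggle(rng);
            h.y += posJiggle(rng);
            h.theta += angJiggle(rng);
            for (std::size_t s = 0; s < h.added.size(); ++s)      // occasionally flip an addition bit
                if (coin(rng) < 0.02) h.added[s] = !h.added[s];
            for (std::size_t t = 0; t < h.deleted.size(); ++t)    // ... or a deletion bit
                if (coin(rng) < 0.02) h.deleted[t] = !h.deleted[t];
            next.push_back(h);
        }
        return next;
    }

In use, scoreParticle would set each particle's weight, and resample would then be applied for several generations, as described under Importance Sampling.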

