MIT 16.412J Lecture Notes

Introduction to SLAM, Part II
Paul Robertson

Review
• Localization
  – Tracking, global localization, the kidnapping problem.
• Kalman Filter
  – Quadratic
  – Linear (unless EKF)
• SLAM
  – Loop closing
  – Scaling: partition the space into overlapping regions and use a rerouting algorithm.
• Not talked about
  – Features
  – Exploration

Outline
• Topological Maps
• HMMs
• SIFT
• Vision-Based Localization

Topological Maps
Idea: build a qualitative map in which the nodes are similar sensor signatures and the transitions between nodes are control actions.

Advantages of Topological Maps
• Can solve the global localization problem.
• Can solve the kidnapping problem.
• Human-like maps.
• Supports metric localization.
• Can be represented as a Hidden Markov Model (HMM).

Hidden Markov Models (HMM)
Scenario:
– You have your domain represented as a set of state variables.
– The states define which successor states are reachable from any given state.
– State transitions involve actions.
– Actions are observable; states are not.
– You want to be able to make sense of a sequence of actions.
Examples: part-of-speech tagging, natural language parsing, speech recognition, scene analysis, location/path estimation.

Overview of HMMs
• What a Hidden Markov Model is.
• An algorithm for finding the most likely state sequence.
• An algorithm for finding the probability of an action sequence (a sum over all allowable state paths).
• An algorithm for training an HMM.
HMMs only work for problems whose state structure can be characterized as a finite state machine (FSM) in which a single action at a time is used to transition between states. They are very popular because these algorithms are linear in the length of the action sequence.

Hidden Markov Models
An HMM is a finite state machine with probabilities on the arcs: $\langle s_1, S, W, E \rangle$, where $S = \{s_1, \ldots, s_8\}$ is the set of states, $W = \{\text{"Roger"}, \ldots\}$ is the set of actions (here, words), and $E$ is the set of transitions. A transition $\langle s_2, s_3, \text{"had"}, 0.3 \rangle$ means $P(s_2 \xrightarrow{\text{"had"}} s_3) = 0.3$.
[Figure: an eight-state FSM whose arcs are labeled with words ("Mary", "Had", "A", "Little", "Lamb", "Roger", "Ordered", "John", "Cooked", "Curry", "And", "Big", "Hot", "Dog", ".") and with probabilities such as 0.3, 0.4, and 0.5.]
The machine generates sentences such as:
S1: Mary had a little lamb and a big dog.
S2: Roger ordered a lamb curry and a hot dog.
S3: John cooked a hot dog curry.
$P(S_3) = 0.3 \cdot 0.3 \cdot 0.5 \cdot 0.5 \cdot 0.3 \cdot 0.5 = 0.003375$

Finding the Most Likely Path
Viterbi algorithm: for an action sequence of length $t-1$, finds
$$\sigma(t) = \arg\max_{s_{1,t}} P(s_{1,t} \mid w_{1,t-1})$$
in linear time. For each state, extend the most probable state sequence that ends in that state.
Example: a two-state machine (states $a$ and $b$, start state $a$) over the actions "0" and "1", with arc probabilities (read off the figure; they are consistent with the tables below):
  a → a: "0" 0.4, "1" 0.2     a → b: "0" 0.3, "1" 0.1
  b → a: "0" 0.2, "1" 0.1     b → b: "0" 0.2, "1" 0.5
Running the Viterbi algorithm on the action sequence "1110":

  input            ε     1     11     111     1110
  best seq in a    a     aa    aaa    aaaa    abbba
  probability      1.0   0.2   0.04   0.008   0.005
  best seq in b    b     ab    abb    abbb    abbbb
  probability      0.0   0.1   0.05   0.025   0.005

Action Sequence Probabilities
$$P(w_{1,n}) = \sum_{i=1}^{\sigma} P(w_{1,n}, S_{n+1} = s_i)$$
where $\sigma$ is the number of states. Let $\alpha_i(t)$ be the probability $P(w_{1,t-1}, S_t = s_i)$, so
$$P(w_{1,n}) = \sum_{i=1}^{\sigma} \alpha_i(n+1)$$
$$\alpha_i(1) = \begin{cases} 1.0 & i = 1 \\ 0.0 & \text{otherwise} \end{cases}$$
(the machine must start in the start state), and
$$\alpha_j(t+1) = \sum_{i=1}^{\sigma} \alpha_i(t)\, P(s_i \xrightarrow{w_t} s_j)$$

HMM Forward Probabilities
For the same two-state machine on the action sequence "1110":

  t               1     2     3      4       5
  input           ε     1     1      1       0
  αa(t)           1.0   0.2   0.05   0.017   0.0148
  αb(t)           0.0   0.1   0.07   0.04    0.0131
  P(w_{1,t-1})    1.0   0.3   0.12   0.057   0.0279

For example, $\alpha_b(3) = 0.2 \cdot 0.1 + 0.1 \cdot 0.5 = 0.02 + 0.05 = 0.07$. A short code sketch of both recursions follows.
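The forward and Viterbi recursions map directly onto a few lines of code. The following Python/NumPy sketch is not from the original notes: the encoding of the machine as one transition matrix per action symbol, and the names `trans`, `forward`, and `viterbi`, are choices made here for illustration. Run on the action sequence "1110", it reproduces the two tables above.

```python
import numpy as np

# Arc-emission HMM from the two-state example: trans[w][i, j] is
# P(s_i --w--> s_j), with state 0 = a and state 1 = b.  The numbers
# are the arc probabilities read off the lecture's figure.
trans = {
    "0": np.array([[0.4, 0.3],
                   [0.2, 0.2]]),
    "1": np.array([[0.2, 0.1],
                   [0.1, 0.5]]),
}

def forward(seq, trans, n_states=2):
    """Forward probabilities: alpha[i] = P(w_{1,t-1}, S_t = s_i)."""
    alpha = np.zeros(n_states)
    alpha[0] = 1.0                      # must start in the start state
    for w in seq:
        # alpha_j(t+1) = sum_i alpha_i(t) * P(s_i --w--> s_j)
        alpha = alpha @ trans[w]
    return alpha                        # P(seq) = alpha.sum()

def viterbi(seq, trans, n_states=2):
    """Most likely state path given an observed action sequence."""
    prob = np.zeros(n_states)
    prob[0] = 1.0
    path = [[i] for i in range(n_states)]
    for w in seq:
        scores = prob[:, None] * trans[w]      # scores[i, j]: come from i, land in j
        best_prev = scores.argmax(axis=0)      # best predecessor of each state j
        prob = scores.max(axis=0)
        path = [path[best_prev[j]] + [j] for j in range(n_states)]
    best = prob.argmax()
    return path[best], prob[best]

alpha = forward("1110", trans)
print(alpha, alpha.sum())     # alpha ≈ [0.0148, 0.0131]; total ≈ 0.0279
print(viterbi("1110", trans)) # ([0, 1, 1, 1, 0], 0.005), i.e. "abbba"
```

Both recursions multiply many probabilities together, so production implementations usually work with log probabilities to avoid underflow on long action sequences.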
HMM Training (the Baum-Welch Algorithm)
Given a training sequence, adjust the HMM's transition probabilities to make that action sequence as likely as possible.
When the states are observable, training is just counting. For a single state $a$ with self-loops emitting "0", "1", and "2", the training sequence 01010210 contains 8 transitions: four "0"s, three "1"s, and one "2". The maximum-likelihood estimates are therefore:
  a: "0" 4/8 = 0.5, "1" 3/8 = 0.375, "2" 1/8 = 0.125

With Hidden States
Intuitively, we would like to count transitions the same way, prorating each transition by its probability; but you don't know the transition probabilities!
[Figure: an HMM fragment with states a, b, and c, in which two competing arcs emit "0" with probabilities 0.7 and 0.3.]
So:
1. Guess a set of transition probabilities.
2. (while (improving) (propagate-training-sequences))
"Improving" is determined by comparing the cross-entropy after each iteration; when the cross-entropy decreases by less than $\epsilon$ in an iteration, we are done. The cross-entropy between models $M_1$ and $M_2$ is
$$-\frac{1}{n} \sum_{w_{1,n}} P_{M_1}(w_{1,n}) \log_2 P_{M_2}(w_{1,n})$$

Scale Invariant Feature Transform (SIFT)
David Lowe, 'Distinctive Image Features from Scale-Invariant Keypoints', IJCV 2004.
Stages:
– Scale-space (Witkin '83) extrema extraction
– Keypoint pruning and localization
– Orientation assignment
– Keypoint descriptor

Scale Space in SIFT
Motivation:
– Objects can be recognized at many levels of detail.
– Large distances correspond to a low level of detail.
– Different kinds of information are available at each level.
Idea: extract the information content from an image at each level of detail. Detail reduction is done by Gaussian blurring:
– $I(x, y)$ is the input image; $L(x, y, \sigma)$ is its representation at scale $\sigma$.
– $G(x, y, \sigma)$ is a 2D Gaussian with variance $\sigma^2$.
– $L(x, y, \sigma) = G(x, y, \sigma) * I(x, y)$
– $D(x, y, \sigma) = L(x, y, k\sigma) - L(x, y, \sigma)$
(A code sketch of this construction appears at the end of these notes.)

Features of SIFT
Invariant to:
• Scale
• Planar rotation
• Contrast
• Illumination
Produces large numbers of features.

Difference of Gaussians
[Figure: a difference-of-Gaussians image pyramid.]

Scale Space
• Compute the local extrema of $D$.
• Each extremum $(x, y, \sigma)$ is a feature candidate.
• $(x, y)$ is scale and planar rotation invariant.

Pruning for Stability
Remove feature candidates that have:
– low contrast, or
– unstable edge responses.

Orientation Assignment
For each feature $(x, y, \sigma)$:
– Find a fixed-pixel-area patch in $L(x, y, \sigma)$ around $(x, y)$.
– Compute the gradient histogram; call its bins $b_i$.
– For each $b_i$ within 80% of the maximum, make a feature $(x, y, \sigma, b_i)$.

Vision-Based SLAM
Readings:
Se, S., D. Lowe, and J. Little, 'Mobile Robot Localization and Mapping with Uncertainty using Scale-Invariant Visual Landmarks', The International Journal of Robotics Research, Volume 21, Issue 8.
Kosecka, J., L. Zhou, P. Barber, and Z. Duric, 'Qualitative Image Based Localization in Indoor Environments', CVPR 2003.

Predictive Vision-Based SLAM
1. Compute SIFT features from the current location.
2. Use stereo to locate the features in 3D.
3. Move.
4. Predict the new location based on odometry and a Kalman filter.
5. Predict the locations of the SIFT features based on the motion of the robot.
6. Find SIFT features and compute the 3D position of each.
7. Compute a position estimate from each matched feature.

Vision-Based Localization
• Acquire a video sequence during the exploration of a new environment.
• Build an environment model in terms of locations and the spatial relationships between them.
• Topological localization by means of location recognition.
• Metric localization by computing the relative pose of the current view against the representation of the most likely location.

Same Location?

Global Topology, Local Geometry
Issues:
1. Representation of individual locations
2. Learning the representative
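Finally, the sketch promised in the scale-space section above. It is not from the original notes: it assumes NumPy and SciPy are available, and the base scale sigma0 = 1.6, the ladder factor k = √2, and the contrast threshold are conventional or illustrative values rather than numbers given in the lecture.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter, minimum_filter

def dog_stack(image, sigma0=1.6, k=np.sqrt(2.0), levels=4):
    """Build D(x, y, sigma) = L(x, y, k*sigma) - L(x, y, sigma) over a
    geometric ladder of scales.  sigma0 and k are assumed defaults."""
    img = np.asarray(image, dtype=np.float64)
    sigmas = [sigma0 * k**i for i in range(levels + 1)]
    L = [gaussian_filter(img, s) for s in sigmas]   # L = G(sigma) * I
    D = [L[i + 1] - L[i] for i in range(levels)]
    return sigmas[:levels], D

def local_extrema(D, i, threshold=0.01):
    """Feature candidates at level i: pixels that are extrema of D across
    space and the two neighboring scales (as on the 'Scale Space' slide).
    The contrast threshold is an illustrative value, not from the notes."""
    neighborhood = np.stack(D[i - 1:i + 2])          # 3 adjacent scale slices
    center = D[i]
    # center is included in its own 3x3x3 window, so >= max means "is the max"
    is_max = center >= maximum_filter(neighborhood, size=3)[1]
    is_min = center <= minimum_filter(neighborhood, size=3)[1]
    strong = np.abs(center) > threshold              # crude contrast pruning
    ys, xs = np.nonzero((is_max | is_min) & strong)
    return list(zip(xs, ys))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((64, 64))        # stand-in for a real grayscale image
    sigmas, D = dog_stack(img)
    print(len(local_extrema(D, 1)), "candidates at sigma =", sigmas[1])
```

A full SIFT implementation would add the remaining stages from the notes (sub-pixel keypoint localization, edge-response pruning, orientation assignment, and descriptors); this sketch covers only the scale-space extrema extraction stage.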

