MIT 16.412J - Mobile Robot Localization and Mapping

Mobile Robot Localization and Mapping with Uncertainty using Scale-Invariant Visual Landmarks

Stephen Se
MD Robotics
9445 Airport Road
Brampton, Ontario L6S 4J3, Canada
[email protected]

David Lowe
Jim Little
Department of Computer Science
University of British Columbia
Vancouver, B.C. V6T 1Z4, Canada
[email protected]
[email protected]

The International Journal of Robotics Research, Vol. 21, No. 8, August 2002, pp. 735-758. ©2002 Sage Publications.

Abstract

A key component of a mobile robot system is the ability to localize itself accurately and, simultaneously, to build a map of the environment. Most of the existing algorithms are based on laser range finders, sonar sensors or artificial landmarks. In this paper, we describe a vision-based mobile robot localization and mapping algorithm, which uses scale-invariant image features as natural landmarks in unmodified environments. The invariance of these features to image translation, scaling and rotation makes them suitable landmarks for mobile robot localization and map building. With our Triclops stereo vision system, these landmarks are localized and robot ego-motion is estimated by least-squares minimization of the matched landmarks. Feature viewpoint variation and occlusion are taken into account by maintaining a view direction for each landmark. Experiments show that these visual landmarks are robustly matched, robot pose is estimated and a consistent three-dimensional map is built. As image features are not noise-free, we carry out error analysis for the landmark positions and the robot pose. We use Kalman filters to track these landmarks in a dynamic environment, resulting in a database map with landmark positional uncertainty.

KEY WORDS—localization, mapping, visual landmarks, mobile robot

1. Introduction

Mobile robot localization and mapping, the process of simultaneously tracking the position of a mobile robot relative to its environment and building a map of the environment, has been a central research topic in mobile robotics.
Accurate localization is a prerequisite for building a good map, and having an accurate map is essential for good localization. Therefore, simultaneous localization and map building (SLAM) is a critical underlying factor for successful mobile robot navigation in a large environment, irrespective of what the high-level goals or applications are.

To achieve SLAM, there are different sensor modalities to choose from, such as sonar, laser range finders and vision. Sonar is fast and cheap but usually very crude, whereas a laser scanning system is active and accurate but slow. Vision systems are passive and of high resolution. Many early successful approaches (Borenstein et al. 1996) utilize artificial landmarks, such as bar-code reflectors, ultrasonic beacons and visual patterns, and therefore do not function properly in beacon-free environments. Vision-based approaches using stable natural landmarks in unmodified environments are consequently highly desirable for a wide range of applications. The map built from these natural landmarks will serve as the basis for performing high-level tasks such as mobile robot navigation.

1.1. Literature Review

Harris's three-dimensional (3D) vision system DROID (Harris 1992) uses the visual motion of image corner features for 3D reconstruction. Kalman filters are used for tracking features, and from the locations of the tracked image features, DROID determines both the camera motion and the 3D positions of the features. Ego-motion determination by matching image features is generally very accurate in the short to medium term. However, in a long image sequence, long-term drift can occur, as no map is created. In the DROID system, where monocular image sequences are used without odometry, the ego-motion and the perceived 3D structure can be self-consistently in error. It is an incremental algorithm and runs at near real-time.

Thrun et al. (1998) proposed a probabilistic approach using the Expectation–Maximization (EM) algorithm. The E-step estimates robot locations at various points based on the currently best available map, and the M-step estimates a maximum likelihood map based on the locations computed in the E-step. The EM algorithm searches for the most likely map by simultaneously considering the locations of all past sonar scans. Being a batch algorithm, it is not incremental and cannot be run in real time.

Thrun et al. (2000) proposed a real-time algorithm combining the strengths of EM algorithms and incremental algorithms. Their approach computes the full posterior probability over robot poses to determine the most likely pose, instead of just using the most recent laser scan as in incremental mapping. The mapping is achieved in two dimensions using a forward-looking laser, and an upward-pointed laser is used to build a 3D map of the environment. However, the approach does not scale to large environments, as computing the posterior probability is too expensive.

The Monte Carlo localization method was proposed by Dellaert et al. (1999) based on the CONDENSATION algorithm. This vision-based Bayesian filtering method uses a sampling-based density representation and can represent multi-modal probability distributions. Given a visual map of the ceiling obtained by mosaicing, it localizes the robot using a scalar brightness measurement. Jensfelt et al. (2000) proposed some modifications to this algorithm for better efficiency in large symmetric environments. CONDENSATION is not suitable for SLAM due to scaling problems, and hence it is only used for localization.

In SLAM, as the robot pose is being tracked continuously, multi-modal representations are not needed.
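As a rough illustration (not from the paper), the predict-weight-resample cycle behind the sampling-based density representation used in CONDENSATION-style Monte Carlo localization can be sketched for a one-dimensional corridor with a single known landmark. All names, noise parameters and the sensor model below are hypothetical simplifications:

```python
import random
import math

def monte_carlo_localization_step(particles, control, measurement,
                                  motion_noise=0.1, meas_noise=0.5,
                                  landmark=5.0):
    """One predict-weight-resample cycle of a sampling-based localization
    filter on a 1-D corridor with one landmark at a known position.

    particles   -- list of hypothesized robot positions
    control     -- commanded displacement since the last step
    measurement -- observed (signed) distance to the landmark
    """
    # Predict: propagate each sample through the noisy motion model.
    moved = [p + control + random.gauss(0.0, motion_noise) for p in particles]

    # Weight: score each sample by the measurement likelihood
    # (Gaussian sensor model around the expected landmark distance).
    weights = [math.exp(-((landmark - p) - measurement) ** 2
                        / (2 * meas_noise ** 2)) for p in moved]
    total = sum(weights) or 1.0  # guard against all-zero weights
    weights = [w / total for w in weights]

    # Resample: draw a new sample set in proportion to the weights,
    # concentrating particles on likely poses while the set as a whole
    # can still represent multi-modal distributions.
    return random.choices(moved, weights=weights, k=len(moved))
```

Because the belief is carried by the sample set itself, ambiguous measurements simply leave particles in several clusters, which is exactly the multi-modal capability the text attributes to the method, and also the source of its scaling problems when the state grows to include a map.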
A grid-based representation is problematic for SLAM because maintaining all grid positions over an entire region is expensive and grids are difficult to match.

Using global registration and correlation techniques, Gutmann and Konolige (1999) proposed a method to reliably reconstruct consistent global maps from laser range data. Their pose estimation is achieved by scan matching of dense two-dimensional (2D) data and is not applicable to sparse 3D data from vision.

Sim and Dudek (1999) proposed learning natural visual features for pose estimation. Landmark matching is achieved using principal components analysis. A tracked landmark is a set of image thumbnails detected in the learning phase, one for each grid position in pose space. The method does not build any map of the environment.

In SLAM, a robot starts at an unknown location
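As a rough sketch (not Sim and Dudek's actual implementation), matching a landmark by principal components analysis amounts to learning a low-dimensional subspace from flattened training thumbnails and comparing encodings in that subspace. The thumbnail sizes, dimensions and data here are hypothetical:

```python
import numpy as np

def pca_basis(thumbnails, k=2):
    """Learn a k-dimensional principal-components basis from a stack of
    flattened landmark thumbnails (one row per training image)."""
    X = np.asarray(thumbnails, dtype=float)
    mean = X.mean(axis=0)
    # SVD of the centered data gives the principal directions as rows of Vt.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def match_landmark(query, mean, basis, database):
    """Project a query thumbnail into the PCA subspace and return the
    index of the nearest stored landmark encoding."""
    def encode(v):
        return basis @ (np.asarray(v, dtype=float) - mean)
    q = encode(query)
    dists = [np.linalg.norm(q - encode(d)) for d in database]
    return int(np.argmin(dists))
```

Matching in the reduced subspace rather than on raw pixels is what makes the appearance comparison cheap and somewhat robust to imaging noise.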

