MIT 16 412J - Extending SLAM to Multiple Robots

Extending SLAM to Multiple Robots
Ethan Howe and Jennifer Novosad
March 13, 2005

1 Introduction

As robots become more prevalent and intelligent, we will want them to share their experiences with other robots. In time-sensitive applications such as search and rescue, multiple robots collaborating to achieve the goal can be much faster and more efficient. Smaller, less complicated robots can also be less expensive and more expendable while still achieving the same tasks as a more agile, larger robot. On a Mars mission, a number of mapping robots could be deployed to identify important science sites and communicate their believed relative positions back to a robot carrying scientific instruments; some of the mapping robots could then fail and the mission goals would still be achieved. The list goes on, but clearly robot-robot interaction can be as important as robot-human interaction.

In our project we implemented the basic SLAM algorithm, enabled robots to share SLAM information, and got them to execute a basic task collaboratively. We created a simulated world with certain assumptions which, with a little extra work, could be translated to the real world. Much of our time during this project was spent deriving the entries in the SLAM covariance matrix, which we could not find explicitly in the literature. Once two robots were able to see each other and initialize contact, they were able to communicate newly obtained information as long as they remained in radio contact. Our final robots were programmed to determine the areas of a space that they and other robots had already mapped and to greedily seek out unmapped sections.

2 Theory

In this section, we outline the theory that we used to build our implementation of multi-agent SLAM. SLAM creates a map of landmarks relative to some basis that is internal to the robot.
When the robot wishes to move, it applies an internal model of that action to its current state and then checks the changes this action made to its observations against what it expected. For multi-agent SLAM, a robot must use its measurement of another robot, together with its current believed position, to transform the other robot's map and add it to its own. Multi-agent systems are an active area of research involving many different strategies for finding an optimal solution in spite of each participant having information on only a small part of the world.

2.1 Summary of SLAM

SLAM, Simultaneous Localization And Mapping, is a technique that allows robots to simultaneously create a map of the world and localize themselves on that map, in the presence of both measurement and movement noise. The basic concept behind SLAM is a loop which uses system models to predict the state and then corrects its predictions with measurements.

In our notation, foo_k represents the variable foo on the k-th iteration of the loop, \hat{foo} is the robot's internal estimate of foo, and \hat{foo}^- is the robot's prediction of foo before a measurement is taken.

The vector x stores all of the state information, including the robot's position and angle and the positions of all other objects. Basic SLAM assumes stationary objects, so information about moving objects, such as other robots' positions, is not stored in x.

The matrix A and the vector B form the state update equation. There is a different A and B for every robot movement command (such as turn versus drive forward). The system updates with

    x_k = A x_{k-1} + B + w_k

where the vector w_k = N(0, Q) is zero-mean Gaussian white noise associated with movement. If the movement is not linear in the state variables, A and B are linearizations (Taylor expansions) of the movement. For example, if x_r and \psi_r are the robot position and direction, driving forward a distance l would give x_{r,k} = x_{r,k-1} + l cos(\psi_{r,k-1}), which is not linear in \psi.
Also note that the robot can face a variety of directions, so you do not want to make a small-angle approximation in the Taylor series expansion; hence A and B will have a functional dependence on \psi. Since A and B may depend on the current values of the state, they may need to be recomputed at each time step.

After movement and before measurement, the robot's best guess of its current state is

    \hat{x}_k^- = A \hat{x}_{k-1} + B

After the robot updates its internal guess of where it is located, it takes a measurement z. Like the model for movement, the robot has a model for what happens when a measurement is taken:

    z_k = H x_k + v_k

In this model, H expresses how the result of a measurement depends linearly on the state variables, and v_k = N(0, R) is zero-mean Gaussian white noise. Using this model, the robot can also guess what it should have measured:

    \hat{z}_k = H \hat{x}_k^-

The difference between what it measured and what it expected to measure is proportional to the correction the robot needs to make to its state estimate:

    \hat{x}_k = \hat{x}_k^- + K_k (z_k - H \hat{x}_k^-)

K_k, the constant of proportionality in the above equation, is called the Kalman gain. There are plenty of papers online that explain how to derive the Kalman gain, so we simply present the result here:

    K_k = P_k^- H^T (H P_k^- H^T + R)^{-1}

In this equation, P is the matrix of uncertainty covariances between all of the state variables (so P[0,0] = \sigma_{x[0],x[0]} and P[i,j] = \sigma_{x[i],x[j]}), and R is the measurement noise covariance matrix (v_k = N(0, R)). Recalling that Q is the matrix of movement error covariances, to maintain P at each time step we compute:

    P_k^- = A P_{k-1} A^T + Q        (after movement)
    P_k = (I - K_k H) P_k^-          (after measurement)

To summarize:

    Move:          x_k = A x_{k-1} + B + w_k
    Estimate:      \hat{x}_k^- = A \hat{x}_{k-1} + B,    P_k^- = A P_{k-1} A^T + Q
    Measure:       z_k = H x_k + v_k
    Update state:  \hat{x}_k = \hat{x}_k^- + K_k (z_k - H \hat{x}_k^-),    P_k = (I - K_k H) P_k^-

Using the Kalman gain is not the only way to update the state estimate. Another common example is particle filtering, which stores a population of possible states.
States which become unlikely are deleted and replaced with more likely possibilities. For the rest of this paper, when we refer to SLAM we will mean SLAM that uses Kalman filters rather than some other implementation. The main concepts should still work; however, some adjustments will have to be made before they apply to other forms of SLAM.

Figure 1: The circle with a line through it is the robot, the line denoting where it is heading. The dotted circles are that robot's uncertainty in its position. At this point, it hasn't seen any objects.

Figure 2: The robot sees a new object. The total uncertainty on the position estimate of
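The predict/update loop summarized in Section 2.1 can be sketched concretely. The following is our own minimal illustration (not the project code), using a one-dimensional state where the robot drives forward one unit per step and measures its position directly; the matrices A, B, H and the noise covariances Q, R are illustrative choices, not values from the project.

```python
import numpy as np

def predict(x_hat, P, A, B, Q):
    # Estimate step: x_hat_minus = A x_hat + B, P_minus = A P A^T + Q
    x_pred = A @ x_hat + B
    P_pred = A @ P @ A.T + Q
    return x_pred, P_pred

def update(x_pred, P_pred, z, H, R):
    # Kalman gain: K = P_minus H^T (H P_minus H^T + R)^{-1}
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    # Correct the prediction by the innovation z - H x_hat_minus
    x_hat = x_pred + K @ (z - H @ x_pred)
    P = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_hat, P

# 1-D example: drive forward 1 unit (B), observe position directly (H = I).
A = np.array([[1.0]]); B = np.array([1.0])
H = np.array([[1.0]])
Q = np.array([[0.1]]); R = np.array([[0.05]])

x_hat, P = np.array([0.0]), np.array([[1.0]])           # initial belief
x_pred, P_pred = predict(x_hat, P, A, B, Q)             # after movement
x_hat, P = update(x_pred, P_pred, np.array([1.2]), H, R)  # after measurement
```

Because the measurement noise R is smaller than the predicted uncertainty, the update pulls the estimate toward the measurement and shrinks the covariance P, which is exactly the behavior the loop above describes.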

