Stanford CS 326A - Probabilistic Roadmaps


Probabilistic Roadmaps
CS 326A: Motion Planning

The complexity of the robot's free space is overwhelming

- The cost of computing an exact representation of the configuration space of a multi-joint articulated object is often prohibitive.
- But very fast algorithms exist that can check whether an articulated object at a given configuration collides with the obstacles.
- Basic idea of Probabilistic Roadmaps (PRMs): compute a very simplified representation of the free space by sampling configurations at random, using some probability measure.

Initial idea: Potential Field + Random Walk

- Attract some points toward their goal.
- Repulse other points away from the obstacles.
- Use a collision check to test for collision.
- Escape local minima by performing random walks.

But there are many pathological cases ...

[Figure: illustration of a bad potential "landscape" U over q, with deep local minima far from the global minimum.]

Probabilistic Roadmap (PRM)

[Figure: the configuration space, partitioned into the free/feasible space F and the forbidden space.]

Configurations are sampled by picking coordinates at random.
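The sampling step just described can be sketched in a few lines of Python. This is a minimal illustration, not the course's code: the unit-square world, the disc obstacles, and the function names are all assumptions made for the example.

```python
import random

# Hypothetical 2-D world (assumption for illustration): the configuration
# space is the unit square, and the forbidden space is two circular obstacles.
OBSTACLES = [((0.5, 0.5), 0.2), ((0.2, 0.8), 0.1)]  # (center, radius)

def collides(q, obstacles=OBSTACLES):
    """Collision check: True if configuration q lies inside any obstacle disc."""
    return any((q[0] - cx) ** 2 + (q[1] - cy) ** 2 <= r * r
               for (cx, cy), r in obstacles)

def sample_milestones(n, rng=None):
    """Sample n configurations uniformly at random; retain the
    collision-free ones as milestones."""
    rng = rng or random.Random(0)
    milestones = []
    for _ in range(n):
        q = (rng.random(), rng.random())  # pick coordinates at random
        if not collides(q):
            milestones.append(q)
    return milestones

milestones = sample_milestones(100)
```

Note that the expensive operation here is the collision check, not any explicit construction of the free space: that is exactly the trade the slides describe.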
Probabilistic Roadmap (PRM) construction:

- Sampled configurations are tested for collision.
- The collision-free configurations are retained as milestones.
- Each milestone is linked by straight paths to its nearest neighbors.
- The collision-free links are retained as local paths to form the PRM.
- The start and goal configurations, s and g, are included as milestones.
- The PRM is searched for a path from s to g.

Multi- vs. Single-Query PRMs

- Multi-query roadmaps: pre-compute the roadmap, then re-use it for answering many queries.
- Single-query roadmaps: compute a roadmap from scratch for each new query.

Procedure BasicPRM(s, g, N)

1. Initialize the roadmap R with two nodes, s and g.
2. Repeat:
   a. Sample a configuration q from C with probability measure π.   [sampling strategy]
   b. If q ∈ F, then add q as a new node of R.
   c. For some nodes v in R such that v ≠ q:
      if path(q, v) ⊂ F, then add (q, v) as a new edge of R;
   until s and g are in the same connected component of R, or R contains N + 2 nodes.
3. If s and g are in the same connected component of R, then return a path between them.
4. Else return NoPath.   [this answer may occasionally be incorrect]

Requirements of PRM Planning

1. Checking sampled configurations and connections between samples for collision can be done efficiently.
   → Hierarchical collision detection.
2. A relatively small number of milestones and local paths are sufficient to capture the connectivity of the free space.
   → Non-uniform sampling strategies.

PRM planners work well in practice.
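As a concrete, deliberately simplified reading of the BasicPRM procedure above, here is a sketch in Python. The 2-D unit-square world, the single disc obstacle, the fixed-resolution segment check, and the neighbor-radius connection rule are all assumptions chosen for illustration; a real planner would use a proper collision checker and a nearest-neighbor data structure.

```python
import math
import random

# Hypothetical world (assumption): unit square with one disc obstacle.
OBSTACLES = [((0.5, 0.5), 0.2)]  # (center, radius)

def in_free_space(q):
    """q is in F if it lies outside every obstacle disc."""
    return all((q[0] - cx) ** 2 + (q[1] - cy) ** 2 > r * r
               for (cx, cy), r in OBSTACLES)

def path_free(q, v, steps=32):
    """Test the straight local path q-v at a fixed resolution
    (a simplification of an exact segment check)."""
    return all(in_free_space(tuple(a + (b - a) * t / steps for a, b in zip(q, v)))
               for t in range(steps + 1))

def bfs_path(adj, s, g):
    """Return a node path from s to g in the roadmap, or None."""
    parent, frontier = {s: None}, [s]
    while frontier:
        nxt = []
        for x in frontier:
            for y in adj[x]:
                if y not in parent:
                    parent[y] = x
                    nxt.append(y)
        frontier = nxt
    if g not in parent:
        return None
    path = [g]
    while parent[path[-1]] is not None:
        path.append(parent[path[-1]])
    return path[::-1]

def basic_prm(s, g, N, radius=0.35, rng=None):
    """BasicPRM(s, g, N) following the slide's pseudocode, with uniform
    sampling as the measure pi and a neighbor-radius connection rule."""
    rng = rng or random.Random(0)
    nodes, adj = [s, g], {s: set(), g: set()}
    while bfs_path(adj, s, g) is None and len(nodes) < N + 2:
        q = (rng.random(), rng.random())   # step 2a: sample q from C
        if not in_free_space(q):           # step 2b: keep q only if q is in F
            continue
        nodes.append(q)
        adj[q] = set()
        for v in nodes:                    # step 2c: try local paths to nearby nodes
            if v != q and math.dist(q, v) <= radius and path_free(q, v):
                adj[q].add(v)
                adj[v].add(q)
    return bfs_path(adj, s, g)             # a path, or None for NoPath

path = basic_prm((0.1, 0.1), (0.9, 0.9), N=300)
```

Note how the planner only ever calls `in_free_space` and `path_free`; no explicit representation of F is ever built, which is the point of the approach.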
Why?Why are they probabilistic?What does their success tell us?How important is the probabilistic sampling measure ?How important is the randomness of the sampling source?Why is PRM planning probabilistic?A PRM planner ignores the exact shape of F. So, it acts like a robot building a map of an unknown environment with limited sensorsAt any moment, there exists an implicit distribution (H,s), where •H is the set of all consistent hypotheses over the shapes of F •For every x  H, s(x) is the probability that x is correctThe probabilistic sampling measure  reflects this uncertainty. [Its goal is to minimize the expected number of remaining iterations to connect s and g, whenever they lie in the same component of F.]So ...PRM planning trades the cost of computing F exactly against the cost of dealing with uncertainty This choice is beneficial only if a small roadmap has high probability to represent F well enough to answer planning queries correctly[Note the analogy with PAC learning]Under which conditions is this the case?Relation to Monte Carlo Integrationxf(x)�21xxI = f (x)dxabA = a × bx1x2(xi,yi)Ablack#red#red#I Relation to Monte Carlo Integrationxf(x)�21xxI = f (x)dxabA = a × bx1x2(xi,yi)Ablack#red#red#I But a PRM planner must construct a pathThe connectivity of F may depend on small regions Insufficient sampling of such regions may lead the planner to failureTwo configurations q and q’ see each other if path(q,q’)  FThe visibility set of q is V(q) = {q’ | path(q,q’)  F}Visibility in Fε-Goodness of FLet μ(X) stand for the volume of X  FGiven ε  (0,1], q  F is ε-good if it sees at least an ε-fraction of F, i.e., if μ(V(q))  εμ(F) F is ε-good if every q in F is ε-goodIntuition: If F is ε-good, then with high probability a small set of configurations sampled at random will see most of FqFV(q)Here, ε ≈ 0.18F1F2Connectivity IssueF1F2Connectivity IssueLookout of F1F1F2Connectivity IssueLookout of F1The β-lookout of a subset F1 of F is the 
The β-lookout of a subset F1 of F is the set of all configurations in F1 that see at least a β-fraction of F2 = F \ F1:

   β-lookout(F1) = {q ∈ F1 | μ(V(q) ∩ F2) ≥ β·μ(F2)}

- F is (ε, α, β)-expansive if it is ε-good and each one of its subsets X has a β-lookout whose volume is at least α·μ(X).
- Intuition: if F is favorably expansive, it should be relatively easy to capture its connectivity with a small set of randomly sampled milestones.
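The ε-goodness definition above suggests a direct Monte Carlo estimate, in the spirit of the integration analogy earlier: sample configurations uniformly in F by rejection, and count the fraction that q sees. The following sketch does this for a hypothetical 2-D free space (the unit square minus one disc); the world, the names, and the sample count are assumptions made for illustration.

```python
import random

# Hypothetical free space F (assumption): unit square minus one disc obstacle.
OBST = ((0.5, 0.5), 0.2)  # (center, radius)

def in_F(q):
    (cx, cy), r = OBST
    return (q[0] - cx) ** 2 + (q[1] - cy) ** 2 > r * r

def sees(q, qp, steps=32):
    """q sees q' if the straight path between them stays inside F
    (checked at a fixed resolution, a simplification)."""
    return all(in_F(((1 - t / steps) * q[0] + (t / steps) * qp[0],
                     (1 - t / steps) * q[1] + (t / steps) * qp[1]))
               for t in range(steps + 1))

def epsilon_estimate(q, n=2000, rng=None):
    """Monte Carlo estimate of mu(V(q)) / mu(F): the fraction of F seen by q."""
    rng = rng or random.Random(0)
    seen = total = 0
    while total < n:
        qp = (rng.random(), rng.random())
        if not in_F(qp):
            continue               # rejection sampling: q' must lie in F
        total += 1
        seen += sees(q, qp)
    return seen / total

eps = epsilon_estimate((0.1, 0.1))  # strictly between 0 and 1: the disc casts a "shadow"
```

The estimate is below 1 because configurations hidden behind the disc are not visible from q, which is precisely the kind of small-region effect the connectivity discussion warns about.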

