CMU CS 15381 - Planning in Dynamic Environments

Planning in Dynamic Environments
Maxim Likhachev
Robotics Institute, Carnegie Mellon University

Autonomous Agents in Dynamic Environments
[Figure: an ATRV robot and a segbot robot, together with the 3D and 2D maps they build]

Search-based Planning
• the robot's knowledge of the world is discretized into a planning map
• the planning map is converted into a graph: each cell becomes a state (S1, ..., S6 in the illustration), connected as an eight-connected grid
• the graph is searched for a least-cost path from s_start to s_goal
• why?

High Dimensional Search-based Planning
• 2D (x, y) planning: 54K states – fast planning, slow execution
• 4D (x, y, θ, V) planning: over 20 million states – slow planning, fast execution
• why?
• 6 DOF robot arm: > 3·10^9 states
• 20 DOF robot arm: > 10^26 states

Planning in Real World
• need to re-plan often due to
  – changes in the environment
    • navigation with people around
    • autonomous car driving with other cars on the road
  – inaccuracy in the model of the environment
  – errors in the position estimate
• need to re-plan fast!
[Video: 4D planning with Anytime D* (Anytime Dynamic A*)]
[Video: 3D parking planning with Anytime D* (Anytime Dynamic A*) for the next DARPA Grand Challenge]

Planning in Real World
• Replanning algorithms (e.g., D* and D* Lite – incremental versions of A*)
  – speed up the task of repeated planning by reusing previous planning efforts
  – well-suited for dynamic and/or partially known environments
• Anytime planning algorithms (e.g., ARA* – an anytime version of A*)
  – find a first, possibly highly suboptimal, solution quickly and use the remaining time to improve it
  – allow the planner to meet time constraints
• Anytime replanning algorithms (e.g., Anytime D* – anytime incremental A*)
  – combine the benefits of the two

Search for a Least-cost Path
• Compute optimal g-values for relevant states
  – g(s) – an estimate of the cost of a least-cost path from s_start to s
  – optimal values satisfy: g(s) = min_{s'' ∈ pred(s)} [g(s'') + c(s'', s)], where c(s'', s) is the cost of the edge from s'' to s
  – why?
[Figure: example graph with states s_start, s1–s4, s_goal, the edge costs, and the resulting g-values (g(s_start) = 0)]
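As an illustration (not part of the lecture slides), the following minimal Python sketch computes optimal g-values on a small made-up graph with a Dijkstra-style search; the state names and edge costs are invented, and the resulting values satisfy the recurrence g(s) = min_{s'' ∈ pred(s)} [g(s'') + c(s'', s)].

# Computing optimal g-values with a Dijkstra-style search (illustrative sketch).
# The graph below is made up; edges[u] lists (successor, edge cost) pairs.
import heapq

edges = {
    "s_start": [("s1", 1), ("s2", 1)],
    "s1":      [("s3", 2), ("s_goal", 4)],
    "s2":      [("s3", 3)],
    "s3":      [("s_goal", 1)],
    "s_goal":  [],
}

def compute_g_values(edges, s_start):
    """Return optimal g-values: the cost of a least-cost path from s_start to each state."""
    g = {s: float("inf") for s in edges}
    g[s_start] = 0.0
    open_list = [(0.0, s_start)]            # priority queue ordered by g
    closed = set()
    while open_list:
        g_s, s = heapq.heappop(open_list)
        if s in closed:
            continue                        # skip stale queue entries
        closed.add(s)
        for s_next, cost in edges[s]:       # relax every outgoing edge of s
            if g_s + cost < g[s_next]:
                g[s_next] = g_s + cost
                heapq.heappush(open_list, (g[s_next], s_next))
    return g

print(compute_g_values(edges, "s_start"))
# {'s_start': 0.0, 's1': 1.0, 's2': 1.0, 's3': 3.0, 's_goal': 4.0}

Here g(s_goal) = 4.0 is reached through s_start → s1 → s3 → s_goal; this table of g-values is exactly what the backtracking step below reads off.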
Search for a Least-cost Path
• Least-cost path is a greedy path computed by backtracking:
  – start with s_goal and from any state s move to the predecessor state s' such that
    s' = argmin_{s'' ∈ pred(s)} [g(s'') + c(s'', s)]
[Figure: the same example graph with its g-values and edge costs]

A* Search
• Computes optimal g-values for relevant states
  – g(s) – the cost of a shortest path from s_start to s found so far
  – h(s) – an (under)estimate of the cost of a shortest path from s to s_goal
• At any point of time:
[Figure: a path from s_start to some state s of cost g(s), followed by the estimated remaining cost h(s) from s to s_goal]

Main function
  g(s_start) = 0; all other g-values are infinite; OPEN = {s_start};
  ComputePath();
  publish solution;

ComputePath function
  while (s_goal is not expanded)
    remove s with the smallest [f(s) = g(s) + h(s)] from OPEN;
    expand s;

where expanding s means:
    insert s into CLOSED;
    for every successor s' of s such that s' is not in CLOSED
      if g(s') > g(s) + c(s, s')
        g(s') = g(s) + c(s, s');
        insert s' into OPEN;

• OPEN – the set of candidates for expansion
• CLOSED – the set of states that have already been expanded
• the update of g(s') tries to decrease it using the path found from s_start to s
• for every expanded state, g(s) is optimal (if the heuristics are consistent)

A* Search: Example
[Figure: the example graph annotated with h-values (h(s_goal) = 0, h(s_start) = 3); initially g(s_start) = 0 and all other g-values are ∞]
• CLOSED = {}, OPEN = {s_start}
• next state to expand: s_start
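To make the pseudocode concrete, here is a minimal, self-contained Python sketch of A* on an eight-connected grid. It is not taken from the lecture: the map, the unit/√2 step costs, and the octile-distance heuristic are assumptions for illustration, and the path is recovered through backpointers rather than the g-value backtracking described above.

# A* on an eight-connected grid (illustrative sketch, not the lecture's code).
# grid[r][c] == 1 marks an obstacle; moves cost 1 (straight) or sqrt(2) (diagonal).
import heapq
import math

def astar(start, goal, grid):
    rows, cols = len(grid), len(grid[0])

    def h(s):
        # Octile distance: a consistent heuristic for these step costs.
        dr, dc = abs(s[0] - goal[0]), abs(s[1] - goal[1])
        return max(dr, dc) + (math.sqrt(2) - 1.0) * min(dr, dc)

    def successors(s):
        # Eight-connected neighbours (corner-cutting past obstacles is not checked).
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == 0 and dc == 0:
                    continue
                r, c = s[0] + dr, s[1] + dc
                if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                    yield (r, c), math.hypot(dr, dc)

    g = {start: 0.0}
    backp = {start: None}              # backpointers for path extraction
    OPEN = [(h(start), start)]         # priority queue keyed by f(s) = g(s) + h(s)
    CLOSED = set()                     # states that have already been expanded
    while OPEN:
        _, s = heapq.heappop(OPEN)     # s with the smallest f(s)
        if s in CLOSED:
            continue                   # stale entry left over from an earlier decrease
        if s == goal:                  # stop once s_goal is about to be expanded
            path = []
            while s is not None:
                path.append(s)
                s = backp[s]
            return path[::-1], g[goal]
        CLOSED.add(s)
        for s2, c in successors(s):    # expand s
            if s2 not in CLOSED and g[s] + c < g.get(s2, math.inf):
                g[s2] = g[s] + c       # decrease g(s2) via the path through s
                backp[s2] = s
                heapq.heappush(OPEN, (g[s2] + h(s2), s2))
    return None, math.inf              # no path exists

# Hypothetical 5x5 planning map: 0 = free, 1 = obstacle.
grid = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 1, 0],
        [1, 1, 0, 1, 0],
        [0, 0, 0, 0, 0]]
path, cost = astar((0, 0), (4, 4), grid)
print(cost)   # about 6.24 for this map (2 straight + 3 diagonal moves)
print(path)   # the sequence of grid cells from start to goal

Because the octile heuristic is consistent, every expanded state already has its optimal g-value, matching the claim on the A* slide, so no state ever needs to be re-expanded.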

