Robert Effinger
Final Project Report
Cognitive Robotics 16.412J - Spring 2004
May 13, 2004

Table of Contents

1.) Table of Contents
2.) Overall Description
    a.) Brief Group Project Description
    b.) Individual Accomplishments
3.) Create Framework for TPN to be solved as a Conditional CSP
    a.) Definition of a Conditional CSP
    b.) Implementation Example - Two Functions
        i.) getNextDecisionStartNodes()
        ii.) deactivateNestedDecisionStartNodes()
    c.) Pseudo-code and function explanations
4.) Dynamic Backtracking Implementation within Kirk
    a.) Overall Description of Dynamic Backtracking
    b.) Pseudo-code
    c.) Insights from Testing Dynamic Backtracking on Representative TPNs
5.) Interfaces with Generative Planner and Roadmap Path Planner
    a.) Roadmap Path Planner
        i.) A Macro to update lower activity time-bounds
    b.) Generative Planner
        ii.) Passing TPN structure by pointer
6.) Branch and Bound w/ Dynamic Backtracking for Anytime and Optimal Search
7.) Summary and Conclusions

2.) Overall Description

2.a) Brief Group Project Description

For the Final Group Project in 16.412J, Dan Leaute, Seung Chung, Dan Lovell, and I developed an autonomous cooperative UAV demonstration using the Cloud-Cap Autopilot Airplane Simulator. Our group devised a scenario and map in which a team of UAVs can cooperatively plan to extinguish forest fires. The goal of this project is to create a cooperative multi-UAV activity planner that needs only high-level mission goals and specifications as input (such as put-out-fire or image-fire-damage). The planner should then be able to decompose the high-level mission goals into lower-level goals and motion commands, and then execute them autonomously. Allowing the operator to specify high-level mission goals instead of sending detailed sequential commands to the UAVs reduces planning time and operator error, and increases plan flexibility.

[Figure 1: Autonomous Firefighting Cooperative UAV Scenario. Legend: Fire, No-Fly Zone, Water, UAV Base, Seeker UAV, Water UAV. Plan Goal: Extinguish All Fires. Vehicles: One Water UAV (to extinguish fires) and Two Seeker UAVs (to image the fires). Resources: Fuel & Water.]

To accomplish the goals set out above, the group has devised an integrated planner architecture that leverages the strengths of four separate planners. The four planners are Kirk, a strategic high-level mission planner; PDDL, a generative activity planner; dStarLite, a roadmap path planner; and a MILP-based kinodynamic receding horizon path planner. Kirk allows the operator to specify goals at an abstract level in RMPL, the Reactive Model-Based Programming Language. The generative planner can then expand the high-level goals into lower-level activities. Then, the MILP-based receding horizon kinodynamic path planner generates an optimal motion plan, including obstacle avoidance. The dStarLite roadmap path planner is called by all three aforementioned planners to get shortest-path distance-to-goal measurements and estimates. (A rough sketch of this pipeline appears at the end of this section.)

2.b) Individual Accomplishments

My individual contributions are three-fold:

1.) Created a framework within Kirk to solve a TPN as a conditional CSP.
2.) Implemented Dynamic Backtracking within Kirk (to make the search more efficient).
3.) Interfaced with the Generative and Roadmap Path Planners.
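As a rough, hypothetical illustration of the data flow just described, the C++ sketch below wires the four planners together in the order used by the integrated architecture. Every class and method name (KirkMissionPlanner, GenerativeActivityPlanner, DStarLiteRoadmap, MilpTrajectoryPlanner, planMission) is an assumption made for this sketch, not the project's actual interface.

// Hypothetical sketch only: illustrates the order in which the four planners
// are invoked; none of these types exist in the actual project code.
#include <string>
#include <vector>

struct Goal     { std::string rmpl; };     // high-level goal written in RMPL
struct Activity { std::string name; };     // lower-level activity or motion command
struct Waypoint { double x = 0, y = 0; };

struct DStarLiteRoadmap {                  // roadmap path planner, shared by the other planners
    double shortestPathEstimate(const Waypoint&, const Waypoint&) const { return 0.0; }
};

struct KirkMissionPlanner {                // strategic, high-level mission planner
    std::vector<Activity> plan(const std::vector<Goal>&) const { return {}; }
};

struct GenerativeActivityPlanner {         // generative (PDDL-style) activity expansion
    std::vector<Activity> expand(const std::vector<Activity>& a) const { return a; }
};

struct MilpTrajectoryPlanner {             // MILP-based receding horizon kinodynamic planner
    std::vector<Waypoint> solve(const std::vector<Activity>&,
                                const DStarLiteRoadmap&) const { return {}; }
};

// One pass through the integrated architecture: RMPL goals are decomposed into
// activities, activities into lower-level primitives, and primitives into a
// motion plan, with the roadmap planner available for distance-to-goal estimates.
std::vector<Waypoint> planMission(const std::vector<Goal>& goals) {
    KirkMissionPlanner kirk;
    GenerativeActivityPlanner generative;
    DStarLiteRoadmap roadmap;
    MilpTrajectoryPlanner milp;

    const std::vector<Activity> strategic = kirk.plan(goals);
    const std::vector<Activity> detailed  = generative.expand(strategic);
    return milp.solve(detailed, roadmap);
}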
3.) Create Framework for TPN to be solved as a Conditional CSP

The definition of a classic CSP (not a conditional CSP) is as follows:

Constraint Satisfaction Problem (I, V, C)
- I, a set of variables
- Vi, a set of possible values for each variable in I
- C, a set of Ci,j constraints, each a binary relation
  C = {C1,1 ... C1,n, C2,1 ... C2,n, ..., Cn,n}

Figure 2: Definition of a classic CSP

The solution to a classic CSP is found when each variable in I is assigned a value from its domain Vi and the set of all constraints {C} is satisfied. In a conditional CSP, however, the set of variables {I} that need to be assigned values may grow or shrink in size depending on the values already assigned to variables in the partial solution, {P}. In the context of a TPN, this occurs when there are nested decision nodes. A decision node is considered to be nested if its inarc belongs to a path that emanates from the outarc of another decision node. A nested decision node only needs to be assigned a value when its "parent," or "enabling," decision node has a particular outarc selected as its choice. For all other choices of outarcs that the "parent" decision node can take, the "nested" decision node does not need to be considered as part of the problem.

To frame the TPN search process as a conditional CSP, two functions were added to the TPN data structure, and a LIFO queue was added to keep track of which decision nodes need to be assigned values before the search is complete. The two functions are:

1.) getNextDecisionStartNodes()
2.) deactivateNestedDecisionStartNodes()

1.) getNextDecisionStartNodes()

The function getNextDecisionStartNodes() recursively "walks" the TPN and returns the first set (actually a map, not a set) of decision start node(s) that it encounters. (The function can return more than one decision start node if there are parallel branches in the TPN.)

The search is initiated by calling getNextDecisionStartNodes(startNode) on the startNode of the TPN. Any decision start nodes that are found are pushed onto the LIFO queue. The search process begins by trying to assign a consistent choice for the top decision start node on the LIFO queue. If a consistent value for the first decision start node is found, it is popped from the queue, and getNextDecisionStartNodes() is called with that decision start node as the argument. Any decision node(s) returned from the function are pushed onto the LIFO queue, and the process is repeated until the LIFO queue is empty. An empty LIFO queue means that the search is complete and a full, consistent assignment has been made to the TPN. This process is illustrated in Figure 3, Part A.
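The following C++ sketch makes the LIFO-queue control flow above concrete. It is a minimal sketch only: the TPN node type, the stubbed function bodies, and the helper assignConsistentChoice() are assumptions made for illustration; the actual pseudo-code for getNextDecisionStartNodes() and deactivateNestedDecisionStartNodes() appears later in the report.

// Minimal sketch of the LIFO-queue search loop described above, under
// simplified, assumed TPN structures; not the actual Kirk implementation.
#include <map>
#include <stack>
#include <vector>

struct TPNNode {
    int id = 0;
    std::vector<TPNNode*> outarcs;    // choices emanating from this node
    bool isDecisionStart = false;
};

struct TPN {
    TPNNode* startNode = nullptr;

    // Recursively walks the TPN from 'from' and returns the first decision start
    // node(s) encountered, keyed by node id (a map, not a set; more than one entry
    // is possible when branches are parallel). Stubbed here; the recursive walk is
    // given as pseudo-code in the report.
    std::map<int, TPNNode*> getNextDecisionStartNodes(TPNNode* /*from*/) { return {}; }

    // Deactivates decision start nodes nested under the unselected outarcs of
    // 'decision', so they drop out of the active problem. Stubbed here.
    void deactivateNestedDecisionStartNodes(TPNNode* /*decision*/) {}

    // Hypothetical helper (not named in the report): tries to select a consistent
    // outarc for 'decision'; returns false if no choice is consistent.
    bool assignConsistentChoice(TPNNode* decision) { return !decision->outarcs.empty(); }
};

// Conditional-CSP search loop: decision start nodes are activated lazily and
// tracked on a LIFO queue; an empty queue means a full, consistent assignment.
bool solveAsConditionalCSP(TPN& tpn) {
    std::stack<TPNNode*> lifo;    // LIFO queue of decision start nodes still to assign
    for (const auto& [id, node] : tpn.getNextDecisionStartNodes(tpn.startNode))
        lifo.push(node);

    while (!lifo.empty()) {
        TPNNode* decision = lifo.top();
        if (!tpn.assignConsistentChoice(decision))
            return false;    // no consistent choice here; backtracking is covered in Section 4
        lifo.pop();          // consistent value assigned, remove from the queue
        tpn.deactivateNestedDecisionStartNodes(decision);
        for (const auto& [id, node] : tpn.getNextDecisionStartNodes(decision))
            lifo.push(node); // newly activated nested decision start nodes
    }
    return true;             // full and consistent assignment made to the TPN
}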

