UMD CMSC 421 - Planning

Planning
Russell and Norvig: Chapter 11
CMSC 421 – Fall 2003
Based on material from Jim Blythe, JC Latombe, Marie desJardins, and Daphne Koller

Planning Agent
[Diagram: an agent with sensors and actuators interacting with its environment, choosing among candidate actions A1, A2, A3.]

Planning problem
Find a sequence of actions that achieves a given goal when executed from a given initial world state. That is, given
• a set of operator descriptions (defining the possible primitive actions by the agent),
• an initial state description, and
• a goal state description or predicate,
compute a plan, which is
• a sequence of operator instances, such that executing them in the initial state will change the world to a state satisfying the goal-state description.
Goals are usually specified as a conjunction of subgoals to be achieved. (A small sketch of this formulation appears below.)

Planning vs. problem solving
Planning and problem-solving methods can often solve the same sorts of problems, but planning is more powerful because of the representations and methods used:
• States, goals, and actions are decomposed into sets of sentences (usually in first-order logic).
• Search often proceeds through plan space rather than state space (though there are also state-space planners).
• Subgoals can be planned independently, reducing the complexity of the planning problem.

Goal of Planning
Choose actions to achieve a certain goal. But isn't that exactly the same goal as for problem solving? Some difficulties with problem solving:
• The successor function is a black box: it must be "applied" to a state to know which actions are possible in that state and what the effects of each one are.
• Suppose that the goal is HAVE(MILK). From some initial state where HAVE(MILK) is not satisfied, the successor function must be repeatedly applied to eventually generate a state where HAVE(MILK) is satisfied.
• An explicit representation of the possible actions and their effects would help the problem solver select the relevant actions; otherwise, in the real world, an agent would be overwhelmed by irrelevant actions.
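Not part of the original slides: the following is a minimal Python sketch of the planning problem as formulated above, under an assumed STRIPS-like encoding in which each operator has a precondition set, an add list, and a delete list, and a breadth-first search looks for a sequence of operator instances that makes the conjunctive goal true. The operator names (Go, Buy) and the plan() helper are illustrative, not from the lecture.

    # Sketch only (not from the slides): an explicit, assumed STRIPS-like encoding of
    # operators, an initial state, a conjunctive goal, and a breadth-first search for
    # a sequence of operator instances that satisfies the goal.
    from collections import namedtuple, deque

    Operator = namedtuple("Operator", ["name", "precond", "add", "delete"])

    OPERATORS = [
        Operator("Go(Grocery)", {"At(Home)"},    {"At(Grocery)"}, {"At(Home)"}),
        Operator("Buy(Milk)",   {"At(Grocery)"}, {"Have(Milk)"},  set()),
        Operator("Go(Home)",    {"At(Grocery)"}, {"At(Home)"},    {"At(Grocery)"}),
    ]

    def plan(initial, goal, operators):
        """Return a list of operator names reaching a state where all goal literals hold."""
        frontier = deque([(frozenset(initial), [])])
        visited = {frozenset(initial)}
        while frontier:
            state, steps = frontier.popleft()
            if goal <= state:                       # goal is a conjunction of subgoals
                return steps
            for op in operators:
                if op.precond <= state:             # operator is applicable in this state
                    nxt = frozenset((state - op.delete) | op.add)
                    if nxt not in visited:
                        visited.add(nxt)
                        frontier.append((nxt, steps + [op.name]))
        return None

    print(plan({"At(Home)"}, {"Have(Milk)", "At(Home)"}, OPERATORS))
    # -> ['Go(Grocery)', 'Buy(Milk)', 'Go(Home)']

The explicit precondition/add/delete lists are exactly the kind of "explicit representation of the possible actions and their effects" that the slide above says a black-box problem solver lacks.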
Goal of Planning (continued)
• The goal test is another black-box function, states are domain-specific data structures, and heuristics must be supplied for each new problem.
• Suppose that the goal is HAVE(MILK) ∧ HAVE(BOOK). Without an explicit representation of the goal, the problem solver cannot know that a state where HAVE(MILK) is already achieved is more promising than a state where neither HAVE(MILK) nor HAVE(BOOK) is achieved.

Goal of Planning (continued)
• The goal may consist of several nearly independent subgoals, but there is no way for the problem solver to know it. HAVE(MILK) and HAVE(BOOK) may be achieved by two nearly independent sequences of actions.

Representations in Planning
Planning opens up the black boxes by using logic to represent actions, states, and goals:
problem solving + logic representation = planning.

Major approaches
• Situation calculus
• State-space planning
• Partial-order planning
• Planning graphs
• Hierarchical decomposition (HTN planning)
• Reactive planning
Planning is a rapidly changing subfield of AI. In the biannual competition at the AI Planning Systems Conference: four years ago, the best planner did plan-space search using a SAT solver; three years ago, the best planner did regression search; last year, the best planner did forward state-space search with an inadmissible heuristic function. (See also CMSC 722: Planning, taught by Prof. Nau.)

Situation Calculus Planning
• Formulate the planning problem in first-order logic (FOL).
• Use a theorem prover to find a proof (i.e., a plan).

Representing change
Representing change in the world in logic can be tricky. One way is just to change the KB: add and delete sentences from the KB to reflect changes. But then how do we remember the past, or reason about change? Situation calculus is another way: a situation is a snapshot of the world at some instant in time, and when the agent performs an action A in situation S1, the result is a new situation S2.

Situations
[Diagram: a chain of situations, each produced from the previous one by performing an action.]

Situation calculus
• A situation is a snapshot of the world over an interval of time during which nothing changes.
• Every true or false statement is made with respect to a particular situation. Add situation variables to every predicate: at(hunter,1,1) becomes at(hunter,1,1,s0), meaning that at(hunter,1,1) is true in situation (i.e., state) s0.
• Add a new function, result(a,s), that maps a situation s into the new situation that results from performing action a. For example, result(forward, s) returns the successor situation to s.
• Example: the action agent-walks-to-location-y could be represented by
  (∀x)(∀y)(∀s) (at(Agent,x,s) ∧ ¬onbox(s)) → at(Agent,y,result(walk(y),s))
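Not part of the original slides: a small Python sketch of the situation-calculus idea just described, with assumed names. Fluents take an extra situation argument, and result(a, s) names the successor situation; situations are represented here simply by the history of actions performed since the initial situation.

    # Sketch only (assumed toy encoding, not from the slides): fluents take a
    # situation argument, and result(a, s) names the successor situation.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Situation:
        history: tuple = ()          # actions performed since the initial situation s0

    S0 = Situation()                 # the initial situation

    def result(action, situation):
        """result(a, s): the situation produced by performing action a in situation s."""
        return Situation(situation.history + (action,))

    def at(obj, x, y, situation):
        """The fluent at(obj, x, y, s): a hard-coded toy axiomatization for the hunter."""
        if situation == S0:
            return (obj, x, y) == ("hunter", 1, 1)      # at(hunter,1,1,s0) holds
        if situation == result("forward", S0):
            return (obj, x, y) == ("hunter", 1, 2)      # assumed effect of moving forward
        return False

    print(at("hunter", 1, 1, S0))                        # True
    print(at("hunter", 1, 2, result("forward", S0)))     # True
    print(at("hunter", 1, 1, result("forward", S0)))     # False

This hard-codes the effect of forward for a single situation; writing such facts for every action and situation is the bookkeeping that the Result(a,s) axioms on the following slides, and ultimately successor-state axioms, are meant to systematize.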
Situation calculus planning
• Initial state: a logical sentence about (situation) S0:
  At(Home, S0) ∧ ¬Have(Milk, S0) ∧ ¬Have(Bananas, S0) ∧ ¬Have(Drill, S0)
• Goal state:
  (∃s) At(Home, s) ∧ Have(Milk, s) ∧ Have(Bananas, s) ∧ Have(Drill, s)
• Operators are descriptions of actions:
  (∀a,s) Have(Milk, Result(a,s)) ⇔ ((a = Buy(Milk) ∧ At(Grocery, s)) ∨ (Have(Milk, s) ∧ a ≠ Drop(Milk)))
  Result(a,s) names the situation resulting from executing action a in situation s.
• Action sequences are also useful: Result'(l,s) is the result of executing the list of actions l starting in s:
  (∀s) Result'([], s) = s
  (∀a,p,s) Result'([a|p], s) = Result'(p, Result(a,s))

Situation calculus planning II
A solution is thus a plan that, when applied to the initial state, yields a situation satisfying the goal query:
  At(Home, Result'(p,S0)) ∧ Have(Milk, Result'(p,S0)) ∧ Have(Bananas, Result'(p,S0)) ∧ Have(Drill, Result'(p,S0))
Thus we would expect a plan (i.e., a variable assignment obtained through unification) such as:
  p = [Go(Grocery), Buy(Milk), Buy(Bananas), Go(HardwareStore), Buy(Drill), Go(Home)]

SC planning: analysis
This is fine in theory, but remember that problem solving (search) is exponential in the worst case. Also, resolution theorem proving only finds a proof (plan), not necessarily a good plan. Another important issue is the frame problem.

The Frame Problem
In situation calculus, we need not only axioms describing what changes in each situation, but also axioms describing what stays the same (this can be done with successor-state axioms).
• Qualification problem: the difficulty of specifying all the conditions that must hold in order for an action to work.
• Ramification problem: the difficulty of specifying all of the effects that will hold after an action is performed.
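Not part of the original slides: a toy Python sketch of how the Have(Milk) operator description above, read as a successor-state axiom, determines the truth of Have(Milk) in the situation reached by a list of actions (i.e., in Result'(p, S0)). The string-based action encoding and the at_grocery helper are assumptions made for illustration.

    # Sketch only (assumed encoding, not from the slides): evaluating the axiom
    # Have(Milk, Result(a, s)) <=> (a = Buy(Milk) and At(Grocery, s))
    #                              or (Have(Milk, s) and a != Drop(Milk))
    # along an action list, which plays the role of Result'(p, S0).

    def at_grocery(history):
        """At(Grocery, s): true iff the last Go action in the history was Go(Grocery)."""
        gos = [a for a in history if a.startswith("Go(")]
        return bool(gos) and gos[-1] == "Go(Grocery)"

    def have_milk(history):
        """Have(Milk, s) for the situation reached by the action list `history` from S0."""
        if not history:
            return False                                  # ~Have(Milk, S0)
        *prefix, a = history
        return (a == "Buy(Milk)" and at_grocery(prefix)) or \
               (have_milk(prefix) and a != "Drop(Milk)")

    print(have_milk(["Go(Grocery)", "Buy(Milk)", "Go(Home)"]))   # True
    print(have_milk(["Buy(Milk)"]))                              # False: not At(Grocery, S0)

The single axiom covers both the effect (buying milk while at the grocery) and the frame condition (milk persists unless dropped), which is the successor-state-axiom response to the frame problem described above.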

