UMBC CMCS 471 - Informed Search

Informed Search (Chapter 4)

Outline:
• Informed Methods Add Domain-Specific Information
• Heuristics
• Best First Search
• Greedy Search
• Algorithm A
• Algorithm A*
• Some Observations on A*
• Example search space
• Example
• Greedy Algorithm
• A* Search
• Proof of the Optimality of A*
• Iterative Deepening A* (IDA*)
• Automatic generation of h functions
• Complexity of A* search
• Iterative Improvement Search
• Hill Climbing on a Surface of States
• Hill Climbing Search
• Hill climbing example
• Drawbacks of Hill-Climbing
• Simulated Annealing
• Observations with SA
• Genetic Algorithms
• Informed Search Summary

Informed Methods Add Domain-Specific Information
• Add domain-specific information to select the best path along which to continue searching.
• Define a heuristic function, h(n), that estimates the "goodness" of a node n with respect to reaching a goal.
• Specifically, h(n) = estimated cost (or distance) of the minimal-cost path from n to a goal state.
• h(n) estimates the cost of the future search; g(n) measures the cost of the search done so far.
• h(n) is an estimate (a rule of thumb) based on domain-specific information that is computable from the current state description. Heuristics do not guarantee feasible solutions and are often without theoretical basis.

Heuristics
• Examples:
– Missionaries and Cannibals: number of people on the starting river bank
– 8-puzzle: number of tiles out of place (i.e., not in their goal positions)
– 8-puzzle: sum of the Manhattan distances of each tile from its goal position
– 8-queens: number of un-attacked positions minus number of un-positioned queens
• In general:
– h(n) >= 0 for all nodes n
– h(n) = 0 implies that n is a goal node
– h(n) = infinity implies that n is a dead end from which a goal cannot be reached

Best First Search
• Order nodes on the OPEN list by increasing value of an evaluation function, f(n), that incorporates domain-specific information in some way.
• Examples of f(n):
– f(n) = g(n) (uniform-cost search)
– f(n) = h(n) (greedy algorithm)
– f(n) = g(n) + h(n) (algorithm A)
• "Best first search" is a generic way of referring to this whole class of informed methods.

Greedy Search
• Evaluation function f(n) = h(n); open nodes are sorted by increasing values of f.
• Selects for expansion the node believed to be closest to a goal node (hence "greedy"), i.e., the one with the smallest f = h value.
• Not admissible, as in the example below: assuming all arc costs are 1, greedy search finds goal f, whose solution cost is 5, while the optimal solution is the path to goal i with cost 3.
• Not complete (if there is no duplicate check).

[Figure: search tree with nodes a through i labeled with their h-values; the goal nodes f and i have h = 0.]

Algorithm A
• Use f(n) = g(n) + h(n) as the evaluation function.
• g(n) = cost of the minimal-cost path from the start state to state n generated so far; the g(n) term adds a "breadth-first" component to f(n).
• The h(n) term represents a "depth-first" factor in f(n).
• Ranks nodes on the OPEN list by the estimated cost of a solution from the start node through the given node to a goal.
• Not complete if h(n) can equal infinity.
• Not admissible.

[Figure: example graph with start node S, goal G, and intermediate nodes A, B, C, D; with f(D) = 4 + 9 = 13, f(B) = 5 + 5 = 10, and f(C) = 8 + 1 = 9, C is chosen next to expand, even though h*(D) = 2 and h*(C) = 2.]
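
The uniform-cost, greedy, and Algorithm A variants above differ only in how f(n) is computed, so the whole family can be written as one search routine with a pluggable evaluation function. The Python sketch below illustrates that idea (the name best_first_search, the successors callback, and the lazy duplicate handling are illustrative assumptions, not part of the slides); for simplicity it skips a CLOSED node instead of re-opening it when a cheaper path is found, a step the slide-style Algorithm A pseudocode that follows does handle.

import heapq
import itertools

def best_first_search(start, is_goal, successors, f):
    # Generic best-first search: always expand the OPEN node with the smallest
    # f(g, n), where g is the path cost accumulated so far.
    #   uniform-cost:  f = lambda g, n: g
    #   greedy:        f = lambda g, n: h(n)
    #   Algorithm A:   f = lambda g, n: g + h(n)
    tie = itertools.count()            # tie-breaker so the heap never compares nodes
    open_list = [(f(0, start), next(tie), 0, start, [start])]
    closed = set()
    while open_list:
        _, _, g, n, path = heapq.heappop(open_list)
        if is_goal(n):
            return path, g             # solution path and its cost
        if n in closed:
            continue                   # stale duplicate entry; skip instead of re-opening
        closed.add(n)
        for child, step_cost in successors(n):   # successors(n) yields (child, arc cost) pairs
            if child not in closed:
                g2 = g + step_cost
                heapq.heappush(open_list,
                               (f(g2, child), next(tie), g2, child, path + [child]))
    return None, float("inf")          # OPEN exhausted: no solution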

Algorithm A (pseudocode)

OPEN := {S}; CLOSED := {};
repeat
    select the node n from OPEN with minimal f(n); move n from OPEN to CLOSED;
    if n is a goal node, exit with success;
    Expand(n);
    for each child n' of n do
        if n' is not already on OPEN or CLOSED then
            put n' on OPEN; set a backpointer from n' to n;
            compute h(n'), g(n') = g(n) + c(n, n'), f(n') = g(n') + h(n');
        else if n' is already on OPEN or CLOSED and g(n') is lower for the new version of n' then
            discard the old version of n';
            put n' on OPEN; set a backpointer from n' to n;
until OPEN = {};
exit with failure

Algorithm A*
• Algorithm A with the constraint that h(n) <= h*(n).
• h*(n) = true cost of the minimal-cost path from n to any goal.
• g*(n) = true cost of the minimal-cost path from S to n.
• f*(n) = g*(n) + h*(n) = true cost of the minimal-cost solution path from S to any goal that goes through n.
• h is admissible when h(n) <= h*(n) holds for all n.
• Using an admissible heuristic guarantees that the first solution found is an optimal one.
• A* is complete whenever the branching factor is finite and every operator has a fixed positive cost (so the total number of nodes with f(.) <= f*(goal) is finite).
• A* is admissible.

Some Observations on A*
• Null heuristic: if h(n) = 0 for all n, then h is admissible and A* behaves like uniform-cost search.
• Better heuristic: if h1(n) <= h2(n) <= h*(n) for all non-goal nodes, then h2 is a better heuristic than h1.
– If A1* uses h1 and A2* uses h2, then every node expanded by A2* is also expanded by A1*.
– In other words, A1* expands at least as many nodes as A2*.
– We say that A2* is better informed than A1*.
• The closer h is to h*, the fewer extra nodes will be expanded.
• Perfect heuristic: if h(n) = h*(n) for all n, then only the nodes on the optimal solution path are expanded, so no extra work is performed.

Example search space

[Figure: example search space with start state S, goal state G, and nodes A, B, C, D, E; each arc is labeled with its cost and each node with its g and h values, with parent pointers shown on the tree.]

Example

n    g(n)   h(n)   f(n)   h*(n)
S     0      8      8       9
A     1      8      9       9
B     5      4      9       4
C     8      3     11       5
D     4     inf    inf     inf
E     8     inf    inf     inf
G     9      0      9       0

• h*(n) is the (hypothetical) perfect heuristic.
• Since h(n) <= h*(n) for all n, h is admissible.
• Optimal path = S B G with cost 9.

Greedy Algorithm
f(n) = h(n)

node exp.    OPEN list
             {S(8)}
S            {C(3) B(4) A(8)}
C            {G(0) B(4) A(8)}
G            {B(4) A(8)}

• Solution path found is S C G, with cost 13.
• 3 nodes expanded.
• Fast, but not optimal.

A* Search
f(n) = g(n) + h(n)

node exp.    OPEN list
             {S(8)}
S            {A(9) B(9) C(11)}
A            {B(9) G(10) C(11) D(inf) E(inf)}
B            {G(9) G(10) C(11) D(inf) E(inf)}
G            {C(11) D(inf) E(inf)}

• Solution path found is S B G, with cost 9.
• 4 nodes expanded.
• Still pretty fast, and optimal too.

Proof of the Optimality of A*
• Let l* be the optimal solution path (from S to G) and let f* be its cost.
• At any time during the search, one or more nodes on l* are on OPEN.
• Assume that A* has selected G2, a goal state with a suboptimal solution (g(G2) > f*).
• We show that this is impossible.
– Let node n be the shallowest OPEN node on l*.
– Because all ancestors of n along l* have already been expanded, g(n) = g*(n).
– Because h is admissible, h*(n) >= h(n). Then f* = g*(n) + h*(n) >= g*(n) + h(n) = g(n) + h(n) = f(n).
– If A* selected G2 for expansion before n, then f(G2) <= f(n) <= f*; but f(G2) = g(G2) + h(G2) = g(G2) > f*, a contradiction. Hence A* never selects a suboptimal goal.
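
To check the greedy and A* traces above, the example search space can be encoded and run through the best_first_search sketch given earlier. The graph below is an assumed reconstruction: the arc costs are inferred from the g-values in the two traces and from the h*(n) column of the table (e.g., g(B) = 5 gives arc S-B cost 5, and h*(B) = 4 gives arc B-G cost 4); the heuristic values come straight from the h(n) column.

# Example search space, reconstructed from the table and the two traces above.
GRAPH = {
    "S": [("A", 1), ("B", 5), ("C", 8)],
    "A": [("D", 3), ("E", 7), ("G", 9)],
    "B": [("G", 4)],
    "C": [("G", 5)],
    "D": [], "E": [], "G": [],
}
H = {"S": 8, "A": 8, "B": 4, "C": 3, "D": float("inf"), "E": float("inf"), "G": 0}

is_goal = lambda n: n == "G"
successors = lambda n: GRAPH[n]

# Greedy: f(n) = h(n)           -> (['S', 'C', 'G'], 13)  fast, but not optimal
print(best_first_search("S", is_goal, successors, lambda g, n: H[n]))

# A*: f(n) = g(n) + h(n)        -> (['S', 'B', 'G'], 9)   optimal, since h is admissible
print(best_first_search("S", is_goal, successors, lambda g, n: g + H[n]))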

