Informed search algorithms
Chapter 4

Outline
- Best-first search
- Greedy best-first search
- A* search
- Heuristics
- Local search algorithms
- Hill-climbing search
- Simulated annealing search
- Local beam search
- Genetic algorithms

Best-first search
Idea: use an evaluation function f(n) for each node.
- f(n) provides an estimate of the total cost.
- Expand the node n with the smallest f(n).
Implementation: order the nodes in the fringe in increasing order of cost.
Special cases:
- greedy best-first search
- A* search

Romania with straight-line distances
[Map figure: Romania road map with straight-line distances to Bucharest]

Greedy best-first search
- f(n) = estimate of the cost from n to the goal
- e.g., f(n) = straight-line distance from n to Bucharest
- Greedy best-first search expands the node that appears to be closest to the goal.

Greedy best-first search example
[Figure slides: successive greedy expansions on the Romania map]

Properties of greedy best-first search
- Complete? No - it can get stuck in loops.
- Time? O(b^m), but a good heuristic can give dramatic improvement.
- Space? O(b^m) - it keeps all nodes in memory.
- Optimal? No. E.g., the path
Arad → Sibiu → Rimnicu Vilcea → Pitesti → Bucharest is shorter than the route greedy search finds via Fagaras!
Think of an example:
[Figure: small graph with nodes a, b, c, d, g; start and goal states marked; f(n) = straight-line distance]

A* search
Idea: avoid expanding paths that are already expensive.
Evaluation function: f(n) = g(n) + h(n)
- g(n) = cost so far to reach n
- h(n) = estimated cost from n to the goal
- f(n) = estimated total cost of the path through n to the goal
Greedy best-first search has f(n) = h(n); uniform-cost search has f(n) = g(n).

Admissible heuristics
A heuristic h(n) is admissible if for every node n, h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal state from n.
An admissible heuristic never overestimates the cost to reach the goal, i.e., it is optimistic.
Example: hSLD(n) (never overestimates the actual road distance).
Theorem: if h(n) is admissible, A* using TREE-SEARCH is optimal.

Admissible heuristics
E.g., for the 8-puzzle:
- h1(n) = number of misplaced tiles
- h2(n) = total Manhattan distance (i.e., the number of squares each tile is from its desired location)
h1(S) = 8
h2(S) =
3+1+2+2+2+3+3+2 = 18

Dominance
If h2(n) ≥ h1(n) for all n (both admissible), then h2 dominates h1.
h2 is better for search: it is guaranteed to expand no more nodes than h1.
Typical search costs (average number of nodes expanded):
- d = 12: IDS = 3,644,035 nodes; A*(h1) = 227 nodes; A*(h2) = 73 nodes
- d = 24: IDS = too many nodes; A*(h1) = 39,135 nodes; A*(h2) = 1,641 nodes

Relaxed problems
A problem with fewer restrictions on the actions is called a relaxed problem.
The cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem.
- If the rules of the 8-puzzle are relaxed so that a tile can move anywhere, then h1(n) gives the shortest solution.
- If the rules are relaxed so that a tile can move to any adjacent square, then h2(n) gives the shortest solution.

Consistent heuristics
A heuristic is consistent if for every node n and every successor n' of n generated by any action a,
h(n) ≤ c(n,a,n') + h(n')
(It's the triangle inequality!)
If h is consistent, we have
f(n') = g(n') + h(n')             (by definition)
      = g(n) + c(n,a,n') + h(n')  (since g(n') = g(n) + c(n,a,n'))
      ≥ g(n) + h(n) = f(n)        (consistency)
so f(n') ≥ f(n), i.e., f(n) is non-decreasing along any path.
Theorem: if h(n) is consistent, A* using GRAPH-SEARCH is optimal.
(GRAPH-SEARCH keeps all expanded nodes in memory to avoid repeated states.)

A* search example
[Figure slides: successive A* expansions on the Romania map]

Properties of A*
- Complete? Yes (unless there are infinitely many nodes with f ≤ f(G), i.e., provided every step-cost > ε).
- Time/Space? Exponential, O(b^d), in general; sub-exponential only when the heuristic error grows at most logarithmically: |h(n) − h*(n)| ≤ O(log h*(n)).
- Optimal? Yes.
- Optimally efficient? Yes (no algorithm with the same heuristic is guaranteed to expand fewer nodes).

Optimality of A* (proof)
Suppose some suboptimal goal G2 has been generated and is in the fringe.
Let n be an unexpanded node in the fringe such that n is on a shortest path to an optimal goal G.
We want to prove f(n) < f(G2) (then A* will prefer n over G2):
- f(G2) = g(G2)                  since h(G2) = 0
- f(G) = g(G)                    since h(G) = 0
- g(G2) > g(G)                   since G2 is suboptimal
- f(G2) > f(G)                   from the above
- h(n) ≤ h*(n)                   since h is admissible (an under-estimate)
- g(n) + h(n) ≤ g(n) + h*(n)     from the above
- f(n) ≤ f(G)                    since g(n) + h(n) = f(n) and g(n) + h*(n) = f(G)
- f(n) < f(G2)                   from f(n) ≤ f(G) and f(G) < f(G2)

Optimality of A*
- A* expands nodes in order of increasing f value.
- It gradually adds "f-contours" of nodes: contour i contains all nodes with f ≤ f_i, where f_i < f_(i+1).

Exercise (try yourself)
[Graph figure: step-costs on the paths from the start S to the goal G over nodes S, A, B, C, D, E, F; straight-line distances listed on the right.]
Straight-line distances: h(S-G) = 10, h(A-G) = 7, h(B-G) = 10, h(C-G) = 20, h(D-G) = 1, h(E-G) = 8, h(F-G) = 1.
The graph shows the step-costs for the different paths going from the start (S) to the goal (G).
1. Draw the search tree for this problem. Avoid repeated states.
2. Give the order in which the tree is searched (e.g., S-C-B-...-G) for A* search. Use the straight-line distance as the heuristic function, i.e. h = SLD, and indicate for each visited node the value of the evaluation function f.

Memory-bounded heuristic search: recursive best-first search (RBFS)
How can we solve the memory problem of A* search?
Idea: try something like depth-first search, but without forgetting everything about the branches we have already partially explored.
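The RBFS idea can be sketched in a few lines of Python. This is a minimal illustration, not the book's pseudocode; the graph, heuristic values, and names below are hypothetical, and the sketch does tree search, so it assumes an acyclic successor structure. It recurses into the lowest-f child, unwinds as soon as f exceeds the best alternative, and backs up the cheapest f-value seen in the abandoned subtree, so only the current path is kept in memory:

```python
import math

def rbfs(graph, h, node, goal, g, f_limit):
    # Linear-space best-first search: follow the lowest-f child, but
    # unwind when its f exceeds the best alternative sibling's f.
    if node == goal:
        return [node], g
    successors = []
    for child, cost in graph[node]:
        # A child's f can never be lower than its parent's backed-up f.
        f = max(g + cost + h[child], g + h[node])
        successors.append([f, child, g + cost])
    if not successors:
        return None, math.inf
    while True:
        successors.sort()                      # lowest-f child first
        best = successors[0]
        if best[0] > f_limit:
            return None, best[0]               # fail; back up this subtree's f
        alternative = successors[1][0] if len(successors) > 1 else math.inf
        result, best[0] = rbfs(graph, h, best[1], goal, best[2],
                               min(f_limit, alternative))
        if result is not None:
            return [node] + result, best[0]

# Hypothetical acyclic graph with an admissible h (all names made up).
graph = {'S': [('A', 1), ('B', 2)], 'A': [('G', 5)], 'B': [('G', 2)], 'G': []}
h = {'S': 3, 'A': 4, 'B': 2, 'G': 0}
path, cost = rbfs(graph, h, 'S', 'G', 0, math.inf)
print(path, cost)  # ['S', 'B', 'G'] 4
```

Note the backed-up f-value written into `best[0]`: that is what lets RBFS later re-expand a forgotten subtree only when it becomes the cheapest option again.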
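For comparison, the plain f(n) = g(n) + h(n) scheme from earlier can be sketched as follows (the graph layout, costs, and heuristic values are made up for illustration). Dropping the g term turns it into greedy best-first search; dropping the h term gives uniform-cost search:

```python
import heapq

def a_star(graph, h, start, goal):
    # Expand the frontier node with the smallest f = g + h; with an
    # admissible, consistent h the first goal popped is optimal.
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {}                                  # cheapest g seen per node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if node in best_g and best_g[node] <= g:
            continue                             # already reached more cheaply
        best_g[node] = g
        for child, cost in graph[node]:
            g2 = g + cost
            heapq.heappush(frontier, (g2 + h[child], g2, child, path + [child]))
    return None, float('inf')

# Hypothetical graph; h here is admissible and consistent for these costs.
graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 12)],
         'B': [('G', 3)], 'G': []}
h = {'S': 5, 'A': 5, 'B': 3, 'G': 0}
print(a_star(graph, h, 'S', 'G'))  # (['S', 'A', 'B', 'G'], 6)
```

The `best_g` dictionary plays the role of GRAPH-SEARCH's explored set: it is why memory use, not time, is usually what limits A* in practice.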
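The two 8-puzzle heuristics can also be checked directly. Assumption: the start state S below is the standard textbook instance, chosen because it reproduces the totals quoted earlier (h1(S) = 8, h2(S) = 18); the goal puts the blank on square 0 and tile t on square t:

```python
# Assumed start state S (standard textbook instance); 0 is the blank.
start = (7, 2, 4,
         5, 0, 6,
         8, 3, 1)
# Goal: (0, 1, 2, 3, 4, 5, 6, 7, 8), i.e. tile t belongs on square t.

def h1(board):
    # Number of misplaced tiles (the blank does not count).
    return sum(1 for i, t in enumerate(board) if t != 0 and t != i)

def h2(board):
    # Total Manhattan distance: rows plus columns between each tile
    # and its goal square on the 3x3 grid.
    return sum(abs(i // 3 - t // 3) + abs(i % 3 - t % 3)
               for i, t in enumerate(board) if t != 0)

print(h1(start), h2(start))  # 8 18
```

Since every misplaced tile is at least one square from home, h2(n) ≥ h1(n) on every board, which is exactly the dominance relation discussed above.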