
A3 Informed Search
Hantao Zhang
http://www.cs.uiowa.edu/~hzhang/c145
The University of Iowa, Department of Computer Science
Artificial Intelligence – p.1/49

Example: n-queens

Put n queens on an n x n board with no two queens on the same row, column, or diagonal. A search problem!
- State space: the board with 0 to n queens
- Initial state: the empty board, or a board with n random queens
- Goal test: no two queens attack each other
- Successors: ?

n-queens

How to define the successors:
- Case 1: consider one (fixed) cell at a time
- Case 2: consider one row at a time
- Case 3: consider one queen at a time

                      case 1     case 2   case 3
  Branching factor:   2          n        n^2
  Maximal depth:      n^2        n        n
  State space:        2^(n^2)    n^n      n^(2n)

Informed Search Strategies

Uninformed search strategies look for solutions by systematically generating new states and checking each of them against the goal.
- This approach is very inefficient in most cases: most successor states are "obviously" a bad choice.
- Such strategies do not know that, because they have minimal problem-specific knowledge.

Informed search strategies exploit problem-specific knowledge as much as possible to drive the search. They are almost always more efficient than uninformed searches, and often also optimal.

Informed Search Strategies: Main Idea

- Use knowledge of the problem domain to build an evaluation function f.
- For every node n in the search space, f(n) quantifies the desirability of expanding n in order to reach the goal.
- Then use the desirability values of the nodes in the fringe to decide which node to expand next.

f is typically an imperfect measure of the goodness of a node: the right choice of node is not always the one suggested by f. It is possible to build a perfect evaluation function, which will always suggest the right choice. How? And why don't we use perfect evaluation functions, then?

Standard Assumptions on Search Spaces

- The cost of a node increases with the node's depth.
- Transition costs are non-negative and bounded below; that is, there is a δ > 0 such that the cost of each transition is ≥ δ.
- Each node has only finitely many successors.

Note: there are problems that do not satisfy one or more of these assumptions.

Best-First Search

- Idea: use an evaluation function for each node to estimate its "desirability".
- Strategy: always expand the most desirable unexpanded node.
- Implementation: the fringe is a priority queue sorted in decreasing order of desirability.

Special cases:
- Uniform-cost search: f(n) = g(n), the cost of reaching node n from the initial node.
- Greedy search: f(n) = h(n), the estimated cost of reaching the goal from node n.
- A* search: f(n) = g(n) + h(n).

Implementing Best-first Search

function BEST-FIRST-SEARCH(problem, EVAL-FN) returns a solution sequence
    inputs: problem, a problem
            Eval-Fn, an evaluation function
    Queueing-Fn <- a function that orders nodes by EVAL-FN
    return GENERAL-SEARCH(problem, Queueing-Fn)

function GENERAL-SEARCH(problem, QUEUING-FN) returns a solution, or failure
    nodes <- MAKE-QUEUE(MAKE-NODE(INITIAL-STATE[problem]))
    loop do
        if nodes is empty then return failure
        node <- REMOVE-FRONT(nodes)
        if GOAL-TEST[problem] applied to STATE(node) succeeds then return node
        nodes <- QUEUING-FN(nodes, EXPAND(node, OPERATORS[problem]))
    end

Best-first Search Strategies

There is a whole family of best-first search strategies, each with a different evaluation function. Typically, strategies use estimates of the cost of reaching the goal and try to minimize it.

Uniform-cost search also tries to minimize a cost measure. Is it a best-first search strategy? Not in spirit, because the evaluation function should incorporate a cost estimate of going from the current state to the closest goal state.

Greedy Search

- The evaluation function h(n) (the heuristic) is an estimate of the cost from n to the closest goal.
- For the n-queens problem, hq(n) = the number of attacks between the queens.
- For the 8-puzzle, h8(n) = the number of tiles out of place compared to the goal.
- Greedy search expands the node that appears to be closest to the goal.

Greedy Search Example

[Figure: worked greedy-search example, omitted in this text version.]

Properties of Greedy Search

- Complete? No: it can get stuck in loops; it is complete in finite spaces with repeated-state checking.
- Time? O(b^m), but a good heuristic can give dramatic improvement.
- Space? O(b^m); it keeps all nodes in memory.
- Optimal? No.

A* Search

- Idea: avoid expanding paths that are already expensive.
- Evaluation function f(n) = g(n) + h(n), where
    g(n) = the cost so far to reach n,
    h(n) = the estimated cost from n to the goal,
    f(n) = the estimated total cost of the path through n to the goal.
- A* search uses an admissible heuristic, i.e., h(n) ≤ h*(n), where h*(n) is the true cost from n to the goal.
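As a concrete sketch of the n-queens material above, the following Python fragment uses the case-2 encoding (one row at a time, so the branching factor is n and the state space is n^n) together with the attack-count heuristic hq from the greedy-search slide. The function names and the tuple-of-columns state representation are illustrative choices, not fixed by the slides.

```python
from itertools import combinations

# Case 2 encoding: a state is a tuple of column positions, one entry per
# already-filled row.  Each successor fills the next row, so every state
# has exactly n successors (branching factor n, maximal depth n).
def successors(state, n):
    return [state + (col,) for col in range(n)]

# Attack-count heuristic (hq on the slides): the number of pairs of queens
# that share a column or a diagonal.  Rows are distinct by construction.
def num_attacks(state):
    return sum(1 for (r1, c1), (r2, c2) in combinations(enumerate(state), 2)
               if c1 == c2 or abs(c1 - c2) == abs(r1 - r2))

# Goal test from the first slide: all n queens placed, no two attacking.
def is_goal(state, n):
    return len(state) == n and num_attacks(state) == 0
```

A state such as (1, 3, 0, 2) places queens in columns 1, 3, 0, 2 of rows 0 to 3; it is a 4-queens solution, so hq is 0 there.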
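The misplaced-tiles heuristic h8 mentioned for the 8-puzzle can also be written directly. The row-by-row tuple encoding of the board (with 0 for the blank) is an assumption of this sketch. Note that h8 is admissible in the sense of the A* slide: every out-of-place tile needs at least one move, so h8(n) ≤ h*(n).

```python
# Goal configuration, read row by row; 0 is the blank.
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

# h8: number of tiles out of place compared to the goal.
# The blank is not counted as a tile.
def h8(state, goal=GOAL):
    return sum(1 for tile, target in zip(state, goal)
               if tile != 0 and tile != target)
```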
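The BEST-FIRST-SEARCH pseudocode above can be sketched in Python with the standard-library heapq module serving as the priority queue. The problem interface used here (initial, is_goal, successors, eval_fn) is an assumption for the sketch, not the slides' exact signature; eval_fn receives both the state and its path cost g so that one routine covers all three special cases.

```python
import heapq

def best_first_search(initial, is_goal, successors, eval_fn):
    """Expand the node with the smallest eval_fn value first.

    successors(state) yields (next_state, step_cost) pairs;
    eval_fn(state, g) plays the role of EVAL-FN in the pseudocode.
    """
    counter = 0  # tie-breaker so heapq never compares states directly
    fringe = [(eval_fn(initial, 0), counter, initial, 0, [initial])]
    visited = set()  # repeated-state checking
    while fringe:
        _, _, state, g, path = heapq.heappop(fringe)
        if is_goal(state):
            return path, g
        if state in visited:
            continue
        visited.add(state)
        for nxt, step_cost in successors(state):
            counter += 1
            g2 = g + step_cost
            heapq.heappush(fringe, (eval_fn(nxt, g2), counter, nxt, g2,
                                    path + [nxt]))
    return None  # fringe empty: failure
```

Passing eval_fn = lambda s, g: g gives uniform-cost search, lambda s, g: h[s] gives greedy search, and lambda s, g: g + h[s] gives A*. On a small weighted graph with an admissible heuristic, A* returns the cheapest path while greedy search can return a more expensive one, matching the optimality discussion above.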

