Artificial Intelligence
Informed Search and Exploration
Readings: Chapter 4 of Russell & Norvig.

Contents: Best-First Search; Properties of Heuristic Function; Optimality/Completeness of A* Search; Complexity of A* Search; How to Obtain Admissible Heuristics; Relaxed Problems; Local Search Algorithms; Local Search Example: TSP; Local Search Example: n-queens; Hill-Climbing (or Gradient Descent); Hill-Climbing; Simulated Annealing; Simulated Annealing Algorithm; Properties of Simulated Annealing; Exercises 3.1, 3.3, 3.4, 3.6, and 3.7b.

Best-First Search
- Idea: use a function f for each node n to estimate its "desirability".
- Strategy: always expand the unexpanded node n with the least f(n).
- Implementation: the fringe is a priority queue ordered by f(n), lowest value first.
- Special cases:
  - Greedy search: f(n) = h(n)
  - A* search: f(n) = g(n) + h(n)
  where g(n) is the actual cost from the initial node to n and h(n) is an estimate of the cost from n to a goal.

Properties of Heuristic Function
- Admissible: h(n) ≤ h*(n) for every node n, where h*(n) is the true cost from n to the nearest goal.
- Consistent: h(n) ≤ c(n, a, n') + h(n'), where n' = RESULT(a, n).

Optimality/Completeness of A* Search
- If the problem is solvable, A* always finds an optimal solution when
  - the standard assumptions are satisfied, and
  - the heuristic function is admissible.
- A* is optimally efficient for any given heuristic function h: no other optimal strategy expands fewer nodes than A*.

Complexity of A* Search
- Worst-case time complexity: still exponential, O(b^d), unless the error in h is bounded by the logarithm of the actual path cost; that is, unless
  |h(n) - h*(n)| ≤ O(log h*(n)),
  where h*(n) is the actual cost from n to a goal.
- Worst-case space complexity: O(b^m), as in greedy best-first search.
- A* generally runs out of memory before it runs out of time.
- Improvements: IDA*, SMA*.

How to Obtain Admissible Heuristics
- Admissible heuristics can be derived from the exact solution cost of a relaxed version of the problem.
- Example: the 8-puzzle.
  - h1(n) = number of misplaced tiles. If the rules of the 8-puzzle are relaxed so that a tile can move anywhere, then h1(n) gives the length of the shortest solution.
  - h2(n) = total Manhattan distance (i.e., the number of squares each tile is from its desired location). If the rules are relaxed so that a tile can move to any adjacent square, then h2(n) gives the length of the shortest solution.
- Key point: the optimal cost of a relaxed problem is no greater than the optimal cost of the real problem.

Relaxed Problems
- Well-known example: the travelling salesperson problem (TSP); find the shortest tour visiting all cities exactly once.
- A minimum spanning tree can be computed in O(n^2) time and is a lower bound on the shortest (open) tour.

Local Search Algorithms
- In many optimization problems, such as n-queens, the path is irrelevant; the goal state itself is the solution.
- The state space is then a set of "complete" configurations; the task is to find an optimal configuration (e.g., TSP) or a configuration satisfying constraints (e.g., a timetable).
- In such cases we can use local search (or iterative improvement) algorithms: keep a single "current" state and try to improve it.
- Local search uses constant space, and is suitable for online as well as offline search.

Local Search Example: TSP
- TSP: Travelling Salesperson Problem.
- Start with any complete tour and perform pairwise exchanges.

Local Search Example: n-queens
- Put n queens on an n × n board with no two queens on the same row, column, or diagonal.
- Move a queen to reduce the number of conflicts, as in the sketch below.
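A minimal Python sketch (not from the slides) of the n-queens objective and one greedy repair step, assuming the common encoding in which state[col] is the row of the queen in column col; conflicts and improve_once are illustrative names.

    # n-queens local search helpers: a state is a tuple where state[col] is the
    # row of the queen in column col; the value to minimize is the number of
    # attacking pairs.
    from itertools import combinations
    import random

    def conflicts(state):
        """Count pairs of queens that attack each other (same row or diagonal)."""
        count = 0
        for c1, c2 in combinations(range(len(state)), 2):
            same_row = state[c1] == state[c2]
            same_diag = abs(state[c1] - state[c2]) == abs(c1 - c2)
            if same_row or same_diag:
                count += 1
        return count

    def improve_once(state):
        """Move the queen in one random column to the row with fewest conflicts."""
        n = len(state)
        col = random.randrange(n)
        best_row = min(range(n),
                       key=lambda r: conflicts(state[:col] + (r,) + state[col + 1:]))
        return state[:col] + (best_row,) + state[col + 1:]

    # Example: a random 8-queens state, then a single repair step.
    state = tuple(random.randrange(8) for _ in range(8))
    print(conflicts(state), conflicts(improve_once(state)))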
Hill-Climbing (or Gradient Descent)
- "Like climbing Everest in thick fog with amnesia."

function Hill-Climbing(problem) returns a state
    node: current, neighbor;
    current := Make-Node(Initial-State(problem));
    loop do
        neighbor := highest-value-successor(current);
        if (Value(neighbor) ≤ Value(current))
            then return State(current)
            else current := neighbor;
    end loop
end function

- The returned state is a local maximum state.

Hill-Climbing
- Problem: depending on the initial state, hill-climbing can get stuck at a local maximum.
- (Figure: objective value plotted over the states, marking a global maximum and a local maximum.)
- In continuous spaces there are also problems with choosing the step size, and convergence can be slow.

Simulated Annealing
- Idea: escape local maxima by allowing some "bad" moves, but gradually decrease their size and frequency.
- Realize this with a random assignment such as
  current := neighbor with Prob(exp(v/T)),
  where v < 0 and |v| is the "size", and T > 0 controls the "frequency".
- Because v < 0 and T > 0, we have 0 < e^(v/T) < 1.
- How to implement such a random assignment: generate a random real number x in [0, 1]; if x ≤ e^(v/T), do the assignment, otherwise do nothing.

Simulated Annealing Algorithm

function Simulated-Annealing(problem, schedule) returns a state
    node: current, neighbor; integer: t; real: T;
    current := Make-Node(Initial-State(problem));
    for t := 1 to MAX-ITERATION do
        T := schedule[t];
        if (T == 0) then return State(current);
        neighbor := random-successor(current);
        if (Value(neighbor) < Value(current))
            then v := Value(neighbor) - Value(current);
                 current := neighbor with Prob(exp(v/T));
            else current := neighbor;
    end for
end function

Properties of Simulated Annealing
- The smaller |v| is, the greater e^(v/T); the greater T is, the greater e^(v/T).
- At a fixed "temperature" T, the state occupation probability reaches the Boltzmann distribution p(x) = α·e^(E(x)/kT).
- If T is decreased slowly enough, the algorithm always reaches the best state. Is this necessarily an interesting guarantee?
- Devised by Metropolis et al. (1953) for modelling physical processes.
- Widely used in VLSI layout, airline scheduling, etc.

Exercise 3.1
Define in your own words the following terms: state, state space, search tree, search node, action, successor function, and branching factor.

Exercise 3.1
- State: a condition or configuration that a problem (or agent) can be in.
- State space: the collection of all states.
- Action: an act that changes one state into another state, called its successor.
- Successor function: returns all the successors of a state.
- Branching factor: the maximum number of successors of any state.
- Search tree: a representation, as a tree, of the order in which successors are explored from the initial state.
- Search node: a node in the search tree containing the information of a state and its location in the tree.

Exercise 3.3
Suppose that LEGAL-ACTIONS(s) denotes the set of actions that are legal in state s, and RESULT(a, s) denotes the state that results from performing a legal action a in state s. Define SUCCESSOR-FN in terms of LEGAL-ACTIONS and RESULT, and vice versa.

Exercise 3.3
Suppose that LEGAL-ACTIONS(s) denotes the set of actions that are legal in state s, and RESULT(a, s) denotes the state that results from ...
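For concreteness, here is a small Python sketch of one way to express the relationship that Exercise 3.3 asks about; it is not the document's own answer, and the names make_successor_fn, legal_actions_from, and result_from are illustrative stand-ins for SUCCESSOR-FN, LEGAL-ACTIONS, and RESULT.

    def make_successor_fn(legal_actions, result):
        """Build a SUCCESSOR-FN-style function from LEGAL-ACTIONS and RESULT:
        it maps a state s to the (action, resulting state) pairs reachable from s."""
        def successor_fn(s):
            return [(a, result(a, s)) for a in legal_actions(s)]
        return successor_fn

    def legal_actions_from(successor_fn, s):
        """Recover LEGAL-ACTIONS(s) from a SUCCESSOR-FN-style function."""
        return {a for (a, _) in successor_fn(s)}

    def result_from(successor_fn, a, s):
        """Recover RESULT(a, s) from a SUCCESSOR-FN-style function (a must be legal in s)."""
        return next(s2 for (act, s2) in successor_fn(s) if act == a)

    # Tiny illustration on a hypothetical two-state toggle problem.
    legal_actions = lambda s: {"toggle"}
    result = lambda a, s: 1 - s
    succ = make_successor_fn(legal_actions, result)
    print(succ(0))                        # [('toggle', 1)]
    print(legal_actions_from(succ, 0))    # {'toggle'}
    print(result_from(succ, "toggle", 0)) # 1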

