UCI ICS 171 - Informed Search Algorithms


Informed search algorithms (Chapter 4)

Local search algorithms
- In many optimization problems the path to the goal is irrelevant; the goal state itself is the solution.
- State space = set of "complete" configurations.
- Goal: find a configuration satisfying constraints, e.g., n-queens.
- In such cases we can use local search algorithms: keep a single "current" state and try to improve it.
- Very memory efficient (only the current state is remembered).

Example: n-queens
- Put n queens on an n × n board with no two queens on the same row, column, or diagonal.
- Note that a state cannot be an incomplete configuration with m < n queens.

Hill-climbing search
- Problem: depending on the initial state, hill climbing can get stuck in local maxima.

Gradient descent
- Assume we have a cost function C(x1, x2, ..., xn) that we want to minimize over the continuous variables x1, x2, ..., xn:
  1. Compute the gradient ∂C/∂xi for all i.
  2. Take a small step downhill against the gradient: xi' = xi − η ∂C/∂xi for all i.
  3. Check whether C(x1', ..., xn') < C(x1, ..., xn).
  4. If true, accept the move; if not, reject it.
  5. Repeat.

Exercise
- Describe the gradient descent algorithm for the given cost function.

Line search
- In gradient descent you need to choose a step size η.
- Line search picks a direction v (say, the gradient direction) and searches along that direction for the optimal step:
  η* = argmin_η C(x_t + η v_t)
- Repeated doubling can be used to search effectively for the optimal step:
  η → 2η → 4η → 8η → ... (until the cost increases)
- There are many methods to pick the search direction v; a very good method is "conjugate gradients".

Hill-climbing search: 8-queens problem
- h = number of pairs of queens attacking each other, directly or indirectly (h = 17 for the state shown on the slide).
- Each number on the board indicates the h obtained by moving the queen in that column to that square.
- A local minimum with h = 1: what can you do to get out of this local minimum?

Simulated annealing search
- Idea: escape local maxima by allowing some "bad" moves, but gradually decrease their frequency.
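The hill-climbing idea for n-queens can be sketched in code. This is a minimal illustration (not from the slides): h counts attacking pairs, and each step moves one queen within its column to the neighbouring configuration with the lowest h, stopping at a local minimum.

```python
def attacking_pairs(state):
    """h for n-queens: pairs of queens sharing a row or a diagonal.
    state[c] = row of the queen in column c, so columns never clash."""
    n = len(state)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if state[i] == state[j] or abs(state[i] - state[j]) == j - i)

def hill_climb(state):
    """Steepest-descent hill climbing on h: move one queen within its column
    to the best neighbouring configuration; stop at a local minimum."""
    while True:
        h = attacking_pairs(state)
        best, best_h = state, h
        for col in range(len(state)):
            for row in range(len(state)):
                if row != state[col]:
                    neighbour = state[:col] + [row] + state[col + 1:]
                    nh = attacking_pairs(neighbour)
                    if nh < best_h:
                        best, best_h = neighbour, nh
        if best_h >= h:
            return state, h  # local (possibly global) minimum reached
        state = best
```

Starting from all queens in row 0 (h = 28), `hill_climb` descends quickly but, as the slides note, may stop at a local minimum with h > 0 depending on the start state.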
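Steps 1–5 of gradient descent, combined with the repeated-doubling line search, can be sketched as follows. This is a minimal illustration, not the slides' code; the quadratic example cost and the initial step size 1e-4 are assumptions.

```python
def line_search(cost, x, v, eta=1e-4):
    """Repeated doubling: grow eta (eta -> 2*eta -> 4*eta -> ...) as long as
    the doubled step still lowers the cost along direction v."""
    def value(step):
        return cost([xi + step * vi for xi, vi in zip(x, v)])
    while value(2 * eta) < value(eta):
        eta *= 2
    return eta

def gradient_descent(cost, grad, x, steps=50):
    """GD with accept/reject: step against the gradient with a line-searched
    step size; keep the move only if the cost actually decreased."""
    for _ in range(steps):
        v = [-g for g in grad(x)]                          # downhill direction
        eta = line_search(cost, x, v)
        candidate = [xi + eta * vi for xi, vi in zip(x, v)]
        if cost(candidate) < cost(x):                      # steps 3-4: accept or reject
            x = candidate
    return x

# Example (assumed): minimize C(x) = (x1 - 3)^2 + (x2 + 1)^2
cost = lambda x: (x[0] - 3) ** 2 + (x[1] + 1) ** 2
grad = lambda x: [2 * (x[0] - 3), 2 * (x[1] + 1)]
x_min = gradient_descent(cost, grad, [0.0, 0.0])
```

The doubling search finds a step within a factor of two of the optimum η*, which is usually enough for steady progress; conjugate gradients would choose better directions v.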
- This is like smoothing the cost landscape.

Properties of simulated annealing search
- One can prove: if T decreases slowly enough, then simulated annealing search will find a global optimum with probability approaching 1 (however, this may take VERY long).
- Widely used in VLSI layout, airline scheduling, etc.

Local beam search
- Keep track of k states rather than just one.
- Start with k randomly generated states.
- At each iteration, generate all the successors of all k states.
- If any one is a goal state, stop; otherwise select the k best successors from the complete list and repeat.

Genetic algorithms
- A successor state is generated by combining two parent states.
- Start with k randomly generated states (the population).
- A state is represented as a string over a finite alphabet (often a string of 0s and 1s).
- An evaluation function (fitness function) assigns higher values to better states.
- Produce the next generation of states by selection, crossover, and mutation.
- For 8-queens, fitness = number of non-attacking pairs of queens (min = 0, max = 8 × 7 / 2 = 28); fitness determines the probability of being selected as a parent for the next generation.
- Example: with fitnesses 24, 23, 20, 11, P(child) = 24 / (24 + 23 + 20 + 11) ≈ 31%, P(child) = 23 / (24 + 23 + 20 + 11) ≈ 29%, etc.

Travelling Salesman
- Given N cities and all their pairwise distances, find the shortest tour through all cities.
- Try formulating this as a search problem, i.e., what are the states, step cost, initial state, goal state, and successor function?
- Can you think of ways to try to solve these problems?
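The simulated annealing idea above can be sketched generically and applied to 8-queens. This is a minimal illustration; the geometric cooling schedule and the parameter values (T0 = 10, cooling factor 0.995, 2000 steps) are assumptions, not from the slides.

```python
import math
import random

def attacking_pairs(state):
    """h for n-queens: pairs of queens sharing a row or a diagonal."""
    n = len(state)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if state[i] == state[j] or abs(state[i] - state[j]) == j - i)

def simulated_annealing(state, cost, neighbour, t=10.0, cooling=0.995, steps=2000):
    """Always accept improving moves; accept "bad" moves with probability
    exp(-delta / T). T shrinks each step, so bad moves become rarer."""
    best, best_c = state, cost(state)
    c = best_c
    for _ in range(steps):
        nxt = neighbour(state)
        delta = cost(nxt) - c
        if delta <= 0 or random.random() < math.exp(-delta / t):
            state, c = nxt, c + delta
            if c < best_c:
                best, best_c = state, c
        t *= cooling
    return best, best_c

def move_one_queen(state):
    """Random local move: put one random queen in a random row."""
    s = list(state)
    s[random.randrange(len(s))] = random.randrange(len(s))
    return s

random.seed(1)
best, best_h = simulated_annealing([0] * 8, attacking_pairs, move_one_queen)
```

Early on (large T) almost any move is accepted, smoothing over local minima; late on (small T) the search behaves like plain hill climbing.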
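The local beam search steps can be sketched for 8-queens as well; this is a minimal illustration (the choice of k = 4 and the step cap are assumptions), pooling the successors of all k states and keeping the k best.

```python
import random

def attacking_pairs(state):
    """h for n-queens: pairs of queens sharing a row or a diagonal."""
    n = len(state)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if state[i] == state[j] or abs(state[i] - state[j]) == j - i)

def successors(state):
    """All single-queen moves within a column."""
    n = len(state)
    return [state[:c] + [r] + state[c + 1:]
            for c in range(n) for r in range(n) if r != state[c]]

def local_beam_search(k, cost, successors, n=8, steps=50):
    """Keep k states; each iteration pools all successors of all k states
    and retains the k best from the complete list."""
    states = [[random.randrange(n) for _ in range(n)] for _ in range(k)]
    best = min(states, key=cost)
    for _ in range(steps):
        if cost(best) == 0:
            break  # goal state: no attacking pairs
        pool = [s2 for s in states for s2 in successors(s)]
        states = sorted(pool, key=cost)[:k]
        best = min([best, states[0]], key=cost)
    return best

random.seed(0)
beam_best = local_beam_search(4, attacking_pairs, successors)
```

Unlike k independent restarts, the beam concentrates effort on the k most promising states, since good states crowd weaker ones out of the shared pool.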
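The genetic-algorithm loop for 8-queens can be sketched with the fitness-proportional selection from the P(child) example above. This is a minimal illustration; single-point crossover and the mutation rate are assumed details, and the +1 in the weights is a guard I added against a population with zero total fitness.

```python
import random

def fitness(state):
    """Non-attacking pairs of queens (min = 0, max = 8 * 7 / 2 = 28)."""
    n = len(state)
    attacking = sum(1 for i in range(n) for j in range(i + 1, n)
                    if state[i] == state[j] or abs(state[i] - state[j]) == j - i)
    return n * (n - 1) // 2 - attacking

def select(population):
    """Fitness-proportional selection, as in the P(child) computation."""
    weights = [fitness(s) + 1 for s in population]  # +1: avoid all-zero weights
    return random.choices(population, weights=weights, k=1)[0]

def crossover(a, b):
    """Single-point crossover (assumed): splice a prefix of a with a suffix of b."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(state, rate=0.2):
    """Occasionally move one random queen to a random row."""
    s = list(state)
    if random.random() < rate:
        s[random.randrange(len(s))] = random.randrange(len(s))
    return s

def genetic_algorithm(k=20, n=8, generations=100):
    population = [[random.randrange(n) for _ in range(n)] for _ in range(k)]
    for _ in range(generations):
        best = max(population, key=fitness)
        if fitness(best) == n * (n - 1) // 2:
            return best  # perfect fitness: no attacking pairs
        population = [mutate(crossover(select(population), select(population)))
                      for _ in range(k)]
    return max(population, key=fitness)

random.seed(2)
ga_best = genetic_algorithm()
```

A state here is a string over the alphabet {0, ..., 7} (queen row per column), matching the slides' string representation.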
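One standard way to attack the travelling-salesman exercise with local search is 2-opt (not named on the slides; it is a classic local move for tours): a state is a complete tour, and a successor reverses one segment of it. This minimal sketch uses an assumed four-city example on a unit square.

```python
import math

def tour_length(tour, dist):
    """Total length of a closed tour (returns to the start city)."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    """2-opt local search: repeatedly reverse a segment of the tour whenever
    the reversal strictly shortens it; stop at a local minimum."""
    improved = True
    while improved:
        improved = False
        n = len(tour)
        for i in range(n - 1):
            for j in range(i + 2, n):
                candidate = tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
                if tour_length(candidate, dist) < tour_length(tour, dist):
                    tour, improved = candidate, True
    return tour

# Example (assumed): four cities on a unit square; the optimal tour has length 4.
cities = [(0, 0), (0, 1), (1, 1), (1, 0)]
dist = [[math.dist(a, b) for b in cities] for a in cities]
best_tour = two_opt([0, 2, 1, 3], dist)  # start from a "crossed" tour
```

In the search-problem framing asked for above: states are complete tours, the successor function is segment reversal, and the cost is the tour length; there is no explicit goal test, only local improvement.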