CS 188: Artificial Intelligence
Fall 2007
Lecture 7: Adversarial Search
9/18/2007
Dan Klein – UC Berkeley
Many slides over the course adapted from either Stuart Russell or Andrew Moore

Announcements
- Project 2 is up (Multi-Agent Pacman)
- SVN groups coming; watch the web page
- Dan's office hours are moving from Tuesday to Friday, 1-2pm

Local Search
- Queue-based algorithms keep fallback options (backtracking)
- Local search: improve what you have until you can't make it better
- Generally much more efficient (but incomplete)

Hill Climbing
- Simple, general idea:
  - Start wherever
  - Always move to the best neighbor
  - If no neighbors have better scores than the current state, quit
- Why can this be a terrible idea?
- Complete? Optimal?
- What's good about it?

Hill Climbing Diagram
- Random restarts?
- Random sideways steps?

Simulated Annealing
- Idea: escape local maxima by allowing downhill moves
- But make them rarer as time goes on

Simulated Annealing
- Theoretical guarantee:
  - Stationary distribution: p(x) ∝ e^(E(x)/T)
  - If T is decreased slowly enough, the search will converge to the optimal state!
- Is this an interesting guarantee?
- Sounds like magic, but reality is reality: the more downhill steps you need to escape a local maximum, the less likely you are to ever make them all in a row
- People think hard about ridge operators which let you jump around the space in better ways
- (A Python sketch of this procedure follows the N-Queens example below)

Beam Search
- Like hill-climbing search, but keep K states at all times
- Variables: beam size, encourage diversity?
- The best choice in MANY practical settings
- Complete? Optimal?
- Why do we still need optimal methods?
[Diagram: greedy search vs. beam search]

Genetic Algorithms
- Genetic algorithms use a natural selection metaphor
- Like beam search (selection), but also have pairwise crossover operators, with optional mutation
- Probably the most misunderstood, misapplied (and even maligned) technique around!

Example: N-Queens
- Why does crossover make sense here?
- When wouldn't it make sense?
- What would mutation be?
- What would a good fitness function be?
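The simulated-annealing slides only outline the acceptance rule, so here is a minimal, self-contained Python sketch of it. This is not from the lecture: the `neighbors`, `score`, and `schedule` hooks are hypothetical, problem-specific placeholders you would supply for a task such as N-Queens.

```python
import math
import random

def simulated_annealing(start, neighbors, score, schedule):
    """Maximize score(state). neighbors(s) returns a list of successor
    states; schedule(t) gives the temperature at step t (0 means stop).
    All three hooks are problem-specific placeholders, not lecture code."""
    current = start
    for t in range(1, 10**6):            # hard iteration cap as a safety net
        T = schedule(t)
        if T <= 0:
            break                        # fully cooled: stop
        nxt = random.choice(neighbors(current))
        delta = score(nxt) - score(current)
        # Uphill moves are always accepted; downhill moves are accepted
        # with probability e^(delta/T), which shrinks as T decreases.
        if delta > 0 or random.random() < math.exp(delta / T):
            current = nxt
    return current

# One illustrative cooling schedule: exponential decay with a cutoff.
def exp_schedule(t, T0=1.0, decay=0.995, floor=1e-3):
    T = T0 * decay ** t
    return T if T > floor else 0
```

At a fixed temperature T, this Metropolis-style acceptance rule has the Boltzmann distribution p(x) ∝ e^(score(x)/T) as its stationary distribution, which is the guarantee the slide alludes to. It also makes the "reality is reality" point concrete: a run of k consecutive downhill steps occurs with probability equal to a product of k acceptance factors, each of which shrinks as T cools.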
Adversarial Search
[DEMO 1]

Game Playing State-of-the-Art
- Checkers: Chinook ended the 40-year reign of human world champion Marion Tinsley in 1994. It used an endgame database defining perfect play for all positions involving 8 or fewer pieces on the board, a total of 443,748,401,247 positions. Checkers is now solved!
- Chess: Deep Blue defeated human world champion Garry Kasparov in a six-game match in 1997. Deep Blue examined 200 million positions per second, and used very sophisticated evaluation and undisclosed methods for extending some lines of search up to 40 ply.
- Othello: human champions refuse to compete against computers, which are too good.
- Go: human champions refuse to compete against computers, which are too bad. In Go, b > 300, so most programs use pattern knowledge bases to suggest plausible moves.
- Pacman: unknown

Game Playing
- Axes:
  - Deterministic or stochastic?
  - One, two, or more players?
  - Perfect information (can you see the state)?
- Want algorithms for calculating a strategy (policy) which recommends a move in each state

Deterministic Single-Player?
- Deterministic, single player, perfect information:
  - Know the rules
  - Know what actions do
  - Know when you win
  - E.g. Freecell, 8-Puzzle, Rubik's cube
- … it's just search!
- Slight reinterpretation:
  - Each node stores the best outcome it can reach
  - This is the maximal outcome of its children
  - Note that we don't store path sums as before
- After search, can pick the move that leads to the best node
[Search tree diagram with leaves labeled win, lose, lose]

Deterministic Two-Player
- E.g. tic-tac-toe, chess, checkers
- Minimax search
  - A state-space search tree
  - Players alternate
  - Each layer, or ply, consists of a round of moves
  - Choose the move to the position with the highest minimax value = best achievable utility against best play
- Zero-sum games
  - One player maximizes the result
  - The other minimizes the result
[Two-ply tree diagram: a max layer over a min layer, terminal values 8, 2, 5, 6]

Tic-tac-toe Game Tree

Minimax Example

Minimax Search

Minimax Properties
- Optimal against a perfect player. Otherwise?
- Time complexity? O(b^m)
- Space complexity? O(bm)
- For chess, b ≈ 35, m ≈ 100
  - Exact solution is completely infeasible
  - But, do we need to explore the whole tree?
[Tree diagram: max over min layers, terminal values 10, 10, 9, 100]
[DEMO 2: minVsExp]

Resource Limits
- Cannot search to leaves
- Limited search
  - Instead, search a limited depth of the tree
  - Replace terminal utilities with an eval function for non-terminal positions
- Guarantee of optimal play is gone
- More plies makes a BIG difference
[DEMO 3: limitedDepth]
- Example:
  - Suppose we have 100 seconds and can explore 10K nodes/sec
  - So we can check 1M nodes per move
  - α-β reaches about depth 8: a decent chess program
[Depth-limited tree diagram: backed-up values -2 and 4 over evaluated leaves -1, -2, 4, 9; unexplored deeper nodes shown as ?]

Evaluation Functions
- Function which scores non-terminals
- Ideal function: returns the utility of the position
- In practice: typically a weighted linear sum of features:
  Eval(s) = w1 f1(s) + w2 f2(s) + … + wn fn(s)
  e.g. f1(s) = (num white queens – num black queens), etc.

Evaluation for Pacman
[DEMO 4: evalFunction, thrashing]

Iterative Deepening
- Iterative deepening uses DFS as a subroutine:
  1. Do a DFS which only searches for paths of length 1 or less. (DFS gives up on any path of length 2)
  2. If "1" failed, do a DFS which only searches paths of length 2 or less.
  3. If "2" failed, do a DFS which only searches paths of length 3 or less.
  … and so on
- This works for single-agent search as well!
- Why do we want to do this for multiplayer games?

α-β Pruning Example

α-β Pruning
- General configuration
  - α is the best value that MAX can get at any choice point along the current path
  - If n is worse than α, MAX will avoid it, so prune n's branch
  - Define β similarly for MIN
[Diagram: alternating Player/Opponent layers leading down to a node n]

α-β Pruning Pseudocode
[Pseudocode figure, tracking the running value v]

α-β Pruning Properties
- Pruning has no effect on the final result
- Good move ordering improves the effectiveness of pruning
- With "perfect ordering":
  - Time complexity drops to O(b^(m/2))
  - Doubles solvable depth
  - Full search of, …
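The minimax, resource-limits, and α-β slides fit together into one short procedure. Below is a minimal Python sketch of depth-limited minimax with α-β pruning; it is not the lecture's pseudocode, and the `successors`, `evaluate`, and `is_terminal` hooks are hypothetical problem-specific stand-ins, with `evaluate` playing the role of the evaluation function from the Resource Limits slide.

```python
import math

def alphabeta(state, depth, alpha, beta, maximizing,
              successors, evaluate, is_terminal):
    """Depth-limited minimax with alpha-beta pruning. successors(s) yields
    child states, evaluate(s) scores non-terminal cutoffs, and
    is_terminal(s) tests for true end-of-game states; all three are
    problem-specific placeholders."""
    if depth == 0 or is_terminal(state):
        return evaluate(state)          # eval function replaces terminal utility
    if maximizing:
        v = -math.inf
        for child in successors(state):
            v = max(v, alphabeta(child, depth - 1, alpha, beta, False,
                                 successors, evaluate, is_terminal))
            alpha = max(alpha, v)       # best value MAX can guarantee so far
            if alpha >= beta:
                break                   # MIN would never allow this node: prune
        return v
    else:
        v = math.inf
        for child in successors(state):
            v = min(v, alphabeta(child, depth - 1, alpha, beta, True,
                                 successors, evaluate, is_terminal))
            beta = min(beta, v)         # best value MIN can guarantee so far
            if beta <= alpha:
                break                   # MAX would never allow this node: prune
        return v
```

Re-running this with depth = 1, 2, 3, … until the time budget expires is exactly the iterative-deepening idea from earlier in the lecture: there is always a best-so-far move to play, and the move ordering found at shallow depths lets the deeper passes prune closer to the O(b^(m/2)) "perfect ordering" bound.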