CS 188: Artificial Intelligence
Fall 2009
Lecture 10: MDPs
9/29/2009
Dan Klein – UC Berkeley
Many slides over the course adapted from either Stuart Russell or Andrew Moore

Announcements
- P2: Due Wednesday
- P3: MDPs and Reinforcement Learning is up!
- W2: Out late this week

Recap: MDPs
- Markov decision processes:
  - States S
  - Actions A
  - Transitions P(s'|s,a) (or T(s,a,s'))
  - Rewards R(s,a,s') (and discount γ)
  - Start state s0
- Quantities:
  - Policy = map of states to actions
  - Episode = one run of an MDP
  - Utility = sum of discounted rewards
  - Values = expected future utility from a state
  - Q-Values = expected future utility from a q-state
[DEMO – MDP Quantities]

Recap: Optimal Utilities
- The utility of a state s: V*(s) = expected utility starting in s and acting optimally
- The utility of a q-state (s,a): Q*(s,a) = expected utility starting in s, taking action a, and thereafter acting optimally
- The optimal policy: π*(s) = optimal action from state s
(Terminology from the diagram: s is a state, (s,a) is a q-state, (s,a,s') is a transition.)

Recap: Bellman Equations
- The definition of utility leads to a simple one-step lookahead relationship amongst optimal utility values: total optimal rewards = maximize over choice of (first action plus optimal future)
- Formally:
  V*(s) = max_a Q*(s,a)
  Q*(s,a) = Σ_{s'} T(s,a,s') [ R(s,a,s') + γ V*(s') ]

Practice: Computing Actions
- Which action should we choose from state s:
  - Given optimal values V?  π*(s) = argmax_a Σ_{s'} T(s,a,s') [ R(s,a,s') + γ V*(s') ]
  - Given optimal q-values Q?  π*(s) = argmax_a Q*(s,a)
- Lesson: actions are easier to select from Q's!
[DEMO – MDP action selection]

Value Estimates
- Calculate estimates V_k*(s)
  - Not the optimal value of s!
  - The optimal value considering only the next k time steps (k rewards)
  - As k → ∞, it approaches the optimal value
- Almost solution: recursion (i.e. expectimax)
- Correct solution: dynamic programming
[DEMO – V_k]

Memoized Recursion?
- Recurrences (basically truncated expectimax):
  V_0*(s) = 0
  V_k*(s) = max_a Σ_{s'} T(s,a,s') [ R(s,a,s') + γ V_{k-1}*(s') ]
- Cache all function call results so you never repeat work
- What happened to the evaluation function?

Value Iteration
- Problems with the recursive computation:
  - Have to keep all the V_k*(s) around all the time
  - Don't know which depth π_k(s) to ask for when planning
- Solution: value iteration
  - Calculate values for all states, bottom-up
  - Keep increasing k until convergence

Value Iteration
- Idea:
  - Start with V_0*(s) = 0, which we know is right (why?)
  - Given V_i*, calculate the values for all states for depth i+1:
    V_{i+1}*(s) ← max_a Σ_{s'} T(s,a,s') [ R(s,a,s') + γ V_i*(s') ]
  - Throw out the old vector V_i*
  - Repeat until convergence
- This is called a value update or Bellman update
- Theorem: will converge to unique optimal values
  - Basic idea: approximations get refined towards the optimal values
  - The policy may converge long before the values do

Convergence*
- Define the max-norm: ||U|| = max_s |U(s)|
- Theorem: for any two approximations U and V, ||U_{i+1} − V_{i+1}|| ≤ γ ||U_i − V_i||
  - I.e. any two distinct approximations must get closer to each other, so, in particular, any approximation must get closer to the true values, and value iteration converges to a unique, stable, optimal solution
- Theorem: if one update changes the values by at most ε in max-norm, then the values are within εγ/(1−γ) of optimal
  - I.e. once the change in our approximation is small, it must also be close to correct
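To make the value update above concrete, here is a minimal value iteration sketch in Python. It is not from the slides: the function name value_iteration and the MDP interface (states as a collection, actions(s) returning the actions available in s, transitions(s, a) returning (probability, next-state, reward) triples, and gamma/epsilon parameters) are all illustrative assumptions.

# Minimal value iteration sketch (not from the slides).
# Assumed interface: states is a collection of states, actions(s) yields the
# actions available in s, transitions(s, a) returns (prob, s_next, reward) triples.
def value_iteration(states, actions, transitions, gamma=0.9, epsilon=1e-6):
    V = {s: 0.0 for s in states}              # V_0*(s) = 0 for all states
    while True:
        V_new = {}
        for s in states:
            acts = list(actions(s))
            if not acts:                       # terminal state: no actions, value 0
                V_new[s] = 0.0
                continue
            # Bellman update: V_{i+1}(s) = max_a sum_{s'} T(s,a,s') [R(s,a,s') + gamma V_i(s')]
            V_new[s] = max(
                sum(p * (r + gamma * V[s2]) for p, s2, r in transitions(s, a))
                for a in acts
            )
        # Stop once the largest change (max-norm) falls below epsilon
        if max(abs(V_new[s] - V[s]) for s in states) < epsilon:
            return V_new
        V = V_new

Each pass of the outer loop is one Bellman update over all states, so the loop index plays the role of k in V_k*(s), and the max-norm stopping test mirrors the convergence condition above.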
Utilities for a Fixed Policy
- Another basic operation: compute the utility of a state s under a fixed (generally non-optimal) policy
- Define the utility of a state s under a fixed policy π: V^π(s) = expected total discounted rewards (return) starting in s and following π
- Recursive relation (one-step look-ahead / Bellman equation):
  V^π(s) = Σ_{s'} T(s,π(s),s') [ R(s,π(s),s') + γ V^π(s') ]
[DEMO – Right-Only Policy]

Policy Evaluation
- How do we calculate the V's for a fixed policy?
- Idea one: turn the recursive equation into updates
- Idea two: it's just a linear system; solve with Matlab (or whatever)

Policy Iteration
- Alternative approach:
  - Step 1: Policy evaluation: calculate utilities for some fixed policy (not optimal utilities!) until convergence
  - Step 2: Policy improvement: update the policy using one-step look-ahead with the resulting converged (but not optimal!) utilities as future values
  - Repeat steps until the policy converges
- This is policy iteration
  - It's still optimal!
  - Can converge faster under some conditions

Policy Iteration
- Policy evaluation: with the current policy π fixed, find values with simplified Bellman updates (no max over actions):
  V^π_{i+1}(s) ← Σ_{s'} T(s,π(s),s') [ R(s,π(s),s') + γ V^π_i(s') ]
  - Iterate until the values converge
- Policy improvement: with the utilities fixed, find the best action according to one-step look-ahead:
  π_new(s) = argmax_a Σ_{s'} T(s,a,s') [ R(s,a,s') + γ V^π(s') ]
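A minimal policy iteration sketch follows, using the same hypothetical MDP interface as the value iteration sketch above; again, the function and parameter names are illustrative rather than anything given in the slides.

# Minimal policy iteration sketch (not from the slides), same assumed interface
# as value_iteration above.
def policy_iteration(states, actions, transitions, gamma=0.9, epsilon=1e-6):
    # Start from an arbitrary policy: the first available action in each state
    policy = {s: next(iter(actions(s)), None) for s in states}
    while True:
        # Step 1: policy evaluation -- simplified Bellman updates (no max)
        V = {s: 0.0 for s in states}
        while True:
            V_new = {}
            for s in states:
                a = policy[s]
                V_new[s] = 0.0 if a is None else sum(
                    p * (r + gamma * V[s2]) for p, s2, r in transitions(s, a))
            converged = max(abs(V_new[s] - V[s]) for s in states) < epsilon
            V = V_new
            if converged:
                break
        # Step 2: policy improvement -- one-step look-ahead with the evaluated values
        stable = True
        for s in states:
            acts = list(actions(s))
            if not acts:
                continue
            best = max(acts, key=lambda a: sum(
                p * (r + gamma * V[s2]) for p, s2, r in transitions(s, a)))
            if best != policy[s]:
                policy[s] = best
                stable = False
        if stable:                              # no action changed: policy has converged
            return policy, V

Note that the inner loop backs up values for the current policy's action only (the simplified, max-free update from the slide); the argmax appears only in the improvement step.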
Comparison
- Both compute the same thing (optimal values for all states)
- In value iteration:
  - Every pass (or "backup") updates both the utilities (explicitly, based on current utilities) and the policy (implicitly, based on current utilities)
  - Tracking the policy isn't necessary; we take the max
- In policy iteration:
  - Several passes update the utilities with a fixed policy
  - After the policy is evaluated, a new policy is chosen
- Together, these are dynamic programming for MDPs

Asynchronous Value Iteration*
- In value iteration, we update every state in each iteration
- Actually, any sequence of Bellman updates will converge if every state is visited infinitely often
- In fact, we can update the policy as seldom or as often as we like, and we will still converge
- Idea: update states whose value we expect to change: if the change |V_{i+1}(s) − V_i(s)| is large, then update the predecessors of s

Reinforcement Learning
- Still have an MDP:
  - A set of states s ∈ S
  - A set of actions (per state) A
  - A model T(s,a,s')
  - A reward function R(s,a,s')
- Still looking for a policy π(s)
- New twist: don't know T or R
  - I.e. don't know which states are good or what the actions do
  - Must actually try actions and states out to learn
[DEMO]

Example: Animal Learning
- RL studied experimentally for more than 60 years in psychology
  - Rewards: food, pain, hunger, drugs, etc.
  - Mechanisms and sophistication debated
- Example: foraging
  - Bees learn a near-optimal foraging plan in a field of artificial flowers with controlled nectar supplies
  - Bees have a direct neural connection from nectar intake measurement to the motor planning area

Example: Backgammon
- Reward only for win / loss in terminal states, zero otherwise
- TD-Gammon learns a function approximation to V(s) using a neural network
- Combined with depth 3 search, one of the top 3 players in the world
- You could imagine training Pacman this way…
- … but it's tricky! (It's also P3)

Passive Learning
- Simplified task
  - You don't know the transitions T(s,a,s')
  - You don't know the