Markov Decision Processes (MDPs), cont.
Machine Learning 10-701/15-781
Carlos Guestrin
Carnegie Mellon University
November 29th, 2007

Markov Decision Process (MDP) Representation
- State space: joint state x of the entire system
- Action space: joint action a = (a_1, ..., a_n) for all agents
- Reward function: total reward R(x) (sometimes the reward can also depend on the action, R(x,a))
- Transition model: dynamics of the entire system, P(x'|x,a)

Computing the value of a policy
- V^π(x₀) = E_π[ R(x₀) + γ R(x₁) + γ² R(x₂) + γ³ R(x₃) + γ⁴ R(x₄) + ... ]
- Discounted value of a state: the value of starting from x₀ and continuing with policy π from then on
- A recursion!

Simple approach for computing the value of a policy: Iteratively
Can solve using a simple convergent iterative approach (a.k.a. dynamic programming):
- Start with some guess V₀^π
- Iteratively, set V_{t+1}^π = R + γ P_π V_t^π
- Stop when ‖V_{t+1}^π - V_t^π‖_∞ ≤ ε; this guarantees ‖V^π - V_{t+1}^π‖_∞ ≤ εγ/(1-γ)

But we want to learn a Policy
- So far we have only measured how good a given policy is; how do we choose the best policy?
- A policy π(x) = a assigns an action (for all agents) to each state x. For example: π(x₀) = both peasants get wood; π(x₁) = one peasant builds a barrack, the other gets gold; π(x₂) = peasants get gold, footmen attack.
- Suppose there were only one time step: the world is about to end, so select the action that maximizes the immediate reward.

Unrolling the recursion
- Choose actions that lead to the best value in the long run.
- The optimal policy achieves the optimal value V*.

Bellman equation
Computing the optimal value V* (rather than evaluating a fixed policy) gives the Bellman equation:
V*(x) = max_a [ R(x,a) + γ Σ_{x'} P(x'|x,a) V*(x') ]

Optimal Long-term Plan
The optimal value function V*(x) yields the optimal policy π*(x):
π*(x) = argmax_a [ R(x,a) + γ Σ_{x'} P(x'|x,a) V*(x') ]

Interesting fact: Unique value
- Slightly surprising fact: there is only one V* that solves the Bellman equation, though there may be many optimal policies that achieve V*.
- Surprising fact: optimal policies are good everywhere, not just from a particular start state.

Solving an MDP
Solve the Bellman equation; the optimal value V*(x) then gives the optimal policy π*(x).
- The Bellman equation is non-linear!
- Many algorithms solve the Bellman equations:
  - Policy iteration [Howard '60, Bellman '57]
  - Value iteration [Bellman '57]
  - Linear programming [Manne '60]

Value iteration (a.k.a. dynamic programming): the simplest of all
- Start with some guess V₀
- Iteratively, set V_{t+1}(x) = max_a [ R(x,a) + γ Σ_{x'} P(x'|x,a) V_t(x') ]
- Stop when ‖V_{t+1} - V_t‖_∞ ≤ ε; this guarantees ‖V* - V_{t+1}‖_∞ ≤ εγ/(1-γ)

A simple example (γ = 0.9)
You run a startup company. In every state you must choose between saving money (S) or advertising (A).
[State diagram: four states, Poor & Unknown (reward 0), Poor & Famous (reward 0), Rich & Unknown (reward 10), Rich & Famous (reward 10), connected by S and A arrows with transition probabilities of 1/2 or 1.]

Let's compute V_t(x) for our example
Applying V_{t+1}(x) = max_a [ R(x,a) + γ Σ_{x'} P(x'|x,a) V_t(x') ]:

t | V_t(PU) | V_t(PF) | V_t(RU) | V_t(RF)
1 |    0    |    0    |  10     |  10
2 |    0    |   4.5   |  14.5   |  19
3 |   2.03  |   6.53  |  25.08  |  18.55
4 |   3.852 |  12.20  |  29.63  |  19.26
5 |   7.22  |  15.07  |  32.00  |  20.40
6 |  10.03  |  17.65  |  33.58  |  22.43
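The table above can be reproduced mechanically once the transition model is written down. Below is a minimal value iteration sketch in Python (not from the lecture): the states, rewards, discount, and stopping rule follow the slides, but the transition matrices are illustrative placeholders, since the exact arrows of the state diagram are assumptions here; the printed values will differ from the table unless the true model is substituted.

```python
import numpy as np

gamma = 0.9     # discount factor, as in the example
epsilon = 1e-4  # stopping threshold on ||V_{t+1} - V_t||_inf

states = ["PU", "PF", "RU", "RF"]     # Poor/Rich x Unknown/Famous
actions = ["S", "A"]                  # Save or Advertise
R = np.array([0.0, 0.0, 10.0, 10.0])  # reward depends only on the state here

# P[a][x, x'] = P(x'|x,a). Placeholder dynamics; each row sums to 1.
P = {
    "S": np.array([[1.0, 0.0, 0.0, 0.0],
                   [0.5, 0.0, 0.0, 0.5],
                   [0.5, 0.0, 0.5, 0.0],
                   [0.5, 0.0, 0.0, 0.5]]),
    "A": np.array([[0.5, 0.5, 0.0, 0.0],
                   [0.0, 1.0, 0.0, 0.0],
                   [0.5, 0.5, 0.0, 0.0],
                   [0.0, 1.0, 0.0, 0.0]]),
}

V = np.zeros(len(states))  # initial guess V_0
while True:
    # Bellman backup: V_{t+1}(x) = max_a [ R(x) + gamma * sum_{x'} P(x'|x,a) V_t(x') ]
    # (dropping the max over actions gives the policy-evaluation iteration V = R + gamma * P_pi * V)
    Q = np.stack([R + gamma * (P[a] @ V) for a in actions])
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) <= epsilon:  # ||V_{t+1} - V_t||_inf <= eps
        V = V_new
        break
    V = V_new

greedy = [actions[i] for i in Q.argmax(axis=0)]  # policy that is greedy w.r.t. the converged values
for s, v, a in zip(states, V, greedy):
    print(f"{s}: V ~= {v:6.2f}, best action: {a}")
```

Because the backup is a γ-contraction, the loop is guaranteed to terminate, and the stopping test gives exactly the ‖V* - V_{t+1}‖_∞ ≤ εγ/(1-γ) guarantee quoted on the slide.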
What you need to know
- What a Markov decision process is: states, actions, transitions, rewards; a policy; the value function for a policy and how to compute V^π
- The optimal value function and the optimal policy; the Bellman equation
- Solving the Bellman equation with value iteration; other possibilities: policy iteration and linear programming

Acknowledgment
This lecture contains some material from Andrew Moore's excellent collection of ML tutorials: http://www.cs.cmu.edu/~awm/tutorials

Reinforcement Learning
Machine Learning 10-701/15-781
Carlos Guestrin
Carnegie Mellon University
November 29th, 2007

The Reinforcement Learning task
World: You are in state 34. Your immediate reward is 3. You have 3 possible actions.
Robot: I'll take action 2.
World: You are in state 77. Your immediate reward is 7. You have 2 possible actions.
Robot: I'll take action 1.
World: You're in state 34 (again). Your immediate reward is 3. You have 3 possible actions.

Formalizing the (online) reinforcement learning problem
- Given: a set of states X and actions A (in some versions of the problem, the sizes of X and A are unknown)
- Interact with the world: at each time step t, the world gives state x_t and reward r_t, and you give the next action a_t
- Goal: quickly learn a policy that approximately maximizes long-term expected discounted reward

The Credit Assignment Problem
I'm in state 43, reward 0, take action 2;
then state 39, reward 0, action 4;
then state 22, reward 0, action 1;
then state 21, reward 0, action 1;
then state 21, reward 0, action 1;
then state 13, reward 0, action 2;
then state 54, reward 0, action 2;
then state 26, reward 100!
Yippee! I got to a state with a big reward! But which of my actions along the way actually helped me get there? This is the Credit Assignment problem.

Exploration / Exploitation tradeoff
You have visited part of the state space and found a reward of 100. Is this the best I can hope for?
- Exploitation: should I stick with what I know and find a good policy with respect to this knowledge, at the risk of missing out on some large reward somewhere else?
- Exploration: should I look for a region with more reward, at the risk of wasting my time or collecting a lot of negative reward?

Two main reinforcement learning approaches
- Model-based approaches: explore the environment, then learn the model P(x'|x,a) and R(x,a) (almost) everywhere; use the model to plan … (see the sketch below)
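The bullet above describes the estimation half of the model-based recipe: count transitions and average rewards to get estimates of P(x'|x,a) and R(x,a), then hand the learned model to a planner such as the value iteration sketched earlier. Below is a minimal tabular sketch; the function names record and estimate_model are illustrative, not from the lecture.

```python
from collections import defaultdict

# Sufficient statistics gathered while exploring:
counts = defaultdict(lambda: defaultdict(int))  # counts[(x, a)][x'] = times x' followed (x, a)
reward_sums = defaultdict(float)                # total reward observed after taking a in x
visits = defaultdict(int)                       # times the pair (x, a) was tried

def record(x, a, r, x_next):
    """Log one interaction step: took action a in state x, received reward r, landed in x_next."""
    counts[(x, a)][x_next] += 1
    reward_sums[(x, a)] += r
    visits[(x, a)] += 1

def estimate_model(x, a):
    """Maximum-likelihood estimates of P(.|x,a) and R(x,a); assumes (x, a) was visited at least once."""
    n = visits[(x, a)]
    p_hat = {x_next: c / n for x_next, c in counts[(x, a)].items()}
    r_hat = reward_sums[(x, a)] / n
    return p_hat, r_hat
```

Once every (x, a) pair has been tried enough times, these estimates can be plugged into value iteration in place of the true P and R to compute a plan.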