Machine Learning 10-701/15-781, Spring 2008

Reinforcement Learning
Eric Xing
Lecture 27, April 28, 2008
Reading: Chap. 13 of T. Mitchell's book
[Title-slide figure: a small grid world with a start state]

Outline
- Intro to reinforcement learning
- MDP (Markov decision problem)
- Dynamic programming
  - Value iteration
  - Policy iteration

What is Learning?
- Learning takes place as a result of interaction between an agent and the world. The idea behind learning is that percepts received by an agent should be used not only for understanding/interpreting/prediction, as in the machine learning tasks we have addressed so far, but also for acting, and furthermore for improving the agent's ability to behave optimally in the future to achieve its goal.

Types of Learning
- Supervised Learning
  - A situation in which sample input/output pairs of the function to be learned can be perceived or are given.
  - You can think of it as if there were a kind teacher.
  - Training data: (X, Y) (features, label). Predict Y, minimizing some loss.
  - Regression, Classification.
- Unsupervised Learning
  - Training data: X (features only).
  - Find "similar" points in the high-dimensional X-space.
  - Clustering.

Example of Supervised Learning
- Predict the price of a stock 6 months from now, based on economic data (Regression).
- Predict whether a patient hospitalized due to a heart attack will have a second heart attack, based on demographic, diet, and clinical measurements for that patient (Logistic Regression).
- Identify the numbers in a handwritten ZIP code from a digitized image of pixels (Classification).

Example of Unsupervised Learning
- From DNA microarray data, determine which genes are most similar in terms of their expression profiles (Clustering).

Types of Learning (Cont'd)
- Reinforcement Learning
  - The agent acts on its environment and receives some evaluation of its action (a reinforcement), but is not told which action is the correct one to achieve its goal.
  - Training data: (S, A, R) (State, Action, Reward).
  - Develop an optimal policy (a sequence of decision rules) for the learner so as to maximize its long-term reward.
  - Examples: robotics, board-game playing programs.

RL is learning from interaction
[Figure: the agent-environment interaction loop]

Examples of Reinforcement Learning
- How should a robot behave so as to optimize its performance? (Robotics)
- How to automate the motion of a helicopter? (Control Theory)
- How to make a good chess-playing program? (Artificial Intelligence)

Robot in a room
- What is the strategy to achieve maximum reward?
- What if the actions were deterministic?

History of Reinforcement Learning
- Roots in the psychology of animal learning (Thorndike, 1911).
- Another independent thread was the problem of optimal control and its solution using dynamic programming (Bellman, 1957).
- Idea of temporal-difference learning (an on-line method), e.g., for playing board games (Samuel, 1959).
- A major breakthrough was the discovery of Q-learning (Watkins, 1989).

What is special about RL?
- RL is learning how to map states to actions so as to maximize a numerical reward over time.
- Unlike other forms of learning, it is a multi-stage decision-making process (often Markovian).
- An RL agent must learn by trial and error: not entirely supervised, but interactive.
- Actions may affect not only the immediate reward but also subsequent rewards (delayed effect).

Elements of RL
- A policy: a map from state space to action space; may be stochastic.
- A reward function: maps each state (or state-action pair) to a real number, called the reward.
- A value function: the value of a state (or state-action pair) is the total expected reward, starting from that state (or state-action pair).
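To make these three elements concrete, here is a minimal sketch (not from the lecture) of a policy, a reward function, and a value estimate for a hypothetical 4-state chain. The states, rewards, horizon, and the discount factor gamma (which the MDP slides below introduce formally) are all illustrative assumptions.

```python
# Illustrative only: the three elements of RL on a hypothetical 4-state chain.
states = [0, 1, 2, 3]            # state space; state 3 is an absorbing goal
actions = ["left", "right"]      # action space

def policy(s):
    """A policy: a map from state space to action space (deterministic here)."""
    return "right"

def reward(s):
    """A reward function over states: +1 at the goal, 0 elsewhere."""
    return 1.0 if s == 3 else 0.0

def step(s, a):
    """Toy deterministic dynamics (an MDP would allow stochastic transitions)."""
    return min(s + 1, 3) if a == "right" else max(s - 1, 0)

def value(s, gamma=0.9, horizon=50):
    """Value of s under the policy: total (discounted) reward starting from s."""
    total, cur = 0.0, s
    for t in range(horizon):
        total += (gamma ** t) * reward(cur)
        if cur == 3:             # stop once the absorbing goal is reached
            break
        cur = step(cur, policy(cur))
    return total

print({s: round(value(s), 3) for s in states})   # {0: 0.729, 1: 0.81, 2: 0.9, 3: 1.0}
```

Here the value is computed by simply rolling the policy forward; the dynamic-programming view in the slides below (Bellman equations) characterizes the same quantity without simulation.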
Policy
[Figure: a policy for the robot-in-a-room example, shown as one action arrow per state]

Reward for each step: -2 (figure)

Reward for each step: -0.1 (figure)

Reward for each step: -0.04 (figure)

The Precise Goal
- To find a policy that maximizes the value function.
- Transitions and rewards are usually not available.
- There are different approaches to achieve this goal in various situations.
- Value iteration and policy iteration are the two more classic approaches to this problem; essentially, both are dynamic programming.
- Q-learning is a more recent approach to this problem; essentially, it is a temporal-difference method.

Markov Decision Processes
- A Markov decision process is a tuple (S, A, {P_sa}, \gamma, R), where:
  - S is a set of states,
  - A is a set of actions,
  - P_sa are the state-transition probabilities for taking action a in state s,
  - \gamma \in [0, 1) is the discount factor,
  - R is the reward function.

The dynamics of an MDP
- We start in some state s_0 and get to choose some action a_0 \in A.
- As a result of our choice, the state of the MDP randomly transitions to some successor state s_1, drawn according to s_1 \sim P_{s_0 a_0}.
- Then we get to pick another action a_1, and so on.

The dynamics of an MDP (Cont'd)
- Upon visiting the sequence of states s_0, s_1, ... with actions a_0, a_1, ..., our total payoff is given by
  R(s_0, a_0) + \gamma R(s_1, a_1) + \gamma^2 R(s_2, a_2) + ...
- Or, when we write rewards as a function of the states only, this becomes
  R(s_0) + \gamma R(s_1) + \gamma^2 R(s_2) + ...
- For most of our development we will use the simpler state rewards R(s), though the generalization to state-action rewards R(s, a) offers no special difficulties.
- Our goal in reinforcement learning is to choose actions over time so as to maximize the expected value of the total payoff.

Policy
- A policy is any function \pi : S \to A mapping from the states to the actions.
- We say that we are executing some policy \pi if, whenever we are in state s, we take action a = \pi(s).
- We also define the value function for a policy \pi according to
  V^\pi(s) = E[R(s_0) + \gamma R(s_1) + \gamma^2 R(s_2) + ... | s_0 = s, \pi].
- V^\pi(s) is simply the expected sum of discounted rewards upon starting in state s and taking actions according to \pi.

Value Function
- Given a fixed policy \pi, its value function V^\pi satisfies the Bellman equations:
  V^\pi(s) = R(s) + \gamma \sum_{s'} P_{s\pi(s)}(s') V^\pi(s')
  (the immediate reward plus the expected sum of future discounted rewards).
- Bellman's equations can be used to efficiently solve for V^\pi (see later).

The Grid World
- Transition model M: 0.8 chance of moving in the direction you want to go, 0.2 in a perpendicular direction (0.1 left, 0.1 right).
- Policy: a mapping from states to actions.
- An optimal policy for this stochastic environment (shown as arrows in the original figure) and the utilities of the states:

        1       2       3       4
  3     0.812   0.868   0.912   +1
  2     0.762   (wall)  0.660   -1
  1     0.705   0.655   0.611   0.388

- Observable (accessible): the percept identifies the state; otherwise the environment is partially observable.
- Markov property: transition probabilities depend on the state only, not on the path to the state.
- With full observability this is a Markov decision problem (MDP); in a partially observable MDP (POMDP), the percepts do not have enough information to identify the transition probabilities.

Optimal Value Function
- We define the optimal value function according to
  V^*(s) = max_\pi V^\pi(s).      (1)
- In other words, this is the best possible expected sum of discounted rewards that can be …
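Because the Bellman equations above are linear in V^\pi for a fixed policy, one standard way to "efficiently solve for V^\pi" is to solve the linear system V = R + \gamma P_\pi V directly. The sketch below does this with NumPy for a hypothetical 3-state MDP; the transition matrix, rewards, and discount factor are made-up numbers, not from the lecture.

```python
import numpy as np

# Exact policy evaluation: for a fixed policy pi, the Bellman equations
#   V = R + gamma * P_pi @ V   are linear, so V = (I - gamma * P_pi)^{-1} R.
# The 3-state MDP below (rewards, transitions, discount) is an illustration.
gamma = 0.9
R = np.array([0.0, 0.0, 1.0])                 # state rewards R(s)
P_pi = np.array([[0.1, 0.9, 0.0],             # P_pi[s, s'] = P_{s, pi(s)}(s')
                 [0.0, 0.1, 0.9],
                 [0.0, 0.0, 1.0]])            # state 2 is absorbing
V = np.linalg.solve(np.eye(3) - gamma * P_pi, R)
print(V)   # expected sum of discounted rewards from each state under pi
```

For large state spaces the direct linear solve is replaced by iterative updates, which is where value iteration and policy iteration come in.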
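The outline lists value iteration as one of the dynamic-programming approaches; applied to the optimal value function V^*(s) = max_\pi V^\pi(s), it repeatedly applies the Bellman optimality update. The sketch below runs it on the 4x3 grid world from the slides, assuming the standard setup for that example (step reward -0.04 for non-terminal states, gamma = 1, terminal rewards +1 and -1, a wall at (2, 2)); these constants and the stopping threshold are assumptions, not taken from the truncated slide.

```python
# Value iteration on the 4x3 grid world from the slides: with probability 0.8
# the agent moves in the intended direction, with 0.1 each it slips to one of
# the two perpendicular directions; bumping into the border or the wall at
# (2, 2) leaves the state unchanged. Constants below are assumed, as noted.
GAMMA, STEP_REWARD, THETA = 1.0, -0.04, 1e-6
WALL = {(2, 2)}
TERMINALS = {(4, 3): 1.0, (4, 2): -1.0}
STATES = [(x, y) for x in range(1, 5) for y in range(1, 4) if (x, y) not in WALL]
MOVES = {"U": (0, 1), "D": (0, -1), "L": (-1, 0), "R": (1, 0)}
PERP = {"U": ("L", "R"), "D": ("L", "R"), "L": ("U", "D"), "R": ("U", "D")}

def next_state(s, a):
    """Intended successor of s under action a; invalid moves stay in place."""
    nxt = (s[0] + MOVES[a][0], s[1] + MOVES[a][1])
    return nxt if nxt in STATES else s

def transitions(s, a):
    """(probability, successor) pairs for the noisy transition model."""
    left, right = PERP[a]
    return [(0.8, next_state(s, a)),
            (0.1, next_state(s, left)),
            (0.1, next_state(s, right))]

V = {s: 0.0 for s in STATES}
while True:                                   # repeated Bellman optimality updates
    delta = 0.0
    for s in STATES:
        if s in TERMINALS:
            new_v = TERMINALS[s]
        else:
            new_v = STEP_REWARD + GAMMA * max(
                sum(p * V[s2] for p, s2 in transitions(s, a)) for a in MOVES)
        delta = max(delta, abs(new_v - V[s]))
        V[s] = new_v
    if delta < THETA:
        break

for y in (3, 2, 1):                           # print utilities row by row
    print([round(V[(x, y)], 3) if (x, y) in V else None for x in range(1, 5)])
```

With these assumed constants, the printed utilities should come out close to the 0.812 / 0.868 / 0.912 values in the utility table on the grid-world slide.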