Reinforcement Learning
(Some slides taken from previous 10-701 recitation lectures.)

A Fully Deterministic World
- A small grid world with two rewarding states: one with R = 50 and one with R = 100.

Long-Term Reward
- Total reward: reward is discounted by the time at which I obtain it:
  value = Σ_t r_t · 0.8^t   (discount factor γ = 0.8).
- Starting at the R = 50 state gives 50; starting one step away, reaching it is worth 40 (= 0.8 · 50); starting two steps away, 32 (= 0.8² · 50).

We Can Reuse Computation
- Value of a policy if I run for 0 time steps: V_0 = 0 at every state.
- Value for 1 time step: V_1 is just the immediate reward (100 at the R = 100 state, 50 at the R = 50 state, 0 elsewhere).
- Value for 2 time steps: states one step from the rewards pick up discounted values (80 = 0.8 · 100, 40 = 0.8 · 50).
- Value for 3 time steps: values propagate one step further (e.g. 64 = 0.8² · 100).

Non-Deterministic World
- Same grid, but a move succeeds only with probability 0.7 and fails with probability 0.3.
- V_1 is unchanged (immediate rewards only).
- V_2: the state next to the R = 100 state is now worth 56 = 0.8 · (0.7 · 100 + 0.3 · 0) instead of 80.
- V_3: that state improves to 69.44 = 0.8 · (0.7 · 100 + 0.3 · 56), and a value of 44.8 appears one step further back.

Value Iteration
- Value of following a fixed policy π: the immediate reward of following the policy plus the discounted expected future reward:
  V^{t+1}(x) = R(x, π(x)) + γ Σ_{x'} P(x' | x, π(x)) · V^t(x')

Find the BEST Policy
- Ask the question in a slightly different way: what is the value of the best policy?
  V^{t+1}(x) = max_a [ R(x, a) + γ Σ_{x'} P(x' | x, a) · V^t(x') ]
- Again this is the immediate reward plus the discounted future reward, but maximized over actions.
- The optimal policy is optimal at every state, so the best action at x is the argmax of the bracketed term.

Policy Learning Example
- Running this update on the non-deterministic grid: start with all values at 0, then repeatedly back up values (40, 100, ...) toward the start state, taking the greedy action at each state, until the values and the policy stop changing.

Backgammon
- Something is wrong here: the state space of backgammon is far too large to store a value for every state.
- Dealing with huge state spaces:
  - Estimate V(x) instead of tabulating it: approximate V(x) with a neural net.
  - The reward is 0 except when you win or lose.
  - The future-value term can be estimated from our current network; in this case P(x' | x, a) is 0 or 1 for all x.
  - Since V is a neural net, we cannot simply set the value V(x); instead we use the target value as a training example for the network.
  - We cannot visit every state, so instead we play games against ourselves to visit the most likely ones.

Unknown World
- We do not know the transitions, do not know the probabilities, and do not know the rewards.
- We only know a state when we actually get there.

Possible Questions
1. I am in state X: what is the value of following a particular policy?
2. What is the best policy?

Value of a Policy
- If I know the rewards (and the transition model), I can use the value-iteration update above.
- If I do not know the rewards, update from the experienced transitions (x_t, r_t, x_{t+1}) with a learning rate α:
  V^{t+1}(x_t) = α [ r_t + γ V^t(x_{t+1}) ] + (1 − α) V^t(x_t)

Learning a Policy: Q-Learning
- Define Q(s, a), which estimates both values and rewards: the immediate reward of taking action a in state s plus the discounted value of the state x' that results from taking action a in state s.
- Estimate Q the same way we estimated V:
  Q^{t+1}(x_t, a_t) = α [ r_t + γ max_{a'} Q^t(x_{t+1}, a') ] + (1 − α) Q^t(x_t, a_t)

Q-Learning Example (γ = 0.8, α = 0.5)
- All Q values start at 0.
- Taking the action that pays R = 50 updates its Q value to 25 = 0.5 · (50 + 0.8 · 0) + 0.5 · 0.
- Taking the preceding action (R = 0) backs up part of that value: 10 = 0.5 · (0 + 0.8 · 25) + 0.5 · 0.
- Taking the R = 50 action again updates 25 to 37.5 = 0.5 · (50 + 0.8 · 0) + 0.5 · 25.

Two short code sketches follow: one for value iteration on a toy model, and one for tabular Q-learning in an unknown world.
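Below is a minimal value-iteration sketch in Python. The three-state chain, the action names, and the reward placement are illustrative assumptions rather than the exact grid from the slides; the discount γ = 0.8 and the 0.7 success probability match the numbers used above.

```python
# A minimal value-iteration sketch for a small MDP like the slides' grid world.
# The states, actions, transition table, and rewards below are illustrative
# stand-ins, not the exact lecture grid; gamma = 0.8 matches the slides.

def value_iteration(states, actions, P, R, gamma=0.8, n_iters=50):
    """P[s][a] is a list of (prob, next_state); R[s][a] is the immediate reward."""
    V = {s: 0.0 for s in states}          # V_0 = 0 at every state
    for _ in range(n_iters):
        V_new = {}
        for s in states:
            # Best action: immediate reward + discounted expected future value
            V_new[s] = max(
                R[s][a] + gamma * sum(p * V[s2] for p, s2 in P[s][a])
                for a in actions
            )
        V = V_new
    return V

# Tiny 3-state chain: s0 -> s1 -> s2 (goal).  "go" succeeds with prob 0.7,
# otherwise the agent stays put (the non-deterministic world above).
states, actions = ["s0", "s1", "s2"], ["go", "stay"]
P = {
    "s0": {"go": [(0.7, "s1"), (0.3, "s0")], "stay": [(1.0, "s0")]},
    "s1": {"go": [(0.7, "s2"), (0.3, "s1")], "stay": [(1.0, "s1")]},
    "s2": {"go": [(1.0, "s2")], "stay": [(1.0, "s2")]},
}
R = {
    "s0": {"go": 0, "stay": 0},
    "s1": {"go": 100, "stay": 0},   # the rewarding move, as in the R = 100 slides
    "s2": {"go": 0, "stay": 0},
}
print(value_iteration(states, actions, P, R))
```

Each sweep of the loop plays the same role as moving from V_t to V_{t+1} in the slides: the discounted value backs up one more step away from the rewarding transition.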
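And here is a tabular Q-learning sketch for the unknown-world setting. This is again a sketch under assumptions: the five-state chain environment, the ε-greedy exploration rate, and the episode count are made up for illustration, while the update rule and the constants γ = 0.8, α = 0.5 follow the worked example above.

```python
import random

# A minimal tabular Q-learning sketch matching the update above:
#   Q(x_t, a_t) <- alpha * (r_t + gamma * max_a' Q(x_{t+1}, a')) + (1 - alpha) * Q(x_t, a_t)
# The environment is a hypothetical 5-state chain with a reward of 50 at the
# right end; gamma = 0.8 and alpha = 0.5 mirror the worked example.

N_STATES, GOAL, GAMMA, ALPHA = 5, 4, 0.8, 0.5
ACTIONS = [-1, +1]                                   # step left or step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(s, a):
    """Unknown to the learner: returns (next_state, reward)."""
    s_next = min(max(s + a, 0), N_STATES - 1)
    return s_next, 50.0 if s_next == GOAL else 0.0

for episode in range(200):
    s = 0
    while s != GOAL:
        # epsilon-greedy: mostly take the best-known action, sometimes explore
        if random.random() < 0.2:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next, r = step(s, a)
        target = r + GAMMA * max(Q[(s_next, a2)] for a2 in ACTIONS)
        Q[(s, a)] = ALPHA * target + (1 - ALPHA) * Q[(s, a)]
        s = s_next

print({k: round(v, 2) for k, v in Q.items()})
```

Note that the learner never reads the transition or reward tables directly; it only sees the (next state, reward) pair returned by `step`, which is the "only know a state when we actually get there" setting described above.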