CS 188: Artificial Intelligence
Fall 2009
Lecture 11: Reinforcement Learning
10/1/2009
Dan Klein – UC Berkeley
Many slides over the course adapted from either Stuart Russell or Andrew Moore

Outline: Announcements; Reinforcement Learning; Passive Learning; Recap: Model-Based Policy Evaluation; Model-Based Learning; Example: Model-Based Learning; Model-Free Learning; Example: Direct Estimation; Sample-Based Policy Evaluation?; Temporal-Difference Learning; Exponential Moving Average; Example: TD Policy Evaluation; Problems with TD Value Learning; Active Learning; Detour: Q-Value Iteration; Q-Learning; Q-Learning Properties; Exploration / Exploitation; Exploration Functions; The Story So Far: MDPs and RL; Example: Pacman; Feature-Based Representations; Linear Feature Functions; Function Approximation; Example: Q-Pacman; Linear regression; Ordinary Least Squares (OLS); Minimizing Error; Overfitting; Policy Search; Policy Search*; Take a Deep Breath…

Announcements
- P0 / P1 are in glookup. If you have no entry, etc., email the staff list!
- If you have questions, see one of us or email the list.
- P3: MDPs and Reinforcement Learning is up!
- W2: MDPs, RL, and Probability will be up before next class.

Reinforcement Learning
- Still assume an MDP:
  - A set of states s ∈ S
  - A set of actions (per state) a ∈ A
  - A model T(s,a,s')
  - A reward function R(s,a,s')
- Still looking for a policy π(s)
- New twist: we don't know T or R
  - I.e., we don't know which states are good or what the actions do
  - Must actually try out actions and states to learn
[DEMO]

Passive Learning
- Simplified task:
  - You don't know the transitions T(s,a,s')
  - You don't know the rewards R(s,a,s')
  - You are given a policy π(s)
  - Goal: learn the state values … what policy evaluation did
- In this case:
  - The learner is "along for the ride"
  - No choice about what actions to take
  - Just execute the policy and learn from experience
  - We'll get to the active case soon
- This is NOT offline planning! You actually take actions in the world and see what happens…

Recap: Model-Based Policy Evaluation
- Simplified Bellman updates to calculate V for a fixed policy π:
    V^π_{i+1}(s) ← Σ_{s'} T(s, π(s), s') [ R(s, π(s), s') + γ V^π_i(s') ]
- The new V is an expected one-step lookahead using the current V
- Unfortunately, this needs T and R

Model-Based Learning
- Idea:
  - Learn the model empirically through experience
  - Solve for values as if the learned model were correct
- Simple empirical model learning:
  - Count outcomes s' for each (s, a)
  - Normalize to give an estimate of T(s,a,s')
  - Discover R(s,a,s') when we experience (s,a,s')
- Solving the MDP with the learned model:
  - Iterative policy evaluation, for example

Example: Model-Based Learning
- Grid world with exit rewards +100 and -100; γ = 1
- Episode 1: (1,1) up -1; (1,2) up -1; (1,2) up -1; (1,3) right -1; (2,3) right -1; (3,3) right -1; (3,2) up -1; (3,3) right -1; (4,3) exit +100 (done)
- Episode 2: (1,1) up -1; (1,2) up -1; (1,3) right -1; (2,3) right -1; (3,3) right -1; (3,2) up -1; (4,2) exit -100 (done)
- Learned transitions, e.g.:
    T(<3,3>, right, <4,3>) = 1 / 3
    T(<2,3>, right, <3,3>) = 2 / 2

Model-Free Learning
- Want to compute an expectation weighted by P(x):
    E[f(x)] = Σ_x P(x) f(x)
- Model-based: estimate P(x) from samples, then compute the expectation
- Model-free: estimate the expectation directly from samples:
    E[f(x)] ≈ (1/N) Σ_i f(x_i)
- Why does this work? Because samples appear with the right frequencies!

Example: Direct Estimation
- Grid world with exit rewards +100 and -100; γ = 1, living reward R = -1
- Episode 1: (1,1) up -1; (1,2) up -1; (1,2) up -1; (1,3) right -1; (2,3) right -1; (3,3) right -1; (3,2) up -1; (3,3) right -1; (4,3) exit +100 (done)
- Episode 2: (1,1) up -1; (1,2) up -1; (1,3) right -1; (2,3) right -1; (3,3) right -1; (3,2) up -1; (4,2) exit -100 (done)
- Estimated values (averages of the observed returns):
    V(2,3) ≈ (96 + -103) / 2 = -3.5
    V(3,3) ≈ (99 + 97 + -102) / 3 ≈ 31.3
[DEMO – Optimal Policy]
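To make direct estimation concrete, here is a minimal Python sketch (not from the slides; the episode encoding and function name are illustrative choices) that averages the discounted return observed after every visit to a state. Run on the two episodes above, it reproduces the slide's numbers.

from collections import defaultdict

def direct_estimation(episodes, gamma=1.0):
    """Estimate V(s) under the fixed policy by averaging the returns
    observed after every visit to s (every-visit Monte Carlo)."""
    returns = defaultdict(list)
    for episode in episodes:                 # episode: list of (state, action, reward)
        G = 0.0
        # Walk backwards: return at a step = reward + gamma * (return from the next step)
        for state, action, reward in reversed(episode):
            G = reward + gamma * G
            returns[state].append(G)
    return {s: sum(gs) / len(gs) for s, gs in returns.items()}

ep1 = [((1,1),'up',-1), ((1,2),'up',-1), ((1,2),'up',-1), ((1,3),'right',-1),
       ((2,3),'right',-1), ((3,3),'right',-1), ((3,2),'up',-1),
       ((3,3),'right',-1), ((4,3),'exit',+100)]
ep2 = [((1,1),'up',-1), ((1,2),'up',-1), ((1,3),'right',-1), ((2,3),'right',-1),
       ((3,3),'right',-1), ((3,2),'up',-1), ((4,2),'exit',-100)]

V = direct_estimation([ep1, ep2])
print(V[(2,3)])   # -3.5, as on the slide
print(V[(3,3)])   # ~31.3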
Sample-Based Policy Evaluation?
- Who needs T and R? Approximate the expectation with samples (drawn from T!):
    We want: V^π_{i+1}(s) ← Σ_{s'} T(s, π(s), s') [ R(s, π(s), s') + γ V^π_i(s') ]
    sample_1 = R(s, π(s), s'_1) + γ V^π_i(s'_1)
    sample_2 = R(s, π(s), s'_2) + γ V^π_i(s'_2)
    …
    V^π_{i+1}(s) ← (1/k) Σ_k sample_k
- Almost! But we only actually make progress when we move to i+1.

Temporal-Difference Learning
- Big idea: learn from every experience!
  - Update V(s) each time we experience (s, a, s', r)
  - Likely s' will contribute updates more often
- Temporal difference learning:
  - Policy still fixed!
  - Move values toward the value of whatever successor occurs: a running average!
- Sample of V(s):   sample = R(s, π(s), s') + γ V^π(s')
- Update to V(s):   V^π(s) ← (1 − α) V^π(s) + α · sample
- Same update:      V^π(s) ← V^π(s) + α (sample − V^π(s))

Exponential Moving Average
- Exponential moving average:   x̄_n = (1 − α) x̄_{n−1} + α x_n
- Makes recent samples more important
- Forgets about the past (distant past values were wrong anyway)
- Easy to compute from the running average
- A decreasing learning rate can give converging averages

Example: TD Policy Evaluation
- Take γ = 1, α = 0.5
- Episode 1: (1,1) up -1; (1,2) up -1; (1,2) up -1; (1,3) right -1; (2,3) right -1; (3,3) right -1; (3,2) up -1; (3,3) right -1; (4,3) exit +100 (done)
- Episode 2: (1,1) up -1; (1,2) up -1; (1,3) right -1; (2,3) right -1; (3,3) right -1; (3,2) up -1; (4,2) exit -100 (done)
[DEMO – Grid V's]

Problems with TD Value Learning
- TD value learning is a model-free way to do policy evaluation
- However, if we want to turn values into a (new) policy, we're sunk: acting greedily requires a one-step lookahead,
    π(s) = argmax_a Σ_{s'} T(s,a,s') [ R(s,a,s') + γ V(s') ]
  which needs T and R again
- Idea: learn Q-values directly
- Makes action selection model-free too!

Active Learning
- Full reinforcement learning:
  - You don't know the transitions T(s,a,s')
  - You don't know the rewards R(s,a,s')
  - You can choose any actions you like
  - Goal: learn the optimal policy … what value iteration did!
- In this case:
  - The learner makes choices!
  - Fundamental tradeoff: exploration vs. exploitation
- This is NOT offline planning! You actually take actions in the world and find out what happens…

Detour: Q-Value Iteration
- Value iteration: find successive approximations of the optimal values
  - Start with V_0*(s) = 0, which we know is right (why?)
  - Given V_i*, calculate the values for all states for depth i+1:
      V_{i+1}*(s) ← max_a Σ_{s'} T(s,a,s') [ R(s,a,s') + γ V_i*(s') ]
- But Q-values are more useful!
  - Start with Q_0*(s,a) = 0, which we know is right (why?)
  - Given Q_i*, calculate the q-values for all q-states for depth i+1:
      Q_{i+1}*(s,a) ← Σ_{s'} T(s,a,s') [ R(s,a,s') + γ max_{a'} Q_i*(s',a') ]

Q-Learning
- Q-Learning: sample-based Q-value iteration
- Learn Q*(s,a) values:
  - Receive a sample (s, a, s', r)
  - Consider your old estimate: Q(s,a)
  - Consider your new sample estimate:   sample = r + γ max_{a'} Q(s',a')
  - Incorporate the new estimate into a running average:
      Q(s,a) ← (1 − α) Q(s,a) + α · sample
[DEMO – Grid Q's]

Q-Learning Properties
- Amazing result: Q-learning converges to the optimal policy
  - If you explore enough
  - If you make the learning rate small enough
  - … but don't decrease it too quickly!
  - Basically doesn't matter how you select actions (!)
- Neat property: off-policy learning
  - Learn the optimal policy without following it (some caveats)
[DEMO – Grid
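As a sketch of the running-average update from the Q-Learning slide, here is a minimal tabular Q-learner in Python. The environment interface (reset, step, actions) and the ε-greedy action choice are assumptions for illustration, not part of the slides; any sufficiently exploratory action selection would do.

import random
from collections import defaultdict

def q_learning(env, episodes=1000, alpha=0.5, gamma=1.0, epsilon=0.1):
    """Tabular Q-learning: Q(s,a) <- (1-alpha) Q(s,a) + alpha [r + gamma max_a' Q(s',a')].
    Assumes a hypothetical env with reset(), step(a) -> (s', r, done), and actions(s)."""
    Q = defaultdict(float)                       # unseen (s,a) pairs default to 0.0
    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            # epsilon-greedy exploration: mostly greedy, occasionally random
            if random.random() < epsilon:
                a = random.choice(env.actions(s))
            else:
                a = max(env.actions(s), key=lambda act: Q[(s, act)])
            s2, r, done = env.step(a)
            # sample estimate of Q(s,a); terminal states contribute no future value
            target = r if done else r + gamma * max(Q[(s2, a2)] for a2 in env.actions(s2))
            Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * target
            s = s2
    return Q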