CS 188: Artificial Intelligence
Spring 2006
Lecture 22: Reinforcement Learning
4/11/2006
Dan Klein – UC Berkeley

Today
- More MDPs: policy iteration
- Reinforcement learning
  - Passive learning
  - Active learning

Recap: MDPs
- Markov decision processes (MDPs):
  - A set of states s ∈ S
  - A model T(s,a,s'): the probability that the outcome of action a in state s is s'
  - A reward function R(s)
- Solutions to an MDP:
  - A policy π(s): specifies an action for each state
  - We want to find a policy which maximizes total expected utility = expected (discounted) rewards

Bellman Equations
- The value of a state according to a policy π:
  U^π(s) = R(s) + γ Σ_s' T(s, π(s), s') U^π(s')
- The policy according to a value function U:
  π(s) = argmax_a Σ_s' T(s, a, s') U(s')
- The optimal value of a state:
  U(s) = R(s) + γ max_a Σ_s' T(s, a, s') U(s')

Recap: Value Iteration
- Idea:
  - Start with (bad) value estimates, e.g. U_0(s) = 0
  - Start with the corresponding (bad) policy π_0(s)
  - Update the values using the Bellman relations (once)
  - Update the policy based on the new values
  - Repeat until convergence

Policy Iteration
- Alternate approach:
  - Policy evaluation: calculate exact utility values for a fixed policy
  - Policy improvement: update the policy based on those values
  - Repeat until convergence
- This is policy iteration
- It can converge faster than value iteration under some conditions

Policy Evaluation
- If we have a fixed policy π, use a simplified Bellman update to calculate utilities:
  U_{i+1}(s) ← R(s) + γ Σ_s' T(s, π(s), s') U_i(s')
- Unlike in value iteration, the policy does not change during the update process
- Converges to the expected utility values for this π
- Can also solve for U^π with linear algebra methods instead of iteration

Policy Improvement
- Once the values are correct for the current policy, update the policy:
  π(s) ← argmax_a Σ_s' T(s, a, s') U^π(s')
- Note:
  - Value iteration updates: U, π, U, π, U, π, …
  - Policy iteration updates: U, U, …, U, π, U, U, …, U, π, …
  - Otherwise, basically the same!
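To make the evaluate/improve loop concrete, here is a minimal Python sketch of policy iteration for a generic finite MDP. The dictionary encoding of T and R, the function name, and the fixed number of evaluation sweeps are illustrative assumptions, not from the lecture.

```python
import random

def policy_iteration(states, actions, T, R, gamma=0.9, eval_sweeps=50):
    """Policy iteration sketch: T[(s, a)] is a list of (s', prob) pairs and
    R[s] is the state reward (both assumed encodings)."""
    pi = {s: random.choice(actions) for s in states}   # arbitrary initial policy
    U = {s: 0.0 for s in states}
    while True:
        # Policy evaluation: iterate the simplified Bellman update for fixed pi.
        for _ in range(eval_sweeps):
            U = {s: R[s] + gamma * sum(p * U[s2] for s2, p in T[(s, pi[s])])
                 for s in states}
        # Policy improvement: act greedily with respect to the evaluated values.
        new_pi = {s: max(actions,
                         key=lambda a: sum(p * U[s2] for s2, p in T[(s, a)]))
                  for s in states}
        if new_pi == pi:   # stable policy: optimal for this model
            return pi, U
        pi = new_pi
```

The evaluation step here just sweeps the simplified update a fixed number of times; as noted above, one could instead solve the resulting linear system for U^π exactly.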
don’t know which states are good or what the actions doMust actually try actions and states out to learnExample: Animal LearningRL studied experimentally for more than 60 years in psychologyRewards: food, pain, hunger, drugs, etc.Mechanisms and sophistication debatedExample: foragingBees learn near-optimal foraging plan in field of artificial flowers with controlled nectar suppliesBees have a direct neural connection from nectar intake measurement to motor planning areaExample: Autonomous HelicopterExample: Autonomous HelicopterExample: BackgammonReward only for win / loss in terminal states, zero otherwiseTD-Gammon learns a function approximation to U(s) using a neural networkCombined with depth 3 search, one of the top 3 players in the world(We’ll cover game playing in a few weeks)Passive LearningSimplified taskYou don’t know the transitions T(s,a,s’)You don’t know the rewards R(s)You DO know the policy (s)Goal: learn the state values (and maybe the model)In this case:No choice about what actions to takeJust execute the policy and learn from experienceWe’ll get to the general case soonExample: Direct EstimationEpisodes:xy(1,1) -1 up(1,2) -1 up(1,2) -1 up(1,3) -1 right(2,3) -1 right(3,3) -1 right(3,2) -1 up(3,3) -1 right(4,3) +100(1,1) -1 up(1,2) -1 up(1,3) -1 right(2,3) -1 right(3,3) -1 right(3,2) -1 up(4,2) -100U(1,1) ~ (92 + -106) / 2 = -7U(3,3) ~ (99 + 97 + -102) / 3 = -31.3Model-Based LearningIdea:Learn the model empirically (rather than values)Solve the MDP as if the learned model were correctEmpirical model learningSimplest case:Count outcomes for each s,aNormalize to give estimate of T(s,a,s’)Discover R(s) the first time we enter sMore complex learners are possible (e.g. if we know that all squares have related action outcomes “stationary noise”)Example: Model-Based LearningEpisodes:xy(1,1) -1 up(1,2) -1 up(1,2) -1 up(1,3) -1 right(2,3) -1 right(3,3) -1 right(3,2) -1 up(3,3) -1 right(4,3) +100(1,1) -1 up(1,2) -1 up(1,3) -1 right(2,3) -1 right(3,3) -1 right(3,2) -1 up(4,2) -100T(<3,3>, right, <4,3>) = 1 / 3T(<2,3>, right, <3,3>) = 2 / 2R(3,3) = -1, R(4,1) = 0?Model-Free LearningBig idea: why bother learning T?Update each time we experience a transitionFrequent outcomes will contribute more updates (over time)Temporal difference learning (TD)Policy still fixed!Move values toward value of whatever successor occurs[DEMO]Example: Passive TD(1,1) -1 up(1,2) -1 up(1,2) -1 up(1,3) -1 right(2,3) -1 right(3,3) -1 right(3,2) -1 up(3,3) -1 right(4,3) +100(1,1) -1 up(1,2) -1 up(1,3) -1 right(2,3) -1 right(3,3) -1 right(3,2) -1 up(4,2) -100Take = 1, = 0.1(Greedy) Active LearningIn general, want to learn the optimal policyIdea:Learn an initial model of the environment:Solve for the optimal policy for this model (value or policy iteration)Refine model through experience and repeatExample: Greedy Active LearningImagine we find the lower path to the good exit firstSome states will never be visited following this policy from (1,1)We’ll keep re-using this policy because following it never collects the regions of the model we need to learn the optimal policy ? ?What Went Wrong?Problem with following optimal policy for current model:Never learn about better regions of the spaceFundamental tradeoff: exploration vs. exploitationExploration: must take actions with suboptimal estimates to discover new rewards and increase eventual utilityExploitation: once the true optimal policy is learned, exploration reduces utilitySystems must explore in the beginning and exploit in the limit? 
(Greedy) Active Learning
- In general, we want to learn the optimal policy
- Idea:
  - Learn an initial model of the environment
  - Solve for the optimal policy for this model (value or policy iteration)
  - Refine the model through experience and repeat

Example: Greedy Active Learning
- Imagine we find the lower path to the good exit first
- Some states will never be visited when following this policy from (1,1)
- We'll keep re-using this policy, because following it never gathers experience in the regions of the model we would need to learn the optimal policy

What Went Wrong?
- The problem with following the optimal policy for the current model:
  - We never learn about better regions of the space
- Fundamental tradeoff: exploration vs. exploitation
  - Exploration: we must take actions with suboptimal estimates to discover new rewards and increase eventual utility
  - Exploitation: once the true optimal policy is learned, exploration reduces utility
  - Systems must explore in the beginning and exploit in the limit (one common recipe is sketched below)
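One common way to realize "explore early, exploit in the limit" is an ε-greedy rule with a decaying exploration rate. This sketch runs ahead of the lecture, and every name and parameter in it is an illustrative assumption.

```python
import random

def epsilon_greedy(s, actions, U, T, t, eps0=1.0, decay=0.99):
    """With probability eps, explore (random action); otherwise exploit the
    greedy action under the current estimates. eps decays with step count t,
    so exploration fades over time. T[(s, a)] is a list of (s', prob) pairs,
    as in the earlier sketches."""
    eps = eps0 * decay**t
    if random.random() < eps:
        return random.choice(actions)        # explore
    return max(actions,                      # exploit
               key=lambda a: sum(p * U[s2] for s2, p in T[(s, a)]))
```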
Next Time
- Active reinforcement learning
- Q-learning
- Balancing exploration / exploitation
- Function approximation
  - Generalization for reinforcement learning
  - Modeling utilities for complex …