Berkeley COMPSCI 188 - Lecture 22: Reinforcement Learning

CS 188: Artificial Intelligence
Spring 2006
Lecture 22: Reinforcement Learning
4/11/2006
Dan Klein – UC Berkeley

Today
- More MDPs: policy iteration
- Reinforcement learning
  - Passive learning
  - Active learning

Recap: MDPs
- Markov decision processes (MDPs):
  - A set of states s ∈ S
  - A model T(s, a, s'): the probability that the outcome of action a in state s is s'
  - A reward function R(s)
- Solutions to an MDP:
  - A policy π(s) specifies an action for each state
  - We want to find a policy which maximizes total expected utility = expected (discounted) rewards

Bellman Equations
- The value of a state according to π:
    Uπ(s) = R(s) + γ Σ_s' T(s, π(s), s') Uπ(s')
- The policy according to a value U:
    πU(s) = argmax_a Σ_s' T(s, a, s') U(s')
- The optimal value of a state:
    U*(s) = R(s) + γ max_a Σ_s' T(s, a, s') U*(s')

Recap: Value Iteration
- Idea:
  - Start with (bad) value estimates (e.g. U0(s) = 0)
  - Start with the corresponding (bad) policy π0(s)
  - Update values using the Bellman relations (once)
  - Update the policy based on the new values
  - Repeat until convergence

Policy Iteration
- Alternate approach:
  - Policy evaluation: calculate exact utility values for a fixed policy
  - Policy improvement: update the policy based on those values
  - Repeat until convergence
- This is policy iteration
- Can converge faster under some conditions

Policy Evaluation
- If we have a fixed policy π, use a simplified Bellman update to calculate utilities:
    U(s) ← R(s) + γ Σ_s' T(s, π(s), s') U(s')
- Unlike in value iteration, the policy does not change during the update process
- Converges to the expected utility values for this π
- Can also solve for U with linear algebra methods instead of iteration

Policy Improvement
- Once values are correct for the current policy, update the policy:
    π(s) ← argmax_a Σ_s' T(s, a, s') U(s')
- Note the difference in update patterns:
  - Value iteration: U, π, U, π, U, π, …
  - Policy iteration: U, U, U, U, …, π, U, U, U, U, …, π, …
- Otherwise, basically the same!

Reinforcement Learning
- Still have an MDP:
  - A set of states s ∈ S
  - A model T(s, a, s')
  - A reward function R(s)
- Still looking for a policy π(s)
- New twist: we don't know T or R
  - I.e. we don't know which states are good or what the actions do
  - Must actually try out actions and states to learn

Example: Animal Learning
- RL has been studied experimentally for more than 60 years in psychology
- Rewards: food, pain, hunger, drugs, etc.
- Mechanisms and sophistication debated
- Example: foraging
  - Bees learn a near-optimal foraging plan in a field of artificial flowers with controlled nectar supplies
  - Bees have a direct neural connection from nectar intake measurement to motor planning areas

Example: Autonomous Helicopter

Example: Backgammon
- Reward only for win / loss in terminal states, zero otherwise
- TD-Gammon learns a function approximation to U(s) using a neural network
- Combined with depth-3 search, one of the top 3 players in the world
- (We'll cover game playing in a few weeks)

Passive Learning
- Simplified task:
  - You don't know the transitions T(s, a, s')
  - You don't know the rewards R(s)
  - You DO know the policy π(s)
  - Goal: learn the state values (and maybe the model)
- In this case:
  - No choice about what actions to take
  - Just execute the policy and learn from experience
  - We'll get to the general case soon

Example: Direct Estimation
- Two episodes in the grid world (state, reward, action):
    Episode 1: (1,1) -1 up, (1,2) -1 up, (1,2) -1 up, (1,3) -1 right, (2,3) -1 right,
               (3,3) -1 right, (3,2) -1 up, (3,3) -1 right, (4,3) +100
    Episode 2: (1,1) -1 up, (1,2) -1 up, (1,3) -1 right, (2,3) -1 right,
               (3,3) -1 right, (3,2) -1 up, (4,2) -100
- Average the observed returns per state:
    U(1,1) ≈ (92 + -106) / 2 = -7
    U(3,3) ≈ (99 + 97 + -102) / 3 ≈ 31.3

Model-Based Learning
- Idea:
  - Learn the model empirically (rather than the values)
  - Solve the MDP as if the learned model were correct
- Empirical model learning, simplest case:
  - Count outcomes for each (s, a)
  - Normalize to give an estimate of T(s, a, s')
  - Discover R(s) the first time we enter s
- More complex learners are possible (e.g. if we know that all squares have related action outcomes: "stationary noise")

Example: Model-Based Learning
- Using the same two episodes as above:
    T(⟨3,3⟩, right, ⟨4,3⟩) = 1 / 3
    T(⟨2,3⟩, right, ⟨3,3⟩) = 2 / 2
    R(3,3) = -1; R(4,1) = ? (never visited in these episodes)

Model-Free Learning
- Big idea: why bother learning T?
  - Update each time we experience a transition
  - Frequent outcomes will contribute more updates (over time)
- Temporal difference learning (TD):
  - Policy still fixed!
  - Move values toward the value of whatever successor occurs:
      U(s) ← U(s) + α (R(s) + γ U(s') − U(s))
- [DEMO]

Example: Passive TD
- Replay the same two episodes, taking γ = 1, α = 0.1

(Greedy) Active Learning
- In general, we want to learn the optimal policy
- Idea:
  - Learn an initial model of the environment
  - Solve for the optimal policy for this model (value or policy iteration)
  - Refine the model through experience and repeat

Example: Greedy Active Learning
- Imagine we find the lower path to the good exit first
- Some states will never be visited following this policy from (1,1)
- We'll keep re-using this policy because following it never collects the regions of the model we need to learn the optimal policy

What Went Wrong?
- Problem with following the optimal policy for the current model:
  - We never learn about better regions of the space
- Fundamental tradeoff: exploration vs. exploitation
  - Exploration: must take actions with suboptimal estimates to discover new rewards and increase eventual utility
  - Exploitation: once the true optimal policy is learned, exploration reduces utility
- Systems must explore in the beginning and exploit in the limit

Next Time
- Active reinforcement learning
  - Q-learning
  - Balancing exploration / exploitation
- Function approximation
  - Generalization for reinforcement learning
  - Modeling utilities for complex …

