Berkeley COMPSCI 188 - Lecture 11: Reinforcement Learning

CS 188: Artificial Intelligence, Fall 2010
Lecture 11: Reinforcement Learning
9/30/2010
Dan Klein – UC Berkeley
Many slides over the course adapted from either Stuart Russell or Andrew Moore

Reinforcement Learning
- Reinforcement learning: still assume an MDP:
  - A set of states s ∈ S
  - A set of actions (per state) A
  - A model T(s,a,s')
  - A reward function R(s,a,s')
- Still looking for a policy π(s)
- New twist: we don't know T or R
  - I.e., we don't know which states are good or what the actions do
  - Must actually try out actions and states to learn
[DEMO]

Passive Learning
- Simplified task:
  - You don't know the transitions T(s,a,s')
  - You don't know the rewards R(s,a,s')
  - You are given a policy π(s)
  - Goal: learn the state values … what policy evaluation did
- In this case:
  - The learner is "along for the ride": no choice about what actions to take
  - Just execute the policy and learn from experience
  - We'll get to the active case soon
- This is NOT offline planning! You actually take actions in the world and see what happens…

Example: Direct Evaluation
[DEMO – Optimal Policy]
Episodes in the gridworld (γ = 1, step reward R = -1, exit rewards +100 and -100):
  Episode 1: (1,1) up -1, (1,2) up -1, (1,2) up -1, (1,3) right -1, (2,3) right -1, (3,3) right -1, (3,2) up -1, (3,3) right -1, (4,3) exit +100 (done)
  Episode 2: (1,1) up -1, (1,2) up -1, (1,3) right -1, (2,3) right -1, (3,3) right -1, (3,2) up -1, (4,2) exit -100 (done)
Averaging the observed returns:
  V(2,3) ≈ (96 + -103) / 2 = -3.5
  V(3,3) ≈ (99 + 97 + -102) / 3 = 31.3

Recap: Model-Based Policy Evaluation
- Simplified Bellman updates to calculate V for a fixed policy:
  V_0^π(s) = 0
  V_{i+1}^π(s) ← Σ_{s'} T(s, π(s), s') [ R(s, π(s), s') + γ V_i^π(s') ]
- The new V is an expected one-step look-ahead using the current V
- Unfortunately, this needs T and R

Model-Based Learning
- Idea:
  - Learn the model empirically through experience
  - Solve for values as if the learned model were correct
- Simple empirical model learning:
  - Count outcomes for each (s,a)
  - Normalize to give an estimate of T(s,a,s')
  - Discover R(s,a,s') when we experience (s,a,s')
- Solving the MDP with the learned model:
  - Iterative policy evaluation, for example

Example: Model-Based Learning
Using the same two episodes as above (γ = 1), the estimated transitions include:
  T((3,3), right, (4,3)) = 1 / 3
  T((2,3), right, (3,3)) = 2 / 2
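To make the count-and-normalize idea concrete, here is a minimal Python sketch (not from the slides). It assumes episodes are given as lists of (s, a, s_next, r) tuples; the function name and data layout are illustrative only.

```python
from collections import defaultdict

def learn_model(episodes):
    """Hypothetical sketch: estimate T(s,a,s') and R(s,a,s') from observed episodes.

    `episodes` is assumed to be a list of episodes, each a list of (s, a, s_next, r) tuples.
    """
    counts = defaultdict(lambda: defaultdict(int))  # counts[(s, a)][s_next] = N(s, a, s')
    rewards = {}                                    # rewards[(s, a, s_next)] = observed R(s, a, s')
    for episode in episodes:
        for (s, a, s_next, r) in episode:
            counts[(s, a)][s_next] += 1             # count outcomes for each (s, a)
            rewards[(s, a, s_next)] = r             # discover R(s,a,s') when we experience (s,a,s')
    # Normalize counts into transition-probability estimates T(s, a, s').
    T_hat = {}
    for (s, a), outcomes in counts.items():
        total = sum(outcomes.values())
        T_hat[(s, a)] = {s_next: n / total for s_next, n in outcomes.items()}
    return T_hat, rewards
```

On the two episodes above this recovers, e.g., T((3,3), right, (4,3)) = 1/3 and T((2,3), right, (3,3)) = 2/2 = 1; the learned model can then be handed to ordinary iterative policy evaluation.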
Model-Free Learning
- Want to compute an expectation weighted by P(x):
  E[f(x)] = Σ_x P(x) f(x)
- Model-based: estimate P(x) from samples, then compute the expectation
- Model-free: estimate the expectation directly from samples:
  E[f(x)] ≈ (1/N) Σ_i f(x_i), with the x_i drawn from P(x)
- Why does this work? Because samples appear with the right frequencies!

Sample-Based Policy Evaluation?
- Who needs T and R? Approximate the expectation with samples (drawn from T!):
  sample_i = R(s, π(s), s_i') + γ V_i^π(s_i')
  V_{i+1}^π(s) ← (1/N) Σ_i sample_i
- Almost! But we only actually make progress when we move to i+1.

Temporal-Difference Learning
- Big idea: learn from every experience!
  - Update V(s) each time we experience (s, a, s', r)
  - Likely successors s' will contribute updates more often
- Temporal difference learning:
  - Policy still fixed!
  - Move values toward the value of whatever successor occurs: a running average! (See the TD sketch at the end of these notes.)
- Sample of V(s): sample = R(s, π(s), s') + γ V^π(s')
- Update to V(s): V^π(s) ← (1 - α) V^π(s) + α · sample
- Same update: V^π(s) ← V^π(s) + α (sample - V^π(s))

Exponential Moving Average
- Exponential moving average: x̄_n = (1 - α) · x̄_{n-1} + α · x_n
- Makes recent samples more important
- Forgets about the past (distant past values were wrong anyway)
- Easy to compute from the running average
- A decreasing learning rate can give converging averages

Example: TD Policy Evaluation
[DEMO – Grid V's]
Take γ = 1, α = 0.5, and run the TD updates on the same two episodes as above.

Problems with TD Value Learning
- TD value learning is a model-free way to do policy evaluation
- However, if we want to turn values into a (new) policy, we're sunk:
  π(s) = argmax_a Q*(s,a)
  Q*(s,a) = Σ_{s'} T(s,a,s') [ R(s,a,s') + γ V*(s') ]
- Idea: learn Q-values directly
- This makes action selection model-free too!

Active Learning
- Full reinforcement learning:
  - You don't know the transitions T(s,a,s')
  - You don't know the rewards R(s,a,s')
  - You can choose any actions you like
  - Goal: learn the optimal policy … what value iteration did!
- In this case:
  - The learner makes choices!
  - Fundamental tradeoff: exploration vs. exploitation
  - This is NOT offline planning! You actually take actions in the world and find out what happens…

Detour: Q-Value Iteration
- Value iteration: find successive approximations of the optimal values
  - Start with V_0*(s) = 0, which we know is right (why?)
  - Given V_i*, calculate the values for all states for depth i+1:
    V_{i+1}*(s) ← max_a Σ_{s'} T(s,a,s') [ R(s,a,s') + γ V_i*(s') ]
- But Q-values are more useful!
  - Start with Q_0*(s,a) = 0, which we know is right (why?)
  - Given Q_i*, calculate the q-values for all q-states for depth i+1:
    Q_{i+1}*(s,a) ← Σ_{s'} T(s,a,s') [ R(s,a,s') + γ max_{a'} Q_i*(s',a') ]

Q-Learning
- Q-learning: sample-based Q-value iteration (see the Q-learning sketch at the end of these notes)
- Learn Q*(s,a) values:
  - Receive a sample (s, a, s', r)
  - Consider your old estimate: Q(s,a)
  - Consider your new sample estimate: sample = r + γ max_{a'} Q(s',a')
  - Incorporate the new estimate into a running average: Q(s,a) ← (1 - α) Q(s,a) + α · sample
[DEMO – Grid Q's]

Q-Learning Properties
- Amazing result: Q-learning converges to the optimal policy
  - If you explore enough
  - If you make the learning rate small enough
  - … but don't decrease it too quickly!
  - Basically, it doesn't matter how you select actions (!)
- Neat property: off-policy learning
  - Learn the optimal policy without following it (some caveats)
[DEMO – Grid Q's]

Exploration / Exploitation
- Several schemes for forcing exploration
- Simplest: random actions (ε-greedy)
  - Every time step, flip a coin
  - With probability ε, act randomly
  - With probability 1-ε, act according to the current policy
- Problems with random actions?
  - You do explore the space, but you keep thrashing around once learning is done
  - One solution: lower ε over time
  - Another solution: exploration functions

Exploration Functions
- When to explore:
  - Random actions: explore a fixed amount
  - Better idea: explore areas whose badness is not (yet) established
- Exploration function: takes a value estimate and a visit count, and returns an optimistic utility, e.g. f(u, n) = u + k/n (the exact form is not important; a small sketch follows at the end of these notes)
[DEMO – Auto Grid Q's]

Q-Learning
- Q-learning produces tables of q-values:
[DEMO – Crawler]
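To make the TD update concrete, here is a minimal sketch under the same assumed episode format as before ((s, a, s_next, r) tuples); γ = 1 and α = 0.5 match the example slide, and all names are illustrative.

```python
from collections import defaultdict

def td_policy_evaluation(episodes, gamma=1.0, alpha=0.5):
    """Hypothetical sketch of TD(0) evaluation of a fixed policy."""
    V = defaultdict(float)                      # V[s], initialized to 0
    for episode in episodes:                    # each episode: list of (s, a, s_next, r)
        for (s, a, s_next, r) in episode:
            # The action a comes from the fixed policy π and is not used in the update.
            sample = r + gamma * V[s_next]      # sample of V(s): R(s,π(s),s') + γ V(s')
            V[s] = (1 - alpha) * V[s] + alpha * sample   # running (exponential moving) average
    return dict(V)
```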
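The following sketch puts the sample-based Q-update and ε-greedy action selection together as tabular Q-learning. The env interface (reset/step), the assumption that every state shares the same action set, and all parameter defaults are illustrative assumptions, not part of the lecture.

```python
import random
from collections import defaultdict

def q_learning(env, actions, num_episodes=1000, gamma=1.0, alpha=0.5, epsilon=0.1):
    """Hypothetical sketch: env.reset() -> s and env.step(a) -> (s_next, r, done) are assumed."""
    Q = defaultdict(float)                                   # Q[(s, a)], initialized to 0
    for _ in range(num_episodes):
        s = env.reset()
        done = False
        while not done:
            if random.random() < epsilon:                    # with probability ε, act randomly
                a = random.choice(actions)
            else:                                            # otherwise act greedily w.r.t. current Q
                a = max(actions, key=lambda act: Q[(s, act)])
            s_next, r, done = env.step(a)
            # Sample-based Q-value iteration update (no bootstrap past a terminal transition).
            target = r if done else r + gamma * max(Q[(s_next, act)] for act in actions)
            Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * target
            s = s_next
    return Q
```

Lowering ε and α over time, as the slides suggest, would simply mean recomputing them inside the episode loop.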
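The exploration-function idea can be grafted onto the same sketch: select actions with an optimistic utility f(u, n) that boosts rarely tried actions rather than with raw Q-values. The form u + k/n follows the example above; the value of k and the +1 guard are illustrative assumptions.

```python
def exploration_value(q_estimate, visit_count, k=100.0):
    """Hypothetical optimistic utility f(u, n) = u + k / n for exploration."""
    return q_estimate + k / (visit_count + 1)   # +1 so unvisited (s, a) pairs look very attractive
```

In the Q-learning loop above, one would keep a visit count N[(s, a)], increment it on each update, and choose a = argmax over exploration_value(Q[(s, a)], N[(s, a)]) instead of over Q alone.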

