CS 188: Artificial Intelligence, Fall 2006
Lecture 10: MDPs II
9/28/2006
Dan Klein - UC Berkeley

Announcements
- Midterm prep page is up.
- Project 2.1 will be up in a day or so. It is due after the midterm, but start it now, because MDPs are on the midterm!
- Project 1.4 (Learning Pacman) and the Pacman contest will come after the midterm.
- Review session TBA; check the web page.

Recap: MDPs
- Markov decision processes:
  - States S
  - Actions A
  - Transitions P(s'|s,a) (or T(s,a,s'))
  - Rewards R(s,a,s')
  - Start state s0
- Examples: Gridworld, High-Low, Pacman, N-Armed Bandit. Any process where the result of your action is stochastic.
- Goal: find the "best" policy π. Policies are maps from states to actions. What do we mean by "best"?
- This is like search: it's planning using a model, not actually interacting with the environment.

Example: Autonomous Helicopter

Example: High-Low
(search-tree figure: from the card 3, the actions High and Low branch over the possible next cards, with transition probabilities and rewards such as T = 0.5, R = 2; T = 0.25, R = 3; T = 0, R = 4; T = 0.25, R = 0)

MDP Search Trees
- Can view an MDP as a branching search tree.
- In the tree: s is a state, (s, a) is a q-state, and (s, a, s') is called a transition, with T(s,a,s') = P(s'|s,a) and reward R(s,a,s').

Discounting
- Typically discount rewards by γ < 1 each time step.
- Sooner rewards have higher utility than later rewards.
- Discounting also helps the algorithms converge.

Recap: MDP Quantities
- Return = sum of future discounted rewards in one episode (stochastic).
- V: expected return from a state under a policy.
- Q: expected return from a q-state under a policy.

Solving MDPs
- We want to find the optimal policy π.
- Option 1: modified expectimax search.

MDP Search Trees
- Problems: this tree is usually infinite (why?), and the same states appear over and over (why?).
- Solution: compute to a finite depth (like expectimax), consider returns from sequences of increasing length, and cache values so we don't repeat work.

Value Estimates
- Calculate estimates Vk*(s): not the optimal value of s, but the optimal value considering only the next k time steps (k rewards).
- As k → ∞, it approaches the optimal value. Why:
  - If discounting, distant rewards become negligible.
  - If terminal states are reachable from everywhere, the fraction of episodes that never end becomes negligible.
  - Otherwise, you can get infinite expected utility, and then this approach actually won't work.

Memoized Recursion
- Recurrence: Vk*(s) = max_a Σ_{s'} T(s,a,s') [ R(s,a,s') + γ V_{k-1}*(s') ], with V0*(s) = 0.
- Cache all function call results so you never repeat work.
- What happened to the evaluation function?

Value Iteration
- Problems with the recursive computation:
  - Have to keep all the Vk*(s) around all the time.
  - Don't know which depth πk(s) to ask for when planning.
- Solution: value iteration.
  - Calculate values for all states, bottom-up.
  - Keep increasing k until convergence.

The Bellman Equations
- The definition of utility leads to a simple relationship amongst optimal utility values: optimal rewards = maximize over the first action and then follow the optimal policy.
- Formally: V*(s) = max_a Σ_{s'} T(s,a,s') [ R(s,a,s') + γ V*(s') ]
- "That's my equation!" (Bellman)

Value Iteration
- Idea: start with V0*(s) = 0, which we know is right (why?).
- Given Vi*, calculate the values for all states for depth i+1:
  V_{i+1}*(s) = max_a Σ_{s'} T(s,a,s') [ R(s,a,s') + γ Vi*(s') ]
- This is called a value update or Bellman update (a short code sketch follows the example below).
- Repeat until convergence.
- Theorem: this will converge to unique optimal values.
  - Basic idea: the approximations get refined towards the optimal values.
  - The policy may converge long before the values do.

Example: Bellman Updates

Example: Value Iteration
- Information propagates outward from terminal states, and eventually all states have correct value estimates. [DEMO: gridworld value plots V2, V3]
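To make the Bellman update concrete, here is a minimal Python sketch of value iteration on a small invented MDP. The states, transition table T, reward function R, discount of 0.9, and the 1e-6 stopping tolerance are all made up for illustration (this is not the lecture's gridworld or High-Low example); the update itself is the one described above.

```python
# T[(s, a)] lists (next_state, probability) pairs, i.e. T(s,a,s') = P(s'|s,a).
# This toy MDP is invented purely for illustration.
T = {
    ("A", "stay"):   [("A", 1.0)],
    ("A", "go"):     [("B", 0.8), ("A", 0.2)],
    ("B", "stay"):   [("B", 1.0)],
    ("B", "go"):     [("end", 1.0)],
    ("end", "stay"): [("end", 1.0)],
}

def R(s, a, s2):
    # R(s,a,s'): reward 10 for entering the terminal state, 0 otherwise.
    return 10.0 if s2 == "end" and s != "end" else 0.0

states = ["A", "B", "end"]
gamma = 0.9

def actions(s):
    return [a for (s0, a) in T if s0 == s]

def q_value(s, a, V):
    # Q(s,a) = sum_s' T(s,a,s') [ R(s,a,s') + gamma * V(s') ]
    return sum(p * (R(s, a, s2) + gamma * V[s2]) for s2, p in T[(s, a)])

# Value iteration: start from V0 = 0 and repeat Bellman updates until the
# largest change across states (the max-norm of the update) is tiny.
V = {s: 0.0 for s in states}
while True:
    newV = {s: max(q_value(s, a, V) for a in actions(s)) for s in states}
    done = max(abs(newV[s] - V[s]) for s in states) < 1e-6
    V = newV
    if done:
        break

# Greedy policy from the converged values (one-step look-ahead).
policy = {s: max(actions(s), key=lambda a: q_value(s, a, V)) for s in states}
print(V)
print(policy)
```

In runs of this sketch, the greedy policy extracted at the end typically stops changing well before the values themselves settle, which is the observation behind policy iteration later in these notes.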
Convergence*
- Define the max-norm: ||V|| = max_s |V(s)|.
- Theorem: for any two approximations U and V, one Bellman update brings them closer: ||U_{i+1} - V_{i+1}|| ≤ γ ||U_i - V_i||. I.e., any two distinct approximations must get closer to each other, so, in particular, any approximation must get closer to the true utilities, and value iteration converges to a unique, stable, optimal solution.
- Theorem: if ||V_{i+1} - V_i|| < ε, then ||V_{i+1} - V*|| < εγ / (1 - γ). I.e., once the change in our approximation is small, it must also be close to correct.

Policy Iteration
- Alternate approach:
  - Policy evaluation: calculate utilities for a fixed policy until convergence (remember the beginning of lecture).
  - Policy improvement: update the policy based on the resulting converged utilities.
  - Repeat until the policy converges.
- This is policy iteration. It can converge faster under some conditions.

Policy Iteration
- If we have a fixed policy π, use the simplified Bellman equation to calculate utilities:
  V^π(s) = Σ_{s'} T(s,π(s),s') [ R(s,π(s),s') + γ V^π(s') ]
- For fixed utilities, it is easy to find the best action according to a one-step look-ahead.

Comparison
- In value iteration: every pass (or "backup") updates both utilities (explicitly, based on current utilities) and the policy (possibly implicitly, based on the current policy).
- In policy iteration: several passes update utilities with a frozen policy, and occasional passes update the policy.
- Hybrid approaches (asynchronous policy iteration): any sequence of partial updates to either policy entries or utilities will converge if every state is visited infinitely often.

Reinforcement Learning
- Reinforcement learning: we still have an MDP:
  - A set of states s ∈ S
  - A model T(s,a,s')
  - A reward function R(s)
- We are still looking for a policy π(s).
- New twist: we don't know T or R, i.e. we don't know which states are good or what the actions do. We must actually try actions and states out to learn.

Example: Animal Learning
- RL has been studied experimentally for more than 60 years in psychology.
- Rewards: food, pain, hunger, drugs, etc. The mechanisms and their sophistication are debated.
- Example: foraging. Bees learn a near-optimal foraging plan in a field of artificial flowers with controlled nectar supplies. Bees have a direct neural connection from nectar intake measurement to motor planning areas.

Example: Backgammon
- Reward only for win/loss in terminal states, zero otherwise.
- TD-Gammon learns a function approximation to U(s) using a neural network.
- Combined with depth-3 search, it was one of the top 3 players in the world.
- You could imagine training Pacman this way... but it's tricky!

Passive Learning
- Simplified task: you don't know the transitions T(s,a,s') or the rewards R(s,a,s'), but you are given a policy π(s).
- Goal: learn the state values (and maybe the model).
- In this case there is no choice about what actions to take: just execute the policy and learn from experience. We'll get to the general case soon.

Example: Direct Estimation
- Episodes (gridworld states given as (x,y) coordinates; a short code sketch of this averaging follows below):
  (1,1) up -1
  (1,2) up -1
  (1,2) up -1
  (1,3) right
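To make the direct estimation idea concrete, here is a minimal Python sketch that averages the observed discounted returns following each state over a set of recorded episodes. Everything in it (the state names, the episode data, and the discount of 0.9) is invented for illustration and is not the gridworld data from the slide.

```python
from collections import defaultdict

gamma = 0.9

# Each episode is a list of (state, reward received on that step) pairs,
# recorded while simply executing the given fixed policy.
episodes = [
    [("A", -1.0), ("B", -1.0), ("C", 10.0)],
    [("A", -1.0), ("A", -1.0), ("B", -1.0), ("C", 10.0)],
]

observed_returns = defaultdict(list)
for episode in episodes:
    # Walk the episode backwards so G accumulates the discounted future reward.
    G = 0.0
    for state, reward in reversed(episode):
        G = reward + gamma * G
        observed_returns[state].append(G)

# Direct estimate: the sample mean of the returns observed from each state.
V = {s: sum(gs) / len(gs) for s, gs in observed_returns.items()}
print(V)
```

Each state's estimate is just the sample mean of the returns seen after visiting it, which matches the passive setting above: no choice about actions, just execute the policy and learn from the resulting experience.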

