Berkeley COMPSCI 188 - Lecture 10: MDPs

CS 188: Artificial Intelligence, Fall 2008
Lecture 10: MDPs
9/30/2008
Dan Klein – UC Berkeley
Many slides over the course adapted from either Stuart Russell or Andrew Moore

Recap: MDPs
- Markov decision processes:
  - States S
  - Actions A
  - Transitions P(s'|s,a) (or T(s,a,s'))
  - Rewards R(s,a,s') (and discount γ)
  - Start state s0
- Quantities:
  - Policy = map of states to actions
  - Episode = one run of an MDP
  - Returns = sum of discounted rewards
  - Values = expected future returns from a state
  - Q-values = expected future returns from a q-state
[DEMO – Grid Values]

Optimal Utilities
- Fundamental operation: compute the optimal utilities of all states s
- Why? Optimal values define optimal policies!
- Define the utility of a state s: V*(s) = expected return starting in s and acting optimally
- Define the utility of a q-state (s,a): Q*(s,a) = expected return starting in s, taking action a, and thereafter acting optimally
- Define the optimal policy: π*(s) = optimal action from state s

The Bellman Equations
- The definition of utility leads to a simple one-step lookahead relationship among optimal utility values: optimal rewards = maximize over the first action and then follow the optimal policy
- Formally:
  V*(s) = max_a Q*(s,a)
  Q*(s,a) = Σ_{s'} T(s,a,s') [ R(s,a,s') + γ V*(s') ]

Practice: Computing Actions
- Which action should we choose from state s?
  - Given optimal values V: π*(s) = argmax_a Σ_{s'} T(s,a,s') [ R(s,a,s') + γ V*(s') ]
  - Given optimal q-values Q: π*(s) = argmax_a Q*(s,a)
- Lesson: actions are easier to select from Q's!
[DEMO – Grid Policies]

Value Estimates
- Calculate estimates Vk*(s):
  - Not the optimal value of s!
  - The optimal value considering only the next k time steps (k rewards)
  - As k → ∞, it approaches the optimal value
- Why:
  - If discounting, distant rewards become negligible
  - If terminal states are reachable from everywhere, the fraction of episodes that never end becomes negligible
  - Otherwise, we can get infinite expected utility, and then this approach won't work
[DEMO – Vk]

Memoized Recursion?
- Recurrences: compute V*(s) and Q*(s,a) top-down from the Bellman equations above
- Cache all function call results so you never repeat work
- What happened to the evaluation function?

Value Iteration
- Problems with the recursive computation:
  - Have to keep all the Vk*(s) around all the time
  - Don't know which depth πk(s) to ask for when planning
- Solution: value iteration
  - Calculate values for all states, bottom-up
  - Keep increasing k until convergence

Value Iteration
- Idea:
  - Start with V0*(s) = 0, which we know is right (why?)
  - Given Vi*, calculate the values for all states for depth i+1:
    V_{i+1}*(s) ← max_a Σ_{s'} T(s,a,s') [ R(s,a,s') + γ V_i*(s') ]
  - Throw out the old vector Vi*
  - This is called a value update or Bellman update
  - Repeat until convergence
- Theorem: will converge to unique optimal values
  - Basic idea: approximations get refined towards the optimal values
  - The policy may converge long before the values do

Example: Bellman Updates
- (figure-only slide in the original deck)

Example: Value Iteration
- (figure: value estimates V2 and V3 on the grid world)
- Information propagates outward from terminal states, and eventually all states have correct value estimates
[DEMO]

Convergence*
- Define the max-norm: ||V|| = max_s |V(s)|
- Theorem: for any two approximations U and V, one Bellman update brings them closer: ||U_{i+1} − V_{i+1}|| ≤ γ ||U_i − V_i||
  - I.e., any two distinct approximations must get closer to each other, so, in particular, any approximation must get closer to the true U, and value iteration converges to a unique, stable, optimal solution
- Theorem: once the change in our approximation across one update is small (in max-norm), the approximation must also be close to correct
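The value iteration loop above is compact enough to sketch directly in code. The following is a minimal Python sketch over a generic tabular MDP; the interface (states, actions(s), T(s, a) returning (next state, probability) pairs, and R(s, a, s')) is a hypothetical assumption for illustration, not something defined in the lecture.

def value_iteration(states, actions, T, R, gamma=0.9, epsilon=1e-6):
    """Repeat Bellman updates on all states until the max-norm change is tiny.

    Hypothetical interface (not from the lecture):
      states      -- iterable of states
      actions(s)  -- non-empty list of actions in s (model terminal states
                     as a zero-reward self-loop so the max below is defined)
      T(s, a)     -- list of (s_next, probability) pairs
      R(s, a, s2) -- immediate reward for the transition s -> s2 under a
    """
    V = {s: 0.0 for s in states}              # V_0*(s) = 0 for all s
    while True:
        new_V = {}
        for s in states:
            # Bellman update: best action's expected reward plus discounted future value
            new_V[s] = max(
                sum(p * (R(s, a, s2) + gamma * V[s2]) for s2, p in T(s, a))
                for a in actions(s)
            )
        if max(abs(new_V[s] - V[s]) for s in states) < epsilon:
            return new_V                      # values have (approximately) converged
        V = new_V                             # throw out the old vector


def greedy_policy(states, actions, T, R, V, gamma=0.9):
    """Recover a policy from converged values by one-step lookahead."""
    return {
        s: max(actions(s),
               key=lambda a: sum(p * (R(s, a, s2) + gamma * V[s2])
                                 for s2, p in T(s, a)))
        for s in states
    }

Stopping on the max-norm change mirrors the convergence argument above: once successive iterates stop moving, they are also close to the optimal values.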
Utilities for Fixed Policies
- Another basic operation: compute the utility of a state s under a fixed (generally non-optimal) policy
- Define the utility of a state s under a fixed policy π: Vπ(s) = expected total discounted rewards (return) starting in s and following π
- Recursive relation (one-step look-ahead / Bellman equation):
  Vπ(s) = Σ_{s'} T(s,π(s),s') [ R(s,π(s),s') + γ Vπ(s') ]
[DEMO – Right-Only Policy]

Policy Evaluation
- How do we calculate the V's for a fixed policy?
- Idea one: turn the recursive equations into updates
- Idea two: it's just a linear system; solve with Matlab (or whatever)

Policy Iteration
- Alternative approach:
  - Step 1: policy evaluation: calculate utilities for some fixed policy (not optimal utilities!) until convergence
  - Step 2: policy improvement: update the policy using one-step lookahead with the resulting converged (but not optimal!) utilities
  - Repeat steps until the policy converges
- This is policy iteration
  - It's still optimal!
  - Can converge faster under some conditions

Policy Iteration
- Policy evaluation: with the current policy π fixed, find the values with simplified Bellman updates:
  V_{i+1}π(s) ← Σ_{s'} T(s,π(s),s') [ R(s,π(s),s') + γ V_iπ(s') ]
  - Iterate until the values converge
- Policy improvement: with the utilities fixed, find the best action according to a one-step look-ahead:
  π_new(s) = argmax_a Σ_{s'} T(s,a,s') [ R(s,a,s') + γ Vπ(s') ]
- (A code sketch of this evaluation/improvement loop appears below.)

Comparison
- In value iteration:
  - Every pass (or "backup") updates both the utilities (explicitly, based on current utilities) and the policy (possibly implicitly, based on the current policy)
- In policy iteration:
  - Several passes update utilities with a frozen policy
  - Policy evaluation passes are faster than value iteration passes (why?)
  - Occasional passes update the policy
- Hybrid approaches (asynchronous policy iteration):
  - Any sequence of partial updates to either policy entries or utilities will converge if every state is visited infinitely often

Reinforcement Learning
- Reinforcement learning:
  - Still have an MDP:
    - A set of states s ∈ S
    - A set of actions (per state) A
    - A model T(s,a,s')
    - A reward function R(s,a,s')
  - Still looking for a policy π(s)
- New twist: we don't know T or R
  - I.e., we don't know which states are good or what the actions do
  - Must actually try out actions and states to learn
[DEMO]

Example: Animal Learning
- RL has been studied experimentally for more than 60 years in psychology
  - Rewards: food, pain, hunger, drugs, etc.
  - Mechanisms and sophistication debated
- Example: foraging
  - Bees learn a near-optimal foraging plan in a field of artificial flowers with controlled nectar supplies
  - Bees have a direct neural connection from the nectar intake measurement to the motor planning area

Example: Backgammon
- Reward only for win / loss in terminal states, zero otherwise
- TD-Gammon learns a function approximation to V(s) using a neural network
- Combined with depth-3 search, one of the top 3 players in the world
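The policy iteration slides above translate into a similarly short sketch. This reuses the same hypothetical MDP interface as the value iteration sketch earlier (states, actions(s), T(s, a), R(s, a, s2), discount gamma); it is an illustrative sketch, not code from the course.

def policy_evaluation(policy, states, T, R, gamma=0.9, epsilon=1e-6):
    """Values of a fixed policy via simplified Bellman updates (no max over actions)."""
    V = {s: 0.0 for s in states}
    while True:
        new_V = {
            s: sum(p * (R(s, policy[s], s2) + gamma * V[s2])
                   for s2, p in T(s, policy[s]))
            for s in states
        }
        if max(abs(new_V[s] - V[s]) for s in states) < epsilon:
            return new_V
        V = new_V


def policy_iteration(states, actions, T, R, gamma=0.9):
    """Alternate evaluating the current policy with greedy one-step improvement."""
    policy = {s: actions(s)[0] for s in states}       # arbitrary initial policy
    while True:
        V = policy_evaluation(policy, states, T, R, gamma)       # Step 1: evaluation
        improved = {                                              # Step 2: improvement
            s: max(actions(s),
                   key=lambda a: sum(p * (R(s, a, s2) + gamma * V[s2])
                                     for s2, p in T(s, a)))
            for s in states
        }
        if improved == policy:                        # policy stopped changing: done
            return policy, V
        policy = improved

As the Comparison slide notes, each evaluation pass is cheaper than a value iteration pass because there is no max over actions, and the evaluation step could equally be done by solving the linear system directly.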

