Berkeley COMPSCI 188 - Lecture 9: MDPs

CS 188: Artificial Intelligence, Fall 2006
Lecture 9: MDPs
9/26/2006
Dan Klein – UC Berkeley

Contents: Reinforcement Learning · Markov Decision Processes · Example: High-Low · High-Low · MDP Solutions · Example Optimal Policies · Stationarity · Infinite Utilities?! · How (Not) to Solve an MDP · Utility of a State · Policy Evaluation · Example: GridWorld · Q-Functions · Optimal Utilities · Practice: Computing Actions · The Bellman Equations · Value Iteration · Example: Bellman Updates · Example: Value Iteration · Convergence* · Policy Iteration · Comparison · Next Class

Reinforcement Learning [DEMOS]
Basic idea:
- Receive feedback in the form of rewards
- The agent's utility is defined by the reward function
- Must learn to act so as to maximize expected rewards
- Change the rewards, change the behavior
Examples:
- Playing a game: reward at the end for winning / losing
- Vacuuming a house: reward for each piece of dirt picked up
- Automated taxi: reward for each passenger delivered

Markov Decision Processes
A Markov decision process (MDP) consists of:
- A set of states s ∈ S
- A model T(s, a, s') = P(s' | s, a): the probability that action a taken in state s leads to s'
- A reward function R(s, a, s') (sometimes just R(s) for leaving a state, or R(s') for entering one)
- A start state (or distribution)
- Maybe a terminal state
MDPs are the simplest case of reinforcement learning; in general reinforcement learning, we don't know the model or the reward function.

Example: High-Low
- Three card types: 2, 3, 4
- Infinite deck, twice as many 2's
- Start with 3 showing
- After each card, you say "high" or "low", and a new card is flipped
- If you're right, you win the points shown on the new card
- Ties are no-ops
- If you're wrong, the game ends

High-Low
- States: 2, 3, 4, done
- Actions: High, Low
- Model T(s, a, s'), e.g. from state 4:
  P(s'=done | 4, High) = 3/4    P(s'=done | 4, Low) = 0
  P(s'=2    | 4, High) = 0      P(s'=2    | 4, Low) = 1/2
  P(s'=3    | 4, High) = 0      P(s'=3    | 4, Low) = 1/4
  P(s'=4    | 4, High) = 1/4    P(s'=4    | 4, Low) = 1/4
  ...
- Rewards R(s, a, s'): the number shown on s' if s ≠ s', 0 otherwise
- Start state: 3
Note: we could choose actions with search. How?
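To make the model concrete, here is a minimal Python sketch of the High-Low transition and reward functions described above. The names CARD_PROBS, transitions, and reward are illustrative choices, not from the lecture; the probabilities follow directly from an infinite deck with twice as many 2's as 3's or 4's.

```python
# Card distribution: infinite deck, twice as many 2's as 3's or 4's.
CARD_PROBS = {2: 0.5, 3: 0.25, 4: 0.25}

def transitions(s, a):
    """Return {s': P(s' | s, a)} for the High-Low MDP, with s in {2, 3, 4} and a in {'High', 'Low'}."""
    probs = {}
    for card, p in CARD_PROBS.items():
        if card == s:
            nxt = s            # tie: no-op, the same card stays showing
        elif (card > s) == (a == 'High'):
            nxt = card         # correct guess: the new card becomes the shown card
        else:
            nxt = 'done'       # wrong guess: the game ends
        probs[nxt] = probs.get(nxt, 0.0) + p
    return probs

def reward(s, a, s_next):
    """R(s, a, s') = points shown on the new card when the guess is correct, 0 otherwise."""
    return s_next if (s_next != 'done' and s_next != s) else 0
```

As a quick check, transitions(4, 'High') returns {'done': 0.75, 4: 0.25}, matching the probabilities listed for state 4 above.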
MDP Solutions
- In deterministic single-agent search, we want an optimal sequence of actions from the start to a goal
- In an MDP, as in expectimax, we want an optimal policy π(s)
- A policy gives an action for each state
- An optimal policy maximizes expected utility (i.e. expected rewards) if followed
- It defines a reflex agent
[Figure: optimal GridWorld policy when R(s, a, s') = -0.04 for all non-terminal states s]

Example Optimal Policies
[Figure: four GridWorld policies, one for each of R(s) = -2.0, -0.4, -0.03, -0.01]

Stationarity
- To formalize optimality of a policy, we need to understand utilities of reward sequences
- Typically we consider stationary preferences:
  [r, r_1, r_2, ...] ≻ [r, r_1', r_2', ...]  ⇔  [r_1, r_2, ...] ≻ [r_1', r_2', ...]
- Theorem: there are only two ways to define stationary utilities
  - Additive utility: U([r_0, r_1, r_2, ...]) = r_0 + r_1 + r_2 + ...
  - Discounted utility: U([r_0, r_1, r_2, ...]) = r_0 + γ r_1 + γ² r_2 + ...
- Assuming that reward depends only on state for these slides!

Infinite Utilities?!
Problem: infinite state sequences with infinite rewards
Solutions:
- Finite horizon: terminate after a fixed T steps; gives a nonstationary policy (π depends on the time left)
- Absorbing state(s): guarantee that for every policy, the agent will eventually "die" (like "done" for High-Low)
- Discounting: for 0 < γ < 1; smaller γ means a smaller effective horizon

How (Not) to Solve an MDP
The inefficient way:
- Enumerate policies
- For each one, calculate the expected utility (discounted rewards) from the start state, e.g. by simulating a bunch of runs
- Choose the best policy
This might actually be reasonable for High-Low... We'll return to a (better) idea like this later.

Utility of a State
Define the utility of a state under a policy:
- V^π(s) = expected total (discounted) rewards starting in s and following π
- Recursive definition (one-step look-ahead): V^π(s) = Σ_{s'} T(s, π(s), s') [ R(s, π(s), s') + γ V^π(s') ]

Policy Evaluation
- Idea one: turn the recursive equations into updates: V^π_{k+1}(s) ← Σ_{s'} T(s, π(s), s') [ R(s, π(s), s') + γ V^π_k(s') ]
- Idea two: it's just a linear system; solve with Matlab (or Mosek, or CPLEX)

Example: High-Low
Policy: always say "high"
Iterative updates: [update values shown on slide]

Example: GridWorld [DEMO]

Q-Functions
To simplify things, introduce a q-value for a state and action under a policy:
- Q^π(s, a) = utility of starting in state s, taking action a, then following π thereafter

Optimal Utilities
- Goal: calculate the optimal utility of each state: V*(s) = expected (discounted) rewards under optimal actions
- Why: given optimal utilities, MEU tells us the optimal policy

Practice: Computing Actions
Which action should we choose from state s?
- Given optimal q-values Q*: π*(s) = argmax_a Q*(s, a)
- Given optimal values V*: π*(s) = argmax_a Σ_{s'} T(s, a, s') [ R(s, a, s') + γ V*(s') ]

The Bellman Equations
The definition of utility leads to a simple relationship among optimal utility values: optimal rewards = maximize over the first action and then follow the optimal policy. Formally:
  V*(s) = max_a Σ_{s'} T(s, a, s') [ R(s, a, s') + γ V*(s') ]

Example: GridWorld

Value Iteration
Idea:
- Start with bad guesses at all utility values (e.g. V_0(s) = 0)
- Update all values simultaneously using the Bellman equation (called a value update or Bellman update):
  V_{k+1}(s) ← max_a Σ_{s'} T(s, a, s') [ R(s, a, s') + γ V_k(s') ]
- Repeat until convergence
Theorem: value iteration converges to the unique optimal values.
Basic idea: bad guesses get refined towards the optimal values. The policy may converge long before the values do.

Example: Bellman Updates

Example: Value Iteration
Information propagates outward from terminal states, and eventually all states have correct value estimates. [DEMO]

Convergence*
- Define the max-norm: ||V|| = max_s |V(s)|
- Theorem: for any two approximations U and V, ||U_{k+1} − V_{k+1}|| ≤ γ ||U_k − V_k||
  I.e. any two distinct approximations must get closer to each other, so, in particular, any approximation must get closer to the true values, and value iteration converges to a unique, stable, optimal solution
- Theorem: once the change in our approximation is small, it must also be close to correct

Policy Iteration
Alternate approach:
- Policy evaluation: calculate utilities for a fixed policy until convergence (remember the beginning of lecture)
- Policy improvement: update the policy based on the resulting converged utilities
- Repeat until the policy converges
This is policy iteration. It can converge faster under some conditions.

Policy Iteration (continued)
- If we have a fixed policy π, use the simplified Bellman equation to calculate utilities:
  V^π(s) = Σ_{s'} T(s, π(s), s') [ R(s, π(s), s') + γ V^π(s') ]
- For fixed utilities, it is easy to find the best action according to a one-step look-ahead:
  π_new(s) = argmax_a Σ_{s'} T(s, a, s') [ R(s, a, s') + γ V^π(s') ]

Comparison
- In value iteration: every pass (or "backup") updates both the policy (based on current utilities) and the utilities (based on the current policy)
- In policy iteration: several passes to update utilities, occasional passes to update the policy
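As a rough illustration of how these pieces fit together, the sketch below runs value iteration on the High-Low model and then extracts a greedy policy by one-step look-ahead. It assumes the transitions and reward helpers from the earlier snippet are in scope; the discount GAMMA = 0.9 and the tolerance are assumed values, not numbers from the lecture.

```python
GAMMA = 0.9          # discount factor (assumed; the lecture leaves gamma unspecified)
STATES = [2, 3, 4]   # 'done' is terminal and keeps value 0
ACTIONS = ['High', 'Low']

def q_value(s, a, V):
    """Expected one-step reward plus discounted value of the successor state."""
    return sum(p * (reward(s, a, s2) + GAMMA * V[s2])
               for s2, p in transitions(s, a).items())

def value_iteration(tol=1e-9):
    """Repeat Bellman updates V_{k+1}(s) = max_a q_value(s, a, V_k) until the values stop changing."""
    V = {s: 0.0 for s in STATES}
    V['done'] = 0.0
    while True:
        new_V = {'done': 0.0}
        for s in STATES:
            new_V[s] = max(q_value(s, a, V) for a in ACTIONS)
        if max(abs(new_V[s] - V[s]) for s in STATES) < tol:
            return new_V
        V = new_V

def greedy_policy(V):
    """One-step look-ahead: pick the action with the highest q-value in each state."""
    return {s: max(ACTIONS, key=lambda a: q_value(s, a, V)) for s in STATES}

V_star = value_iteration()
print(V_star)
print(greedy_policy(V_star))
```

Keeping the look-ahead in a single q_value helper is what makes both the value update and the policy extraction one-liners; a policy-iteration variant could reuse the same helper, replacing the max over actions in the update with the fixed policy's action during policy evaluation.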

