Berkeley COMPSCI 188 - MDPs

CS 188: Artificial Intelligence, Spring 2006
Lecture 21: MDPs (4/6/2006)
Dan Klein – UC Berkeley

Outline: Reinforcement Learning; Markov Decision Processes; MDP Solutions; Example Optimal Policies; Stationarity; How (Not) to Solve an MDP; Utilities of States; Infinite Utilities?!; The Bellman Equation; Example: Bellman Equations; Value Iteration; Example: Bellman Updates; Example: Value Iteration; Convergence*; Policy Iteration; Policy Evaluation; Policy Improvement; Comparison; Next Class

Reinforcement Learning [Demos]
- Basic idea:
  - Receive feedback in the form of rewards
  - Must learn to act so as to maximize expected rewards
  - The agent's utility is defined by the reward function
  - Change the rewards, change the behavior!
- Examples:
  - Playing a game: reward at the end for winning / losing
  - Vacuuming a house: reward for each piece of dirt picked up
  - Automated taxi: reward for each passenger delivered

Markov Decision Processes
- A Markov decision process (MDP) consists of:
  - A set of states s ∈ S
  - A model T(s,a,s') = P(s' | s,a), the probability that action a in state s leads to s'
  - A reward function R(s) (or R(s,a,s'))
- MDPs are the simplest case of reinforcement learning
- In general reinforcement learning, we don't know the model or the reward function

MDP Solutions
- In state-space search, we want an optimal sequence of actions from the start to a goal
- In an MDP, we want an optimal policy π(s)
  - A policy gives an action for each state
  - The optimal policy is the one that maximizes expected utility (i.e., expected rewards) if followed
  - Gives a reflex agent!
- Optimal policy when R(s) = -0.04 (grid-world figure in the slides)

Example Optimal Policies
- Grid-world figures showing the optimal policy for R(s) = -2.0, R(s) = -0.4, R(s) = -0.03, and R(s) = -0.01

Stationarity
- To formalize optimality of a policy, we need to understand utilities of reward sequences
- Typically we consider stationary preferences: if [r, r_0, r_1, r_2, ...] is preferred to [r, r_0', r_1', r_2', ...], then [r_0, r_1, r_2, ...] is preferred to [r_0', r_1', r_2', ...]
- Theorem: there are only two ways to define stationary utilities
  - Additive utility: U([r_0, r_1, r_2, ...]) = r_0 + r_1 + r_2 + ...
  - Discounted utility: U([r_0, r_1, r_2, ...]) = r_0 + γ r_1 + γ² r_2 + ...

How (Not) to Solve an MDP
- The inefficient way:
  - Enumerate policies
  - Calculate the expected utility (discounted rewards) of each, starting from the start state, e.g. by simulating a bunch of runs
  - Choose the best policy
- We'll return to a (better) idea like this later

Utilities of States
- Idea: calculate the utility (value) of each state
- U(s) = expected (discounted) sum of rewards assuming optimal actions
- Given the utilities of states, MEU (maximum expected utility) tells us the optimal policy

Infinite Utilities?!
- Problem: infinite state sequences with infinite rewards
- Solutions:
  - Finite horizon: terminate after a fixed T steps; gives a nonstationary policy (π depends on the time left)
  - Absorbing state(s): guarantee that for every policy, the agent will eventually "die"
  - Discounting, with 0 < γ < 1: U([r_0, r_1, r_2, ...]) = Σ_{t ≥ 0} γ^t r_t ≤ R_max / (1 - γ); a smaller γ means a smaller effective horizon

The Bellman Equation
- The definition of state utility leads to a simple relationship among utility values:
  expected rewards = current reward + γ × expected sum of rewards after taking the best action
- Formally: U(s) = R(s) + γ max_a Σ_{s'} T(s,a,s') U(s')

Example: Bellman Equations
- (Worked example in the slides, not included in this text preview)

Value Iteration
- Idea:
  - Start with bad guesses at the utility values (e.g. U_0(s) = 0)
  - Update using the Bellman equation (called a value update or Bellman update):
    U_{k+1}(s) ← R(s) + γ max_a Σ_{s'} T(s,a,s') U_k(s')
  - Repeat until convergence
- Theorem: value iteration converges to the unique optimal values
- Basic idea: bad guesses get refined towards the optimal values
- The policy may converge before the values do
- A minimal code sketch of this update appears after the example slides below

Example: Bellman Updates
- (Worked example in the slides, not included in this text preview)

Example: Value Iteration
- Information propagates outward from the terminal states, and eventually all states have correct value estimates [DEMO]
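The Bellman update above is easy to turn into code. The following is a minimal value iteration sketch in Python; the MDP interface it assumes (mdp.states, mdp.actions(s), mdp.transitions(s, a) returning (probability, next state) pairs, and mdp.reward(s)) is made up for illustration and is not the course's project API.

```python
# Minimal value iteration sketch (illustration only; the MDP interface below
# -- states, actions(s), transitions(s, a), reward(s) -- is assumed, not the
# CS 188 project API).

def value_iteration(mdp, gamma=0.9, epsilon=0.001):
    """Return a dict mapping each state to its estimated utility, using
    Bellman updates U_{k+1}(s) = R(s) + gamma * max_a sum_s' T(s,a,s') U_k(s')."""
    U = {s: 0.0 for s in mdp.states}          # start with bad guesses: U_0(s) = 0
    while True:
        new_U = {}
        delta = 0.0                           # largest change this sweep (max-norm)
        for s in mdp.states:
            actions = mdp.actions(s)
            if not actions:                   # terminal / absorbing state
                new_U[s] = mdp.reward(s)
            else:
                new_U[s] = mdp.reward(s) + gamma * max(
                    sum(p * U[s2] for p, s2 in mdp.transitions(s, a))
                    for a in actions)
            delta = max(delta, abs(new_U[s] - U[s]))
        U = new_U
        # Stop once the change is small; by the convergence bound discussed
        # below, the estimates are then within epsilon of the true utilities.
        if delta < epsilon * (1 - gamma) / gamma:
            return U

def best_policy(mdp, U, gamma=0.9):
    """Extract the MEU policy: in each state, pick the action with the best
    one-step lookahead value under U."""
    pi = {}
    for s in mdp.states:
        actions = mdp.actions(s)
        pi[s] = max(actions, key=lambda a: sum(
            p * U[s2] for p, s2 in mdp.transitions(s, a))) if actions else None
    return pi
```

Note that the extracted policy often stops changing before the utility values themselves have fully converged, which is the observation that motivates policy iteration below.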
Convergence*
- Define the max-norm: ||U|| = max_s |U(s)|
- Theorem: for any two approximations U and V, ||U_{k+1} - V_{k+1}|| ≤ γ ||U_k - V_k||
  - I.e., any two distinct approximations must get closer to each other, so in particular any approximation must get closer to the true U, and value iteration converges to a unique, stable, optimal solution
- Theorem: if ||U_{k+1} - U_k|| < ε (1 - γ) / γ, then ||U_{k+1} - U|| < ε
  - I.e., once the change in our approximation is small, it must also be close to correct

Policy Iteration
- An alternative approach:
  - Policy evaluation: calculate the utilities for a fixed policy
  - Policy improvement: update the policy based on the resulting utilities
  - Repeat until convergence
- This is policy iteration
- It can converge faster under some conditions
- A minimal code sketch appears at the end of these notes

Policy Evaluation
- If we have a fixed policy π, we can use a simplified Bellman equation (no max over actions) to calculate the utilities:
  U^π(s) = R(s) + γ Σ_{s'} T(s, π(s), s') U^π(s')

Policy Improvement
- For fixed utilities, it is easy to find the best action according to a one-step lookahead:
  π(s) = argmax_a Σ_{s'} T(s, a, s') U(s')

Comparison
- In value iteration:
  - Every pass (or "backup") updates both the policy (based on the current utilities) and the utilities (based on the current policy)
- In policy iteration:
  - Several passes update the utilities
  - Occasional passes update the policy
- Hybrid approaches (asynchronous policy iteration):
  - Any sequence of partial updates to either policy entries or utilities will converge if every state is visited infinitely often

Next Class
- In real reinforcement learning:
  - We don't know the reward function R(s)
  - We don't know the model T(s,a,s')
  - So we can't do Bellman updates!
- We need new techniques:
  - Q-learning
  - Model learning
- Agents actually have to interact with the environment rather than simulate it
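To make the policy evaluation / policy improvement loop above concrete, here is a minimal policy iteration sketch in Python, a companion to the value iteration sketch earlier. It uses the same assumed MDP interface (states, actions(s), transitions(s, a), reward(s)), and it approximates U^π with repeated sweeps of the simplified Bellman equation rather than solving the linear system exactly.

```python
# Minimal policy iteration sketch (illustration only; same assumed MDP
# interface as the value iteration sketch above).

def policy_evaluation(pi, U, mdp, gamma=0.9, sweeps=20):
    """Approximate U^pi with repeated sweeps of the simplified Bellman
    equation (no max): U^pi(s) = R(s) + gamma * sum_s' T(s,pi(s),s') U^pi(s')."""
    for _ in range(sweeps):
        new_U = {}
        for s in mdp.states:
            if pi[s] is None:                 # terminal / absorbing state
                new_U[s] = mdp.reward(s)
            else:
                new_U[s] = mdp.reward(s) + gamma * sum(
                    p * U[s2] for p, s2 in mdp.transitions(s, pi[s]))
        U = new_U
    return U

def policy_iteration(mdp, gamma=0.9):
    """Alternate policy evaluation and greedy policy improvement until the
    policy is stable."""
    U = {s: 0.0 for s in mdp.states}
    # Start from an arbitrary policy; None marks states with no actions.
    pi = {s: (mdp.actions(s)[0] if mdp.actions(s) else None) for s in mdp.states}
    while True:
        U = policy_evaluation(pi, U, mdp, gamma)
        changed = False
        for s in mdp.states:
            actions = mdp.actions(s)
            if not actions:
                continue
            # Policy improvement: one-step lookahead under the current utilities.
            best = max(actions, key=lambda a: sum(
                p * U[s2] for p, s2 in mdp.transitions(s, a)))
            if best != pi[s]:
                pi[s] = best
                changed = True
        if not changed:                       # stable policy: stop
            return pi, U
```

The loop structure mirrors the Comparison slide: several evaluation sweeps update the utilities under a frozen policy, and only the occasional improvement pass changes the policy itself.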

