Berkeley COMPSCI 188 - Lecture Notes

CS 188: Artificial Intelligence
Fall 2006
Lecture 9: MDPs
9/26/2006
Dan Klein – UC Berkeley

Reinforcement Learning [DEMOS]
- Basic idea:
  - Receive feedback in the form of rewards
  - The agent's utility is defined by the reward function
  - Must learn to act so as to maximize expected rewards
  - Change the rewards, and you change the behavior
- Examples:
  - Playing a game: reward at the end for winning / losing
  - Vacuuming a house: reward for each piece of dirt picked up
  - Automated taxi: reward for each passenger delivered

Markov Decision Processes
- A Markov decision process (MDP) consists of:
  - A set of states s ∈ S
  - A model T(s,a,s') = P(s' | s,a): the probability that action a in state s leads to s'
  - A reward function R(s, a, s') (sometimes just R(s) for leaving a state, or R(s') for entering one)
  - A start state (or start distribution)
  - Maybe a terminal state
- MDPs are the simplest case of reinforcement learning
  - In general reinforcement learning, we don't know the model or the reward function

Example: High-Low
- Three card types: 2, 3, 4
- Infinite deck, twice as many 2's
- Start with a 3 showing
- After each card, you say "high" or "low", and a new card is flipped
- If you're right, you win the points shown on the new card
- Ties are no-ops
- If you're wrong, the game ends

High-Low
- States: 2, 3, 4, done
- Actions: High, Low
- Model T(s, a, s'):
  - P(s'=done | 4, High) = 3/4
  - P(s'=2 | 4, High) = 0
  - P(s'=3 | 4, High) = 0
  - P(s'=4 | 4, High) = 1/4
  - P(s'=done | 4, Low) = 0
  - P(s'=2 | 4, Low) = 1/2
  - P(s'=3 | 4, Low) = 1/4
  - P(s'=4 | 4, Low) = 1/4
  - …
- Rewards R(s, a, s'):
  - The number shown on s' if s ≠ s'
  - 0 otherwise
- Start state: 3
- Note: we could choose actions with search. How?

MDP Solutions
- In deterministic single-agent search, we want an optimal sequence of actions from the start to a goal
- In an MDP, as in expectimax, we want an optimal policy π(s)
  - A policy gives an action for each state
  - An optimal policy maximizes expected utility (i.e. expected rewards) if followed
  - It defines a reflex agent
[Figure: optimal GridWorld policy when R(s, a, s') = -0.04 for all non-terminal states s]

Example: Optimal Policies
[Figure: optimal GridWorld policies for R(s) = -2.0, R(s) = -0.4, R(s) = -0.03, and R(s) = -0.01]

Stationarity
- To formalize the optimality of a policy, we need to understand utilities of reward sequences
- Typically we assume stationary preferences: if [r, r_1, r_2, …] is preferred to [r, r_1', r_2', …], then [r_1, r_2, …] is preferred to [r_1', r_2', …]
- Theorem: there are only two ways to define stationary utilities
  - Additive utility: U([r_0, r_1, r_2, …]) = r_0 + r_1 + r_2 + …
  - Discounted utility: U([r_0, r_1, r_2, …]) = r_0 + γ r_1 + γ^2 r_2 + …
- (Assuming that reward depends only on the state for these slides!)

Infinite Utilities?!
- Problem: infinite state sequences can have infinite rewards
- Solutions:
  - Finite horizon: terminate after a fixed T steps; gives a nonstationary policy (π depends on the time left)
  - Absorbing state(s): guarantee that for every policy the agent will eventually "die" (like "done" in High-Low)
  - Discounting: for 0 < γ < 1, U([r_0, r_1, …]) = Σ_t γ^t r_t ≤ R_max / (1 - γ)
  - A smaller γ means a shorter effective horizon

How (Not) to Solve an MDP
- The inefficient way:
  - Enumerate all policies
  - For each one, estimate the expected utility (discounted rewards) from the start state, e.g. by simulating a bunch of runs (see the sketch just below)
  - Choose the best policy
- Might actually be reasonable for High-Low…
- We'll return to a (better) idea like this later
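To make the model concrete, here is a minimal Python sketch of High-Low and of the simulate-and-compare idea above. The encoding is an assumption for illustration (a `transitions(s, a)` helper returning (s', probability, reward) triples, the string 'done' for the terminal state, and zero reward on a wrong guess); it is not code from the course.

```python
import random

# A minimal sketch of the High-Low MDP described above.  The encoding here
# (a transitions() helper, 'done' as the terminal state, zero reward on a
# wrong guess) is an illustrative assumption, not taken from the slides.
# States are the card currently showing (2, 3, 4) plus the terminal 'done'.
# The infinite deck has twice as many 2's: P(2) = 1/2, P(3) = P(4) = 1/4.
CARD_DIST = [(2, 0.50), (3, 0.25), (4, 0.25)]

def transitions(s, a):
    """Return (s', probability, reward) triples for state s and action a.

    Saying 'high' ('low') wins the points on the new card if it is higher
    (lower) than s; a tie is a no-op; a wrong guess ends the game ('done').
    """
    out = []
    for card, p in CARD_DIST:
        if card == s:                        # tie: no-op, no reward
            out.append((card, p, 0))
        elif (card > s) == (a == 'high'):    # correct guess: win the card's value
            out.append((card, p, card))
        else:                                # wrong guess: game over
            out.append(('done', p, 0))
    return out

# Sanity check against the slide: P(s'=done | s=4, High) = 3/4.
assert sum(p for s2, p, r in transitions(4, 'high') if s2 == 'done') == 0.75

def simulate(policy, start=3, gamma=1.0, max_steps=100):
    """The 'inefficient way': roll out one episode, return its (discounted) return.

    Undiscounted returns (gamma = 1) are fine here because 'done' is absorbing.
    """
    s, total, discount = start, 0.0, 1.0
    for _ in range(max_steps):
        if s == 'done':
            break
        outcomes = transitions(s, policy(s))
        s2, _, r = random.choices(outcomes, weights=[p for _, p, _ in outcomes])[0]
        total += discount * r
        s, discount = s2, discount * gamma
    return total

def always_high(s):
    """A fixed policy to evaluate: always say 'high'."""
    return 'high'

samples = [simulate(always_high) for _ in range(10_000)]
print('Estimated value of "always high" from start state 3:',
      sum(samples) / len(samples))
```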
Utility of a State
- Define the utility of a state under a policy:
  V^π(s) = expected total (discounted) rewards starting in s and following π
- Recursive definition (one-step look-ahead):
  V^π(s) = Σ_{s'} T(s, π(s), s') [ R(s, π(s), s') + γ V^π(s') ]

Policy Evaluation
- Idea one: turn the recursive equation into updates:
  V^π_{k+1}(s) ← Σ_{s'} T(s, π(s), s') [ R(s, π(s), s') + γ V^π_k(s') ]
- Idea two: it's just a linear system, solve with Matlab (or Mosek, or Cplex)

Example: High-Low
- Policy: always say "high"
- Iterative updates: [worked on the slide, not shown in this extract]

Example: GridWorld [DEMO]

Q-Functions
- To simplify things, introduce a q-value for a state and action under a policy:
  Q^π(s, a) = the utility of starting in state s, taking action a, and then following π thereafter
  Q^π(s, a) = Σ_{s'} T(s, a, s') [ R(s, a, s') + γ V^π(s') ]

Optimal Utilities
- Goal: calculate the optimal utility of each state:
  V*(s) = expected (discounted) rewards with optimal actions
- Why: given the optimal utilities, MEU (maximum expected utility) tells us the optimal policy

Practice: Computing Actions
- Which action should we choose from state s?
  - Given the optimal q-values Q*: π*(s) = argmax_a Q*(s, a)
  - Given the optimal values V*: π*(s) = argmax_a Σ_{s'} T(s, a, s') [ R(s, a, s') + γ V*(s') ]

The Bellman Equations
- The definition of utility leads to a simple relationship among the optimal utility values:
  optimal rewards = maximize over the first action and then follow the optimal policy
- Formally:
  V*(s) = max_a Σ_{s'} T(s, a, s') [ R(s, a, s') + γ V*(s') ]

Example: GridWorld

Value Iteration
- Idea:
  - Start with bad guesses at all utility values (e.g. V_0(s) = 0)
  - Update all values simultaneously using the Bellman equation (called a value update or Bellman update):
    V_{k+1}(s) ← max_a Σ_{s'} T(s, a, s') [ R(s, a, s') + γ V_k(s') ]
  - Repeat until convergence
- Theorem: value iteration converges to the unique optimal values
  - Basic idea: bad guesses get refined towards the optimal values
  - The policy may converge long before the values do
- (A short code sketch of value iteration and greedy policy extraction appears at the end of these notes.)

Example: Bellman Updates

Example: Value Iteration
- Information propagates outward from the terminal states, and eventually all states have correct value estimates [DEMO]

Convergence*
- Define the max-norm: ||U|| = max_s |U(s)|
- Theorem: for any two approximations U and V, ||U_{k+1} - V_{k+1}|| ≤ γ ||U_k - V_k||
  - I.e. any two distinct approximations must get closer to each other; in particular, any approximation must get closer to the true values, so value iteration converges to a unique, stable, optimal solution
- Theorem: ||U_{k+1} - U*|| ≤ (γ / (1 - γ)) ||U_{k+1} - U_k||
  - I.e. once the change in our approximation is small, it must also be close to correct

Policy Iteration
- Alternate approach:
  - Policy evaluation: calculate the utilities for a fixed policy until convergence (as at the beginning of the lecture)
  - Policy improvement: update the policy based on the resulting converged utilities
  - Repeat until the policy converges
- This is policy iteration
- It can converge faster under some conditions

Policy Iteration
- If we have a fixed policy π, use the simplified Bellman equation to calculate its utilities:
  V^π(s) = Σ_{s'} T(s, π(s), s') [ R(s, π(s), s') + γ V^π(s') ]
- For fixed utilities, it is easy to find the best action with a one-step look-ahead

Comparison
- In value iteration:
  - Every pass (or "backup") updates both the policy (based on the current utilities) and the utilities (based on the current policy)
- In policy iteration:
  - Several passes update the utilities
  - Occasional passes update the policy
- Hybrid approaches (asynchronous policy iteration):
  - Any sequence of partial updates to either policy entries or utilities will converge if every state is visited infinitely often

Next Class
- In real reinforcement learning:
  - We don't know the reward function R(s,a,s')
  - We don't know the model T(s,a,s')
  - So we can't do Bellman updates
  - We need …
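As referenced in the Value Iteration section above, here is a short Python sketch of value iteration (repeated Bellman updates) and one-step greedy policy extraction. It reuses the (s', probability, reward) encoding from the High-Low sketch earlier; the function names, the plain-dict value table, and the commented example call are illustrative assumptions rather than the course's reference implementation.

```python
def value_iteration(states, actions, transitions, gamma=0.9, terminals=(), tol=1e-6):
    """Repeat Bellman updates until the values stop changing:

    V_{k+1}(s) <- max_a sum_{s'} T(s,a,s') [ R(s,a,s') + gamma * V_k(s') ]

    `transitions(s, a)` must return (s', probability, reward) triples.
    """
    V = {s: 0.0 for s in states}          # bad initial guesses: V_0(s) = 0
    while True:
        new_V = dict(V)
        for s in states:
            if s in terminals:
                continue                  # terminal states keep value 0
            new_V[s] = max(
                sum(p * (r + gamma * V[s2]) for s2, p, r in transitions(s, a))
                for a in actions)
        delta = max(abs(new_V[s] - V[s]) for s in states)
        V = new_V
        if delta < tol:
            return V

def greedy_policy(V, states, actions, transitions, gamma=0.9, terminals=()):
    """One-step look-ahead: pi(s) = argmax_a sum_{s'} T(s,a,s') [R + gamma V(s')]."""
    return {s: max(actions,
                   key=lambda a: sum(p * (r + gamma * V[s2])
                                     for s2, p, r in transitions(s, a)))
            for s in states if s not in terminals}

# Example use with the High-Low `transitions` sketch above ('done' is terminal):
# V = value_iteration([2, 3, 4, 'done'], ['high', 'low'], transitions,
#                     gamma=0.9, terminals={'done'})
# pi = greedy_policy(V, [2, 3, 4, 'done'], ['high', 'low'], transitions,
#                    gamma=0.9, terminals={'done'})
```

Policy iteration, as compared on the slides, alternates a full policy-evaluation pass for the current fixed policy with exactly this kind of one-step greedy improvement, repeating until the policy stops changing.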

