CMU CS 10701 - Markov Decision Processes (MDPs)


Reading: Kaelbling et al., 1996 (see class website).

Markov Decision Processes (MDPs)
Machine Learning 10701/15781
Carlos Guestrin, Carnegie Mellon University
May 1st, 2006

Announcements
- Project poster session: Friday May 5th, 2-5pm, NSH Atrium. Please arrive a little early to set up.
- FCEs: please, please, please give us your feedback; it helps us improve the class. http://www.cmu.edu/fce

Discount Factors
People in economics and probabilistic decision making do this all the time. The discounted sum of future rewards, using discount factor $\gamma$, is the infinite sum

  (reward now) + $\gamma$ (reward in 1 time step) + $\gamma^2$ (reward in 2 time steps) + $\gamma^3$ (reward in 3 time steps) + ...

The Academic Life
[Figure: a Markov chain over five states, each with a reward: A = Assistant Prof (20), B = Assoc. Prof (60), T = Tenured Prof (400), S = On the Street (10), D = Dead (0); the edges carry transition probabilities 0.6, 0.2, 0.2, 0.7, and 0.3.] Assume discount factor $\gamma = 0.9$.

Define $V_A$ = expected discounted future rewards starting in state A, and likewise $V_B$, $V_T$, $V_S$, $V_D$. How do we compute $V_A$, $V_B$, $V_T$, $V_S$, $V_D$?

Computing the Future Rewards of an Academic
[Same figure, again with discount factor $\gamma = 0.9$.]

Joint Decision Space
Markov Decision Process (MDP) representation:
- State space: joint state $x$ of the entire system.
- Action space: joint action $a = \{a_1, \dots, a_n\}$ for all agents.
- Reward function: total reward $R(x, a)$; sometimes the reward can depend on the action.
- Transition model: dynamics of the entire system, $P(x' \mid x, a)$.

Policy
A policy assigns an action to each state: $\pi(x) = a$, the action taken (by all agents) at state $x$. For example:
- $\pi(x_0)$: both peasants get wood
- $\pi(x_1)$: one peasant builds barrack, other gets gold
- $\pi(x_2)$: peasants get gold, footmen attack

Value of Policy
The value $V_\pi(x)$ is the expected long-term reward starting from $x$ and following $\pi$:

  $V_\pi(x_0) = E_\pi[R(x_0) + \gamma R(x_1) + \gamma^2 R(x_2) + \gamma^3 R(x_3) + \gamma^4 R(x_4) + \cdots]$

Future rewards are discounted by $\gamma \in [0, 1)$. [Figure: a trajectory tree starting from $x_0$, branching over the stochastic transitions under $\pi$, collecting rewards $R(x_0), R(x_1), R(x_2), \dots$ along the way.]

Computing the Value of a Policy
The discounted value of a state is the value of starting from $x_0$ and continuing with policy $\pi$ from then on. This yields a recursion:

  $V_\pi(x) = R(x, \pi(x)) + \gamma \sum_{x'} P(x' \mid x, \pi(x)) \, V_\pi(x')$

Computing the Value of a Policy (1): The Matrix Inversion Approach
The recursion is linear in $V_\pi$, so we can solve it by simple matrix inversion:

  $V_\pi = (I - \gamma P_\pi)^{-1} R$

Computing the Value of a Policy (2): Iteratively
If you have 1,000,000 states, inverting a 1,000,000 x 1,000,000 matrix is hard. Instead, we can solve the system with a simple convergent iterative approach (a.k.a. dynamic programming); a code sketch of both approaches follows the Bellman-equation slides below.
- Start with some guess $V_0$.
- Iteratively set $V_{t+1} = R + \gamma P_\pi V_t$.
- Stop when $\|V_{t+1} - V_t\|_\infty \le \varepsilon$; this guarantees $\|V_\pi - V_{t+1}\|_\infty \le \varepsilon \gamma / (1 - \gamma)$.

But We Want to Learn a Policy
So far I've told you how good a given policy is; but how can we choose the best policy? [The policy figure repeats.] Suppose there were only one time step: the world is about to end, so select the action that maximizes the immediate reward.

Another Recursion
With two time steps we must address a tradeoff: a good reward now versus a better reward in the future.

Unrolling the Recursion
Choose actions that lead to the best value in the long run. The optimal policy achieves the optimal value $V^*$.

Bellman Equation
Evaluating a fixed policy $\pi$ gave the recursion above; computing the optimal value $V^*$ gives the Bellman equation:

  $V^*(x) = \max_a \left[ R(x, a) + \gamma \sum_{x'} P(x' \mid x, a) \, V^*(x') \right]$

Optimal Long-term Plan
From the optimal value function $V^*(x)$ we obtain the optimal policy $\pi^*(x)$. Define

  $Q^*(x, a) = R(x, a) + \gamma \sum_{x'} P(x' \mid x, a) \, V^*(x')$

Then the optimal policy is $\pi^*(x) = \arg\max_a Q^*(x, a)$.

Interesting Fact: Unique Value

  $V^*(x) = \max_a \left[ R(x, a) + \gamma \sum_{x'} P(x' \mid x, a) \, V^*(x') \right]$

Slightly surprising fact: there is only one $V^*$ that solves the Bellman equation; there may be many optimal policies that achieve $V^*$. Surprising fact: optimal policies are good everywhere; the same policy maximizes the value no matter which state you start in.
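As a concrete illustration of the two policy-evaluation approaches above, here is a minimal numpy sketch for the Academic Life chain (it has no actions, so evaluation is all there is). The wiring of the transition matrix is one plausible reading of the slide's figure; only the edge probabilities 0.6/0.2/0.2 and 0.7/0.3 survive in this preview, so treat the exact edges as an assumption.

```python
import numpy as np

# Policy evaluation for the "Academic Life" chain, two ways:
# (1) exact solve of V = R + gamma * P V, (2) iterative dynamic programming.
# The edge structure below is an assumed reading of the slide's figure.

states = ["A", "B", "T", "S", "D"]  # Assistant, Assoc., Tenured, Street, Dead
R = np.array([20.0, 60.0, 400.0, 10.0, 0.0])
gamma = 0.9

P = np.array([
    #  A    B    T    S    D
    [0.6, 0.2, 0.0, 0.2, 0.0],  # Assistant Prof
    [0.0, 0.6, 0.2, 0.2, 0.0],  # Assoc. Prof
    [0.0, 0.0, 0.7, 0.0, 0.3],  # Tenured Prof
    [0.0, 0.0, 0.0, 0.7, 0.3],  # On the Street
    [0.0, 0.0, 0.0, 0.0, 1.0],  # Dead (absorbing)
])

# (1) Matrix inversion approach: solve (I - gamma P) V = R.
V_exact = np.linalg.solve(np.eye(len(states)) - gamma * P, R)

# (2) Iterative approach: V_{t+1} = R + gamma P V_t,
# stopping when the sup-norm change drops below epsilon.
V = np.zeros(len(states))
eps = 1e-6
while True:
    V_next = R + gamma * P @ V
    if np.max(np.abs(V_next - V)) <= eps:
        V = V_next
        break
    V = V_next

for s, ve, vi in zip(states, V_exact, V):
    print(f"V_{s}: exact {ve:9.2f}   iterative {vi:9.2f}")
```

The solver call avoids forming the inverse explicitly, and the loop is exactly the $V_{t+1} = R + \gamma P V_t$ update with the sup-norm stopping rule from the slide, so the two printed columns agree to within the tolerance.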
Solving an MDP
Solve the Bellman equation; the optimal value $V^*(x)$ yields the optimal policy $\pi^*(x)$:

  $V^*(x) = \max_a \left[ R(x, a) + \gamma \sum_{x'} P(x' \mid x, a) \, V^*(x') \right]$

The Bellman equation is non-linear! Many algorithms solve the Bellman equations:
- Policy iteration [Howard '60, Bellman '57]
- Value iteration [Bellman '57]
- Linear programming [Manne '60]

Value Iteration (a.k.a. Dynamic Programming): The Simplest of All
- Start with some guess $V_0$.
- Iteratively set $V_{t+1}(x) = \max_a \left[ R(x, a) + \gamma \sum_{x'} P(x' \mid x, a) \, V_t(x') \right]$.
- Stop when $\|V_{t+1} - V_t\|_\infty \le \varepsilon$; this guarantees $\|V^* - V_{t+1}\|_\infty \le \varepsilon \gamma / (1 - \gamma)$.
(Generic code sketches of value iteration and of policy iteration follow at the end of this section.)

A Simple Example ($\gamma = 0.9$)
You run a startup company. In every state you must choose between Saving money (S) and Advertising (A). [Figure: a four-state MDP. States and rewards: Poor & Unknown (0), Poor & Famous (0), Rich & Unknown (10), Rich & Famous (10). Each edge is labeled with an action, S or A, and a transition probability of 1 or 1/2.]

Let's Compute $V_t(x)$ for Our Example
Applying $V_{t+1}(x) = \max_a \left[ R(x, a) + \gamma \sum_{x'} P(x' \mid x, a) \, V_t(x') \right]$:

  t   V_t(PU)   V_t(PF)   V_t(RU)   V_t(RF)
  1     0         0        10        10
  2     0         4.5      14.5      19
  3     2.03      6.53     25.08     18.55
  4     3.852    12.20     29.63     19.26
  5     7.22     15.07     32.00     20.40
  6    10.03     17.65     33.58     22.43

Policy Iteration: Another Approach for Computing $\pi^*$
- Start with some guess for a policy, $\pi_0$.
- Iteratively:
  - Evaluate the policy: $V_{\pi_t}(x) = R(x, \pi_t(x)) + \gamma \sum_{x'} P(x' \mid x, \pi_t(x)) \, V_{\pi_t}(x')$
  - Improve the policy: $\pi_{t+1}(x) = \arg\max_a \left[ R(x, a) + \gamma \sum_{x'} P(x' \mid x, a) \, V_{\pi_t}(x') \right]$
- Stop when the policy stops changing; usually happens in about 10 …
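The table above was computed from the transition structure in the slide's figure, which this preview does not preserve in full. Rather than guess the missing edges, the sketch below is a generic value-iteration routine over explicit arrays; the array encoding, the function name, and the toy two-state MDP at the end are assumptions for illustration, not the slide's startup example.

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, eps=1e-6):
    """V_{t+1}(x) = max_a [ R(x,a) + gamma * sum_x' P(x'|x,a) V_t(x') ].

    P has shape (num_actions, n, n) with P[a, x, x2] = P(x2 | x, a);
    R has shape (n, num_actions). Returns near-optimal values and a
    greedy policy (an action index for each state).
    """
    V = np.zeros(R.shape[0])
    while True:
        # Q[x, a] = R(x, a) + gamma * E[ V(next state) | x, a ]
        Q = R + gamma * np.einsum("axy,y->xa", P, V)
        V_next = Q.max(axis=1)
        if np.max(np.abs(V_next - V)) <= eps:  # ||V_{t+1} - V_t||_inf <= eps
            return V_next, Q.argmax(axis=1)
        V = V_next

# Hypothetical two-state, two-action MDP, just to exercise the routine
# (NOT the startup example; its exact transitions are not in this preview).
P = np.array([
    [[0.9, 0.1],   # action 0 from state 0
     [0.2, 0.8]],  # action 0 from state 1
    [[0.5, 0.5],   # action 1 from state 0
     [0.0, 1.0]],  # action 1 from state 1
])
R = np.array([[0.0, 1.0],   # R(state 0, action 0/1)
              [2.0, 0.0]])  # R(state 1, action 0/1)
V_star, pi_star = value_iteration(P, R)
print("V* =", V_star, " greedy policy =", pi_star)
```

The stopping rule is the slide's sup-norm test, so the returned values carry the same $\varepsilon \gamma / (1 - \gamma)$ guarantee, and the greedy argmax over the Q backup recovers an optimal policy as in the Optimal Long-term Plan slide.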
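Policy iteration admits an equally short sketch under the same assumed array encoding: evaluate the current policy exactly via the matrix-inversion approach from earlier, then improve it greedily. This is a minimal illustration, not the lecture's code.

```python
import numpy as np

def policy_iteration(P, R, gamma=0.9):
    """Alternate exact policy evaluation with greedy policy improvement.

    Same encoding as value_iteration above: P is (num_actions, n, n),
    R is (n, num_actions). Returns the value of the final policy and
    the policy itself.
    """
    n = R.shape[0]
    pi = np.zeros(n, dtype=int)              # arbitrary initial policy pi_0
    while True:
        # Evaluate: solve V = R_pi + gamma * P_pi V exactly;
        # row x of P_pi is P(. | x, pi(x)).
        P_pi = P[pi, np.arange(n), :]
        R_pi = R[np.arange(n), pi]
        V = np.linalg.solve(np.eye(n) - gamma * P_pi, R_pi)
        # Improve: act greedily with respect to V.
        Q = R + gamma * np.einsum("axy,y->xa", P, V)
        pi_next = Q.argmax(axis=1)
        if np.array_equal(pi_next, pi):      # stop when the policy stops changing
            return V, pi
        pi = pi_next
```

For large state spaces, the exact solve in the evaluation step would be replaced by the iterative evaluation from the Computing the Value of a Policy (2) slide.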

