A Tutorial on the Partially Observable Markov Decision Process and Its Applications
Lawrence Carin
June 7, 2006

Outline
- Overview of Markov decision processes (MDPs)
- Introduction to the partially observable Markov decision process (POMDP) model
- Some applications of POMDPs
- Conclusions

Overview of MDPs

Markov decision processes
The MDP is defined by the tuple <S, A, T, R>:
- S is a finite set of states of the world.
- A is a finite set of actions.
- T: S x A -> Pi(S) is the state-transition function, giving the probability that an action changes the world state from one state to another, T(s, a, s').
- R: S x A -> R is the reward the agent receives in a given world state after performing an action, R(s, a).

Two properties of the MDP:
- The action-dependent state transition is Markovian.
- The state is fully observable after taking action a.

(Figure: illustration of an MDP. The agent applies action a to the world; the world state changes according to T(s, a, s'); the new state s is observed directly by the agent.)

Objective of MDPs: find the optimal policy, mapping state s to action a, that maximizes the value function V(s):

V(s) = \max_{a \in A} [ R(s, a) + \sum_{s' \in S} T(s, a, s') V(s') ]

Introduction to POMDPs

The POMDP is defined by the tuple <S, A, T, R, \Omega, O>:
- S, A, T, and R are defined the same as in MDPs.
- \Omega is a finite set of observations the agent can experience of its world.
- O: S x A -> Pi(\Omega) is the observation function, giving the probability of making a certain observation after performing a particular action and landing in state s', O(s', a, o).

Differences between MDPs and POMDPs:
- The state is hidden after taking action a.
- The hidden-state information is inferred from the action- and state-dependent observation function O(s', a, o).
- Hence there is uncertainty about the state s in POMDPs.

A new concept in POMDPs: the belief state b(s),

b_t(s) = \Pr(s_t = s \mid o_1, a_1, o_2, a_2, \ldots, o_{t-1}, a_{t-1}, o_t).

The belief state b(s) evolves according to Bayes' rule:

b'(s') = \frac{O(s', a, o) \sum_{s \in S} T(s, a, s') b(s)}{\Pr(o \mid a, b)}    (1)

(Figure: a one-step belief tree. From belief b with n control intervals remaining, taking action a and observing o1 or o2 yields the updated beliefs b' = T(b, a, o1) or b' = T(b, a, o2), each with n-1 control intervals remaining.)

(Figure: illustration of a POMDP. The world evolves according to T(s, a, s') and emits observation o according to O(s', a, o); inside the agent, a state estimator (SE) updates the belief b using Eq. (1), and the policy maps the belief to the next action a.)

Objective of POMDPs: find the optimal policy, mapping belief point b to action a, that maximizes the value function V(b):

V_n(b) = \max_{a \in A} \sum_{s \in S} b(s) \sum_{s' \in S} \Pr(s' \mid s, a) \sum_{o} \Pr(o \mid s', a) [ R_{ss'}^{a} + V_{n-1}(b_a^o) ]
       = \max_{a \in A} [ \sum_{s \in S} b(s) R(s, a) + \sum_{o} \sum_{s \in S} b(s) \sum_{s' \in S} \Pr(s' \mid s, a) \Pr(o \mid s', a) V_{n-1}(b_a^o) ]    (2)

where b_a^o is the belief that results from b via Eq. (1) after taking action a and receiving observation o; the first term inside the maximization is the expected immediate reward.

Piecewise linearity and convexity of the optimal value function for a finite horizon in POMDPs [2]:

V_n(b) = \max_k \sum_{s \in S} \alpha_n^k(s) b(s) = \max_k \alpha_n^k \cdot b    (3)

(Figure: for a two-state POMDP, the optimal value function V(b) plotted against Pr(s_1) in [0, 1] is the upper surface of a set of alpha-vectors p1, ..., p5, and is therefore piecewise linear and convex; each linear segment is associated with one action.)

Substituting (3) and (1) into (2):

V_n(b) = \max_{a \in A} [ \sum_{s \in S} b(s) R(s, a) + \sum_{o} \Pr(o \mid b, a) V_{n-1}(b_a^o) ]
       = \max_{a \in A} [ \sum_{s \in S} b(s) R(s, a) + \sum_{o} \max_k \sum_{s \in S} b(s) \sum_{s' \in S} T(s, a, s') O(s', a, o) \alpha_{n-1}^k(s') ]

Maximizing to obtain the index

l(b, a, o) = \arg\max_k \sum_{s \in S} b(s) \sum_{s' \in S} T(s, a, s') O(s', a, o) \alpha_{n-1}^k(s'),

we obtain

V_n(b) = \max_{a \in A} [ \sum_{s \in S} b(s) R(s, a) + \sum_{o} \sum_{s \in S} b(s) \sum_{s' \in S} T(s, a, s') O(s', a, o) \alpha_{n-1}^{l(b,a,o)}(s') ]    (4)

which yields both the alpha-vector of belief point b and the optimal value of belief point b.

Approaches to solving the POMDP problem:
- Exact algorithms find all alpha-vectors for the whole belief space; this is exact but intractable for large problems.
- Approximate algorithms find alpha-vectors for a subset of the belief space; this is fast and can deal with large problems.

Point-based value iteration (PBVI) [3] focuses on a finite set of belief points (e.g., b_0, b_1, b_3, b_4, b_5 in the slide's figure) and maintains one alpha-vector for each point; a sketch of the point-based backup follows below.
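The following is a minimal sketch of the belief update, Eq. (1), and one point-based backup in the spirit of PBVI [3] and Eq. (4). The tutorial gives no code, so everything here is an assumption for illustration: the function names, the discount factor gamma, and the array layout T[s, a, s'], O[s', a, o], R[s, a].

```python
import numpy as np

def belief_update(b, a, o, T, O):
    """Bayes-rule update, Eq. (1): b'(s') is proportional to O(s',a,o) * sum_s T(s,a,s') b(s).

    Assumed layout (not from the tutorial): T[s, a, s'], O[s', a, o].
    """
    unnorm = O[:, a, o] * (T[:, a, :].T @ b)   # numerator of Eq. (1), a vector over s'
    total = unnorm.sum()                       # Pr(o | a, b), the normalizer
    return unnorm / total if total > 0 else b  # guard: observation has zero probability

def pbvi_backup(B, alphas, T, O, R, gamma=1.0):
    """One point-based backup (Eq. (4)): one new alpha-vector per belief point in B."""
    nS, nA = R.shape
    nO = O.shape[2]
    # g[a][o][k](s) = sum_{s'} T(s,a,s') O(s',a,o) alpha_k(s')  -- the projected vectors.
    g = [[[T[:, a, :] @ (O[:, a, o] * alpha) for alpha in alphas]
          for o in range(nO)]
         for a in range(nA)]
    new_alphas = []
    for b in B:
        best, best_val = None, -np.inf
        for a in range(nA):
            # For each o, keep the maximizing vector -- the index l(b, a, o) in Eq. (4).
            alpha_a = R[:, a] + gamma * sum(max(g[a][o], key=lambda v: b @ v)
                                            for o in range(nO))
            if b @ alpha_a > best_val:
                best, best_val = alpha_a, b @ alpha_a
        new_alphas.append(best)
    return new_alphas
```

Starting from alphas = [np.zeros(nS)] and applying pbvi_backup n times approximates the finite-horizon value V_n at the chosen belief points, matching the recursion in Eq. (4).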
Region-based value iteration (RBVI) maintains an alpha-vector for each convex region over which the optimal value function is linear, and simultaneously determines the alpha-vectors for all relevant convex regions based on all available belief points.

RBVI (cont'd): the piecewise-linear value function, V(b) = \alpha_k \cdot b for b \in B_k, can be reformulated by introducing hidden variables z(b) = k denoting b \in B_k. The belief space is partitioned using hyper-ellipsoids. The joint distribution of V(b) and b can then be written as a mixture model whose parameters are estimated by expectation-maximization (EM), alternating an E-step and an M-step.

Some applications of POMDPs
- Application of POMDPs to robot navigation in a minefield.
- Application of POMDPs to feature selection.
- Application of POMDPs to sensor scheduling.

Some considerations in applying POMDPs to new problems:
- How to define the state.
- How to obtain the transition and observation matrices.
- How to set the reward.
A toy specification illustrating these choices is sketched after the references.

References
[1] L. P. Kaelbling, M. L. Littman, and A. R. Cassandra, "Planning and Acting in Partially Observable Stochastic Domains," Artificial Intelligence, vol. 101, 1998.
[2] R. D. Smallwood and E. J. Sondik, "The Optimal Control of Partially Observable Markov Processes over a Finite Horizon," Operations Research, 21:1071-1088, 1973.
[3] J. Pineau, G. Gordon, and S. Thrun, "Point-Based Value Iteration: An Anytime Algorithm for POMDPs," International Joint Conference on Artificial Intelligence (IJCAI), Acapulco, Mexico, Aug. 2003.
[4] D. P. Bertsekas, Dynamic Programming and Optimal Control, Vols. 1 and 2, Athena Scientific, Belmont, Massachusetts, 2001.
[5] R. Bellman, Dynamic Programming, Princeton University Press, 1957.
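To ground the modeling considerations above, here is a hypothetical toy specification loosely inspired by the minefield-navigation application the tutorial mentions. Every dimension, probability, and reward below is invented for illustration and does not come from the tutorial.

```python
import numpy as np

# Hypothetical "minefield corridor": a robot moves along 4 cells, its moves
# succeed with probability 0.9 (otherwise it stays), and a noisy sensor reports
# its cell correctly 80% of the time. All numbers are illustrative assumptions.
nS, nA, nO = 4, 2, 4   # states: cells; actions: {0: left, 1: right}; observations: sensed cell

# Transition matrix T[s, a, s'] = Pr(s' | s, a).
T = np.zeros((nS, nA, nS))
for s in range(nS):
    T[s, 0, max(s - 1, 0)] += 0.9       # move left, clipped at the corridor end
    T[s, 0, s] += 0.1                   # slip: stay in place
    T[s, 1, min(s + 1, nS - 1)] += 0.9  # move right, clipped at the corridor end
    T[s, 1, s] += 0.1

# Observation matrix O[s', a, o] = Pr(o | s', a): correct cell with prob. 0.8,
# remaining mass spread uniformly over the other cells.
O = np.full((nS, nA, nO), 0.2 / (nO - 1))
for sp in range(nS):
    O[sp, :, sp] = 0.8

# Reward R[s, a]: being in the mined cell 2 is penalized, the goal cell 3 rewarded.
R = np.zeros((nS, nA))
R[2, :] = -10.0
R[3, :] = +5.0

b = np.full(nS, 1.0 / nS)  # initial belief: maximally uncertain about the cell
```

These arrays follow the same assumed layout as the earlier sketch, so the belief can be propagated with belief_update(b, a, o, T, O) and alpha-vectors computed with pbvi_backup.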

