CMU CS 10701 - Lecture

Machine Learning 10-701/15-781, Spring 2008
Reinforcement Learning 1
Eric Xing
Lecture 27, April 28, 2008
Reading: Chap. 13, T.M. book

[Figure: a 4x3 grid world with a start cell and terminal cells labeled +1 and -1]

Outline
- Intro to reinforcement learning
- MDP: Markov decision problem
- Dynamic programming:
  - Value iteration
  - Policy iteration

What is Learning?
- Learning takes place as a result of interaction between an agent and the world; the idea behind learning is that
- percepts received by an agent should be used not only for understanding/interpreting/prediction, as in the machine learning tasks we have addressed so far, but also for acting, and furthermore for improving the agent's ability to behave optimally in the future to achieve its goal.

Types of Learning
- Supervised Learning
  - A situation in which sample (input, output) pairs of the function to be learned can be perceived or are given
  - You can think of it as if there is a kind teacher
  - Training data: (X, Y) (features, label)
  - Predict Y, minimizing some loss.
  - Regression, classification.
- Unsupervised Learning
  - Training data: X (features only)
  - Find "similar" points in high-dimensional X-space.
  - Clustering.

Example of Supervised Learning
- Predict the price of a stock 6 months from now, based on economic data. (Regression)
- Predict whether a patient, hospitalized due to a heart attack, will have a second heart attack, based on demographic, diet, and clinical measurements for that patient. (Logistic regression)
- Identify the numbers in a handwritten ZIP code from a digitized image (pixels). (Classification)

Example of Unsupervised Learning
- From DNA micro-array data, determine which genes are most "similar" in terms of their expression profiles. (Clustering)

Types of Learning (Cont'd)
- Reinforcement Learning
  - The agent acts on its environment and receives some evaluation of its action (reinforcement), but is not told which action is the correct one to achieve its goal.
  - Training data: (S, A, R) (state, action, reward)
  - Develop an optimal policy (sequence of decision rules) for the learner so as to maximize its long-term reward.
  - Robotics, board-game-playing programs.

RL is learning from interaction
[Figure: an agent interacting with its environment]

Examples of Reinforcement Learning
- How should a robot behave so as to optimize its "performance"? (Robotics)
- How to automate the motion of a helicopter? (Control theory)
- How to make a good chess-playing program? (Artificial intelligence)

Robot in a room
- What is the strategy to achieve maximum reward?
- What if the actions were deterministic?

History of Reinforcement Learning
- Roots in the psychology of animal learning (Thorndike, 1911).
- Another independent thread was the problem of optimal control and its solution using dynamic programming (Bellman, 1957).
- Idea of temporal-difference learning (an on-line method), e.g., for playing board games (Samuel, 1959).
- A major breakthrough was the discovery of Q-learning (Watkins, 1989).

What is special about RL?
- RL is learning how to map states to actions so as to maximize a numerical reward over time.
- Unlike other forms of learning, it is a multistage decision-making process (often Markovian).
- An RL agent must learn by trial and error. (Not entirely supervised, but interactive.)
- Actions may affect not only the immediate reward but also subsequent rewards (delayed effect).

Elements of RL
- A policy
  - A map from state space to action space.
  - May be stochastic.
- A reward function
  - Maps each state (or state-action pair) to a real number, called the reward.
- A value function
  - The value of a state (or state-action pair) is the total expected reward, starting from that state (or state-action pair).
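
The elements above — a policy that maps states to actions, a reward signal, and the (S, A, R) training data — can be made concrete with a tiny interaction loop. The sketch below is only an illustration: the two-state environment, the random policy, and the episode length of five steps are invented here and are not part of the lecture.

    import random

    # Toy two-state environment, invented purely for illustration:
    # states 0 and 1, actions "stay"/"move", reward 1.0 for landing in state 1.
    def step(state, action):
        next_state = 1 - state if action == "move" else state
        reward = 1.0 if next_state == 1 else 0.0
        return next_state, reward

    # A (stochastic) policy: a map from states to actions.
    # This one is deliberately naive and ignores the state.
    def random_policy(state):
        return random.choice(["stay", "move"])

    # One episode of agent-environment interaction, recording (S, A, R) triples.
    state, trajectory, total_reward = 0, [], 0.0
    for t in range(5):
        action = random_policy(state)
        next_state, reward = step(state, action)
        trajectory.append((state, action, reward))
        total_reward += reward
        state = next_state

    print(trajectory, total_reward)

The printed trajectory is exactly the kind of state-action-reward stream the slides contrast with the (X, Y) pairs of supervised learning: the agent is never told which action was correct, only how much reward it received.
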
Policy
[Figure: a policy for the grid world]

Reward for each step: -2
Reward for each step: -0.1
Reward for each step: -0.04
[Figures: the grid-world policies for per-step rewards of -2, -0.1, and -0.04]

The Precise Goal
- To find a policy that maximizes the value function.
- Transitions and rewards are usually not available.
- There are different approaches to achieving this goal in various situations.
- Value iteration and policy iteration are two of the more classic approaches to this problem; essentially both are dynamic programming.
- Q-learning is a more recent approach to this problem; essentially it is a temporal-difference method.

Markov Decision Processes
A Markov decision process is a tuple (S, A, {P_sa}, γ, R) where:
- S is a set of states
- A is a set of actions
- P_sa are the state-transition probabilities: for each state s and action a, P_sa is a distribution over the next state
- γ is the discount factor
- R is the reward function, written R(s, a) or, when the reward depends only on the state, R(s)

The dynamics of an MDP
- We start in some state s0, and get to choose some action a0 ∈ A.
- As a result of our choice, the state of the MDP randomly transitions to some successor state s1, drawn according to s1 ~ P_{s0 a0}.
- Then we get to pick another action a1.
- ...

The dynamics of an MDP (Cont'd)
- Upon visiting the sequence of states s0, s1, ... with actions a0, a1, ..., our total payoff is given by
  R(s0, a0) + γ R(s1, a1) + γ^2 R(s2, a2) + ...
- Or, when we write rewards as a function of the states only, this becomes
  R(s0) + γ R(s1) + γ^2 R(s2) + ...
- For most of our development we will use the simpler state rewards R(s), though the generalization to state-action rewards R(s, a) offers no special difficulties.
- Our goal in reinforcement learning is to choose actions over time so as to maximize the expected value of the total payoff:
  E[ R(s0) + γ R(s1) + γ^2 R(s2) + ... ]

Policy
- A policy is any function π mapping from the states to the actions.
- We say that we are executing some policy π if, whenever we are in state s, we take action a = π(s).
- We also define the value function for a policy π according to
  V^π(s) = E[ R(s0) + γ R(s1) + γ^2 R(s2) + ... | s0 = s, π ]
- V^π(s) is simply the expected sum of discounted rewards upon starting in state s and taking actions according to π.

Value Function
- Given a fixed policy π, its value function V^π satisfies the Bellman equations:
  V^π(s) = R(s) + γ Σ_{s'} P_{s,π(s)}(s') V^π(s')
  (immediate reward plus the expected sum of future discounted rewards)
- Bellman's equations can be used to efficiently solve for V^π (see later).

The Grid
- Transition model M: 0.8 in the direction you want to go, 0.2 perpendicular to it (0.1 left, 0.1 right).
- Policy: a mapping from states to actions.
- [Figures: an optimal policy for the stochastic environment, and the utilities of the states in the 4x3 grid:
    row 3: 0.812  0.868  0.912   +1
    row 2: 0.762   --    0.660   -1
    row 1: 0.705  0.655  0.611  0.388]
- Environment:
  - Observable (accessible): the percept identifies the state.
  - Partially observable.
- Markov property: transition probabilities depend on the state only, not on the path to the state.
  - Markov decision problem (MDP).
  - Partially observable MDP (POMDP): the percept does not have enough information to identify the transition probabilities.
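
The grid-world slide pairs a stochastic transition model (0.8 in the intended direction, 0.1 to each perpendicular side) with the utilities of the states. As a sketch of the value-iteration approach named in the outline, the following Python fragment runs Bellman backups on a 4x3 grid of this kind. The per-step reward of -0.04 is taken from the earlier slide; the discount factor of 1.0, the blocked cell at column 2, row 2, the terminal cells at (4, 3) and (4, 2), and the stopping threshold are assumptions made here for illustration.

    GAMMA = 1.0
    STEP_REWARD = -0.04
    COLS, ROWS = 4, 3
    BLOCKED = {(2, 2)}                       # assumed obstacle cell
    TERMINALS = {(4, 3): 1.0, (4, 2): -1.0}  # assumed terminal cells and rewards
    ACTIONS = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}
    # Perpendicular "slip" directions for each intended action (0.1 probability each).
    PERP = {"up": ("left", "right"), "down": ("left", "right"),
            "left": ("up", "down"), "right": ("up", "down")}

    states = [(c, r) for c in range(1, COLS + 1) for r in range(1, ROWS + 1)
              if (c, r) not in BLOCKED]

    def move(state, action):
        # Deterministic effect of one move; bumping into an edge or the obstacle stays put.
        c, r = state
        dc, dr = ACTIONS[action]
        nxt = (c + dc, r + dr)
        if nxt in BLOCKED or not (1 <= nxt[0] <= COLS and 1 <= nxt[1] <= ROWS):
            return state
        return nxt

    def transitions(state, action):
        # 0.8 intended direction, 0.1 to each perpendicular side (from the slide).
        return [(0.8, move(state, action)),
                (0.1, move(state, PERP[action][0])),
                (0.1, move(state, PERP[action][1]))]

    # Value iteration: repeat the Bellman backup
    #   U(s) <- R(s) + gamma * max_a sum_s' P(s'|s,a) U(s')
    # until the largest change falls below a small threshold.
    U = {s: 0.0 for s in states}
    while True:
        delta, new_U = 0.0, {}
        for s in states:
            if s in TERMINALS:
                new_U[s] = TERMINALS[s]
            else:
                best = max(sum(p * U[s2] for p, s2 in transitions(s, a))
                           for a in ACTIONS)
                new_U[s] = STEP_REWARD + GAMMA * best
            delta = max(delta, abs(new_U[s] - U[s]))
        U = new_U
        if delta < 1e-4:
            break

    for r in range(ROWS, 0, -1):   # print the utility grid, top row first
        print(" ".join("{:6.3f}".format(U[(c, r)]) if (c, r) in U else "  wall"
                       for c in range(1, COLS + 1)))

With these assumptions the computed utilities come out close to the numbers shown in the slide's figure.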

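The Value Function slide notes that Bellman's equations can be used to solve for V^π efficiently ("see later"). For a fixed policy π there is one equation per state and the system is linear, so one standard approach is to solve it directly as (I - γ P_π) V = R. The sketch below does this for a tiny three-state chain; the transition matrix, rewards, and discount factor are invented for illustration, and NumPy is assumed to be available.

    import numpy as np

    # For a fixed policy pi, the Bellman equations
    #   V^pi(s) = R(s) + gamma * sum_s' P_{s, pi(s)}(s') V^pi(s')
    # are linear in the unknowns V^pi(s): (I - gamma * P_pi) V = R.

    gamma = 0.9
    R = np.array([0.0, 0.0, 1.0])            # state rewards R(s) (illustrative)
    P_pi = np.array([[0.5, 0.5, 0.0],        # P_pi[s, s'] = P_{s, pi(s)}(s')
                     [0.0, 0.5, 0.5],        # under the fixed policy pi
                     [0.0, 0.0, 1.0]])

    V = np.linalg.solve(np.eye(3) - gamma * P_pi, R)
    print(V)  # expected sum of discounted rewards from each state under pi

Solving the linear system is exact and cheap for small state spaces; for larger ones the iterative dynamic-programming methods from the outline (value iteration and policy iteration) are used instead.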
