UMD CMSC 421 - Learning: Reinforcement Learning

Learning: Reinforcement Learning
Russell and Norvig: ch. 21
CMSC421 – Fall 2005

Contents: Project; Example: Agent_with_Personality; Example: Robot Navigation; Learning Agent; Schedule; Reinforcement Learning; Formalization; Reactive Agent Algorithm; Policy (Reactive/Closed-Loop Strategy); Approaches; Value Function; Exploration; Q-Learning; Selecting an Action; Exploration Policy; RL Summary

Project
- Teams: 2-3. You should have emailed me your team members!
- Two components:
  - define the Environment
  - define the Learning Agent

Example: Agent_with_Personality
- State: mood (happy, sad, mad, bored) and sensor (smile, cry, glare, snore)
- Action: smile, hit, tell-joke, tickle
- Define: S x A x S' x P with probabilities and output string
- Define: S x {-10, 10}

Example (cont.)
- State: happy (s0), sad (s1), mad (s2), bored (s3); sensor: smile (p0), cry (p1), glare (p2), snore (p3)
- Action: smile (a0), hit (a1), tell-joke (a2), tickle (a3)
- Define S x A x S' x P with probabilities and output string, e.g.:

    0 0 0 0 0.8  "It makes me happy when you smile"
    0 0 2 2 0.2  "Argh! Quit smiling at me!!!"
    0 1 0 0 0.1  "Oh, I'm so happy I don't care if you hit me"
    0 1 2 2 0.6  "HEY!!! Quit hitting me"
    0 1 1 1 0.3  "Boo hoo, don't be hitting me"

- Define S x {-10, 10}, e.g.:

    0  10
    1 -10
    2  -5
    3   0
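As a concrete illustration of the format above, here is one way the Agent_with_Personality tables might be written down in Python. This is only a sketch: reading each row as (state, action, next state, percept, probability) plus an output string is an interpretation of the slide's numbering, and the dictionary layout, the step function, and the reward-on-arrival convention are assumptions rather than a required project format.

    import random

    # Hypothetical encoding of the Agent_with_Personality example.
    # States:   happy (0), sad (1), mad (2), bored (3)
    # Percepts: smile (0), cry (1), glare (2), snore (3)
    # Actions:  smile (0), hit (1), tell-joke (2), tickle (3)

    # Transition model: (state, action) -> list of (next_state, percept, probability, output string)
    transitions = {
        (0, 0): [(0, 0, 0.8, "It makes me happy when you smile"),
                 (2, 2, 0.2, "Argh! Quit smiling at me!!!")],
        (0, 1): [(0, 0, 0.1, "Oh, I'm so happy I don't care if you hit me"),
                 (2, 2, 0.6, "HEY!!! Quit hitting me"),
                 (1, 1, 0.3, "Boo hoo, don't be hitting me")],
        # ... remaining (state, action) pairs would be filled in the same way
    }

    # Reward model: state -> reward in [-10, 10]
    rewards = {0: 10, 1: -10, 2: -5, 3: 0}

    def step(state, action):
        """Sample one outcome from the transition table; the reward is assumed
        to be the reward of the state the agent lands in."""
        outcomes = transitions[(state, action)]
        weights = [prob for (_, _, prob, _) in outcomes]
        next_state, percept, _, message = random.choices(outcomes, weights=weights)[0]
        return next_state, percept, rewards[next_state], message
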
Example: Robot Navigation
- State: location
- Action: forward, back, left, right
- State -> Reward: define the rewards of the states in your grid
- State x Action -> State: defined by the movements

Learning Agent
- Calls the Environment Program to get a training set.
- Outputs a Q function: Q(S x A).
- We will evaluate the output of your learning program by executing it and computing the reward it receives.

Schedule
- Monday, Dec. 5: electronically submit your environment
- Monday, Dec. 12: submit your learning agent
- Wednesday, Dec. 13: submit your writeup

Reinforcement Learning
- Supervised learning is the simplest and best-studied type of learning.
- Another type of learning task is learning behaviors when we don't have a teacher to tell us how.
- The agent has a task to perform; it takes some actions in the world; at some later point it gets feedback telling it how well it did on performing the task.
- The agent performs the same task over and over again.
- It gets carrots for good behavior and sticks for bad behavior.
- This is called reinforcement learning because the agent gets positive reinforcement for tasks done well and negative reinforcement for tasks done poorly.

Reinforcement Learning
- The problem of getting an agent to act in the world so as to maximize its rewards.
- Consider teaching a dog a new trick: you cannot tell it what to do, but you can reward or punish it if it does the right or wrong thing. It has to figure out what it did that made it get the reward or punishment; this is known as the credit assignment problem.
- We can use a similar method to train computers to do many tasks, such as playing backgammon or chess, scheduling jobs, and controlling robot limbs.

Reinforcement Learning
- for blackjack
- for robot motion
- for controllers

Formalization
- We have a state space S.
- We have a set of actions a1, ..., ak.
- We want to learn which action to take at every state in the space.
- At the end of a trial, we get some reward, positive or negative.
- We want the agent to learn how to behave in the environment: a mapping from states to actions.
- Example: ALVINN. The state is the configuration of the car; learn a steering action for each state.

Reactive Agent Algorithm
- Assumes an accessible or observable state.

    Repeat:
      s ← sensed state
      if s is terminal then exit
      a ← choose action (given s)
      perform a

Policy (Reactive/Closed-Loop Strategy)
- A policy π is a complete mapping from states to actions.
- [Figure: 3x4 grid world with terminal states +1 and -1]

Reactive Agent Algorithm (with a policy)

    Repeat:
      s ← sensed state
      if s is terminal then exit
      a ← π(s)
      perform a

Approaches
- Learn the policy directly: a function mapping from states to actions.
- Learn utility values for states: the value function.

Value Function
- An agent knows what state it is in, and it has a number of actions it can perform in each state.
- Initially it doesn't know the value of any of the states.
- If the outcome of performing an action at a state is deterministic, then the agent can update the utility value U() of a state whenever it makes a transition from one state to another (by taking what it believes to be the best possible action and thus maximizing):

    U(oldstate) = reward + U(newstate)

- The agent learns the utility values of states as it works its way through the state space.

Exploration
- The agent may occasionally choose to explore suboptimal moves in the hope of finding better outcomes.
- Only by visiting all the states frequently enough can we guarantee learning the true values of all the states.
- A discount factor is often introduced to prevent utility values from diverging and to promote the use of shorter (more efficient) sequences of actions to attain rewards.
- The update equation using a discount factor gamma is:

    U(oldstate) = reward + gamma * U(newstate)

- Normally gamma is set between 0 and 1.

Q-Learning
- Augments value iteration by maintaining a utility value Q(s,a) for every action at every state.
- The utility of a state, U(s) (sometimes written Q(s)), is simply the maximum Q value over all the possible actions at that state.

Q-Learning

    foreach state s
      foreach action a
        Q(s,a) = 0
    s = current state
    do forever:
      a = select an action
      do action a
      r = reward from doing a
      t = resulting state from doing a
      Q(s,a) = (1 - alpha) * Q(s,a) + alpha * (r + gamma * Q(t))
      s = t

(Here Q(t) is the maximum Q value over the actions available at state t, as defined above.)
- Notice that a learning coefficient, alpha, has been introduced into the update equation. Normally alpha is set to a small positive constant less than 1.
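The pseudocode above maps almost line for line onto a small tabular implementation. The sketch below is illustrative only: it assumes a hypothetical episodic environment object env with a list env.actions, a reset() method returning the start state, and a step(a) method returning (next_state, reward, done); none of these names come from the lecture, and the "select an action" step is filled in with a simple epsilon-greedy rule.

    import random
    from collections import defaultdict

    def q_learning(env, episodes=1000, alpha=0.1, gamma=0.9, epsilon=0.1):
        """Tabular Q-learning using the slide's update rule:
        Q(s,a) = (1 - alpha) * Q(s,a) + alpha * (r + gamma * max Q(t, .))."""
        Q = defaultdict(float)              # foreach state s, foreach action a: Q(s,a) = 0

        for _ in range(episodes):
            s = env.reset()                 # s = current state
            done = False
            while not done:                 # "do forever", cut off at the end of each episode
                # select an action: mostly greedy, occasionally random
                if random.random() < epsilon:
                    a = random.choice(env.actions)
                else:
                    a = max(env.actions, key=lambda act: Q[(s, act)])
                t, r, done = env.step(a)    # do action a; observe reward r and resulting state t
                q_t = max(Q[(t, act)] for act in env.actions)   # Q(t) = max over actions at t
                Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * (r + gamma * q_t)
                s = t
        return Q
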
Selecting an Action
- Simply choose the action with the highest expected utility?
- Problem: an action has two effects:
  - it gains reward on the current sequence
  - it yields information that is used in learning for future sequences
- There is a trade-off between immediate good and long-term well-being:
  - stuck in a rut
  - jumping off a cliff just because you've never done it before...

Exploration Policy
- Wacky approach: act randomly in the hope of eventually exploring the entire environment.
- Greedy approach: act to maximize utility using the current estimate.
- Need to find some balance: act more wacky when the agent has little idea of the environment, and more greedy when the model is close to correct (one common way to implement this balance is sketched after the summary below).
- Example: one-armed bandits...

RL Summary
- An active area of research, both in OR and AI.
- There are several more sophisticated algorithms that we have not covered here.
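The balance described on the Exploration Policy slide (act more randomly while the agent knows little, more greedily as its estimates improve) is commonly implemented as epsilon-greedy selection with a decaying epsilon. The sketch below is a minimal illustration; the function names, decay schedule, and parameter values are assumptions, not something specified in the lecture.

    import random

    def epsilon_greedy(Q, state, actions, epsilon):
        """With probability epsilon act randomly (the wacky approach),
        otherwise act greedily with respect to the current Q estimates."""
        if random.random() < epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: Q[(state, a)])

    def decayed_epsilon(episode, start=1.0, end=0.05, decay=0.995):
        """Start nearly random and become steadily greedier as learning proceeds."""
        return max(end, start * (decay ** episode))

In the Q-learning sketch above, the action-selection step would then become a = epsilon_greedy(Q, s, env.actions, decayed_epsilon(episode)), where episode is the index of the current episode.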

