CS 188: Artificial Intelligence, Fall 2011
Lecture 8: Utilities / MDPs
9/20/2011
Dan Klein, UC Berkeley
Many slides over the course adapted from either Stuart Russell or Andrew Moore

Maximum Expected Utility
- Why should we average utilities? Why not minimax?
- Principle of maximum expected utility: a rational agent should choose the action which maximizes its expected utility, given its knowledge
- Questions:
  - Where do utilities come from?
  - How do we know such utilities even exist?
  - Why are we taking expectations of utilities (not, e.g., minimax)?
  - What if our behavior can't be described by utilities?

Utilities: Uncertain Outcomes
[Diagram: going to the airport from home; outcome labels include "get single", "get double", "oops", "whew"]

Preferences
- An agent chooses among:
  - Prizes: A, B, etc.
  - Lotteries: situations with uncertain prizes
- Notation: a lottery is written L = [p, A; (1-p), B]; A ≻ B means the agent prefers A to B, and A ~ B means it is indifferent between them

Rational Preferences
- Preferences of a rational agent must obey constraints: the axioms of rationality (orderability, transitivity, continuity, substitutability, monotonicity, decomposability)
- Theorem: rational preferences imply behavior describable as maximization of expected utility

MEU Principle
- Theorem [Ramsey, 1931; von Neumann & Morgenstern, 1944]: given any preferences satisfying these constraints, there exists a real-valued function U such that
  - U(A) ≥ U(B) exactly when the agent prefers A to B (or is indifferent between them)
  - U([p1, S1; ... ; pn, Sn]) = Σi pi U(Si)
- Maximum expected utility (MEU) principle: choose the action that maximizes expected utility
- Note: an agent can be entirely rational (consistent with MEU) without ever representing or manipulating utilities and probabilities
  - E.g., a lookup table for perfect tic-tac-toe, or a reflex vacuum cleaner

Utility Scales
- Normalized utilities: u+ = 1.0, u- = 0.0
- Micromorts: a one-millionth chance of death; useful for paying to reduce product risks, etc.
- QALYs: quality-adjusted life years; useful for medical decisions involving substantial risk
- Note: behavior is invariant under positive linear transformation of the utility function
- With deterministic prizes only (no lottery choices), only ordinal utility can be determined, i.e., a total order on prizes

Human Utilities
- Utilities map states to real numbers. Which numbers?
- Standard approach to assessment of human utilities:
  - Compare a state A to a standard lottery Lp between
    - the "best possible prize" u+ with probability p
    - the "worst possible catastrophe" u- with probability 1-p
  - Adjust the lottery probability p until A ~ Lp
  - The resulting p is a utility in [0, 1]

Money
- Money does not behave as a utility function, but we can talk about the utility of having money (or being in debt)
- Given a lottery L = [p, $X; (1-p), $Y]:
  - The expected monetary value is EMV(L) = p*X + (1-p)*Y
  - The expected utility is U(L) = p*U($X) + (1-p)*U($Y)
  - Typically U(L) < U(EMV(L)): why?
  - In this sense, people are risk-averse
  - When deep in debt, we are risk-prone
- Utility curve: for what probability p am I indifferent between
  - some sure outcome x, and
  - a lottery [p, $M; (1-p), $0], with M large?

Example: Insurance
- Consider the lottery [0.5, $1000; 0.5, $0]
  - What is its expected monetary value? ($500)
  - What is its certainty equivalent?
    - The sure monetary amount acceptable in lieu of the lottery
    - About $400 for most people
  - The difference of $100 is the insurance premium
    - There is an insurance industry because people will pay to reduce their risk
    - If everyone were risk-neutral, no insurance would be needed!
  - (A small code sketch of these quantities follows below.)
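To make the EMV / certainty-equivalent distinction concrete, here is a minimal Python sketch. It assumes a lottery is simply a list of (probability, prize) pairs and uses a hypothetical concave utility curve U(x) = sqrt(x) purely to illustrate risk aversion; the lecture does not commit to a particular utility function, so the numbers below are illustrative rather than the slide's "$400 for most people" figure.

import math

# A lottery is a list of (probability, monetary prize) pairs,
# e.g. the insurance example from the slide: [(0.5, 1000), (0.5, 0)].

def emv(lottery):
    """Expected monetary value: sum of probability * prize."""
    return sum(p * x for p, x in lottery)

def expected_utility(lottery, U):
    """Expected utility: sum of probability * U(prize)."""
    return sum(p * U(x) for p, x in lottery)

# Hypothetical concave utility curve, chosen only to illustrate risk aversion;
# any concave U gives U(L) < U(EMV(L)).
U = math.sqrt

L = [(0.5, 1000), (0.5, 0)]
print(emv(L))                          # 500.0
print(expected_utility(L, U))          # ~15.81, i.e. the utility of roughly $250

# Certainty equivalent: the sure amount x with U(x) equal to the expected
# utility of L. For U = sqrt we can invert by squaring.
ce = expected_utility(L, U) ** 2
print(ce)                              # ~250 < 500, so this agent is risk-averse

With this concave curve the certainty equivalent comes out near $250, below the $500 EMV; any concave utility produces such a gap, and that gap is what a risk-averse person is willing to give up as an insurance premium.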
Example: Human Rationality?
- Famous example of Allais (1953):
  - A: [0.8, $4k; 0.2, $0]
  - B: [1.0, $3k; 0.0, $0]
  - C: [0.2, $4k; 0.8, $0]
  - D: [0.25, $3k; 0.75, $0]
- Most people prefer B > A and C > D
- But if U($0) = 0, then
  - B > A implies U($3k) > 0.8 U($4k)
  - C > D implies 0.2 U($4k) > 0.25 U($3k), i.e., 0.8 U($4k) > U($3k)
  - The two inequalities contradict each other, so no utility function is consistent with both preferences

Non-Deterministic Search
- How do you plan when your actions might fail?

Example: Grid World
- The agent lives in a grid
- Walls block the agent's path
- The agent's actions do not always go as planned:
  - 80% of the time, the action North takes the agent North (if there is no wall there)
  - 10% of the time, North takes the agent West; 10% of the time, East
  - If there is a wall in the direction the agent would have been taken, the agent stays put
- Small "living" reward each step
- Big rewards come at the end
- Goal: maximize the sum of rewards
[DEMO: Gridworld Intro]

Action Results
[Figure: two panels, "Deterministic Grid World" and "Stochastic Grid World", contrasting the single successor of an action with its several possible successors]

Markov Decision Processes
- An MDP is defined by:
  - A set of states s ∈ S
  - A set of actions a ∈ A
  - A transition function T(s, a, s')
    - The probability that a from s leads to s', i.e., P(s' | s, a)
    - Also called the model
  - A reward function R(s, a, s')
    - Sometimes just R(s) or R(s')
  - A start state (or distribution)
  - Maybe a terminal state
- MDPs are a family of non-deterministic search problems
  - One way to solve them is with expectimax search, but we'll have a new tool soon

What is Markov about MDPs?
- Andrey Markov (1856-1922)
- "Markov" generally means that given the present state, the future and the past are independent
- For Markov decision processes, "Markov" means the outcome of an action depends only on the current state and action:
  P(S_{t+1} = s' | S_t = s_t, A_t = a_t, S_{t-1} = s_{t-1}, A_{t-1} = a_{t-1}, ..., S_0 = s_0) = P(S_{t+1} = s' | S_t = s_t, A_t = a_t)

Solving MDPs
- In deterministic single-agent search problems, we want an optimal plan, or sequence of actions, from the start to a goal
- In an MDP, we want an optimal policy π*: S → A
  - A policy π gives an action for each state
  - An optimal policy maximizes expected utility if followed
  - It defines a reflex agent (if precomputed)
[Figure: the optimal policy when R(s, a, s') = -0.03 for all non-terminal states s] [Demo]

Example Optimal Policies
[Figures: optimal grid-world policies for living rewards R(s) = -2.0, -0.4, -0.03, and -0.01]

Example: High-Low
- Rules:
  - Three card types: 2, 3, 4
  - Infinite deck, twice as many 2's
  - Start with a 3 showing
  - After each card, you guess whether the next card will be "high" or "low"
  - A new card is flipped
  - If you're right, you win the points shown on the new card
  - Ties are no-ops
  - If you're wrong, the game ends
- How is this different from the "chance" games in the last lecture?
  - #1: you get rewards as you go
  - #2: you might play forever!
- You can patch expectimax to deal with #1, but not #2…

High-Low as an MDP
- States: 2, 3, 4, done
- Actions: High, Low
- Model T(s, a, s'):
  - P(s'=4 | 4, Low) = 1/4
  - P(s'=3 | 4, Low) = 1/4
  - P(s'=2 | 4, Low) = 1/2
  - P(s'=done | 4, Low) = 0
  - P(s'=4 | 4, High) = 1/4
  - P(s'=3 | 4, High) = 0
  - P(s'=2 | 4, High) = 0
  - P(s'=done | 4, High) = 3/4
  - …
- Rewards R(s, a, s'):
  - the number shown on s' if s ≠ s'
  - 0 otherwise
- Start: 3
- (A short code sketch of this model follows after the outcome tree below.)

High-Low: Outcome Tree
[Figure: outcome tree of High/Low guesses from the start state 3, with transition probabilities and rewards on the chance branches, e.g. T = 0.5, R = 2; T = 0.25, R = 3; T = 0, R = 4; T = 0.25, R = 0]
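To make the High-Low model concrete, here is a minimal Python sketch. It assumes the card distribution P(2) = 1/2, P(3) = P(4) = 1/4 implied by "twice as many 2's" and the reward convention stated on the slide; the encoding (the names CARD_P, T, R) is illustrative, not the course's actual code.

# Card distribution: an infinite deck with twice as many 2's as 3's or 4's.
CARD_P = {2: 0.5, 3: 0.25, 4: 0.25}
STATES = [2, 3, 4]                     # plus an absorbing "done" state
ACTIONS = ["High", "Low"]

def T(s, a, s_next):
    """Transition model P(s' | s, a): a correct guess (or a tie) reveals the
    new card and play continues; a wrong guess ends the game (s' = 'done')."""
    if s == "done":
        return 1.0 if s_next == "done" else 0.0
    def wrong(c):
        return (a == "High" and c < s) or (a == "Low" and c > s)
    if s_next == "done":
        return sum(p for c, p in CARD_P.items() if wrong(c))
    return CARD_P[s_next] if not wrong(s_next) else 0.0

def R(s, a, s_next):
    """Reward: the number shown on the new card; ties and game-over give 0."""
    if s_next == "done" or s_next == s:
        return 0.0
    return float(s_next)

# Reproduce some of the slide's transition probabilities:
print(T(4, "High", "done"), T(4, "High", 4), T(4, "Low", 2))   # 0.75 0.25 0.5

# One-step expected reward of each guess from the start state 3:
for a in ACTIONS:
    print(a, sum(T(3, a, sp) * R(3, a, sp) for sp in STATES + ["done"]))

This reproduces the probabilities listed on the slide (for example P(s'=done | 4, High) = 3/4) and shows that from the start state 3 the two guesses happen to have the same one-step expected reward; comparing them over longer horizons is exactly what the MDP machinery introduced in the following lectures is for.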