CS 188: Artificial Intelligence, Fall 2007
Lecture 9: Utilities (9/25/2007)
Dan Klein – UC Berkeley

Announcements
- Project 2 (due 10/1): SVN groups available, email us to request one
- Midterm 10/16 in class: one side of a page cheat sheet allowed (provided you write it yourself); tell us NOW about conflicts!

Preferences
- An agent chooses among:
  - Prizes: A, B, etc.
  - Lotteries: situations with uncertain prizes
- Notation: A > B means A is preferred to B; A ~ B means indifference between A and B

Rational Preferences
- We want some constraints on preferences before we call them rational
- For example: an agent with intransitive preferences can be induced to give away all its money
  - If B > C, then an agent with C would pay (say) 1 cent to get B
  - If A > B, then an agent with B would pay (say) 1 cent to get A
  - If C > A, then an agent with A would pay (say) 1 cent to get C
- Preferences of a rational agent must obey constraints; these constraints are the axioms of rationality
- Theorem: rational preferences imply behavior describable as maximization of expected utility

MEU Principle
- Theorem [Ramsey, 1931; von Neumann & Morgenstern, 1944]: given any preferences satisfying these constraints, there exists a real-valued function U such that U(A) > U(B) exactly when A > B, and the utility of a lottery is the probability-weighted average (i.e., the expected utility) of the utilities of its outcomes
- Maximum expected utility (MEU) principle: choose the action that maximizes expected utility
- Note: an agent can be entirely rational (consistent with MEU) without ever representing or manipulating utilities and probabilities
  - E.g., a lookup table for perfect tic-tac-toe, a reflex vacuum cleaner

Human Utilities
- Utilities map states to real numbers. Which numbers?
- Standard approach to assessment of human utilities:
  - Compare a state A to a standard lottery Lp between the "best possible prize" u+ with probability p and the "worst possible catastrophe" u- with probability 1-p
  - Adjust the lottery probability p until A ~ Lp
  - The resulting p is a utility in [0, 1]

Utility Scales
- Normalized utilities: u+ = 1.0, u- = 0.0
- Micromorts: one-millionth chance of death; useful for paying to reduce product risks, etc.
- QALYs: quality-adjusted life years; useful for medical decisions involving substantial risk
- Note: behavior is invariant under positive linear transformation of the utility function
- With deterministic prizes only (no lottery choices), only ordinal utility can be determined, i.e., a total order on prizes

Example: Insurance
- Consider the lottery [0.5, $1000; 0.5, $0]
  - What is its expected monetary value? ($500)
  - What is its certainty equivalent? The monetary value acceptable in lieu of the lottery: about $400 for most people
  - The difference of $100 is the insurance premium
- There's an insurance industry because people will pay to reduce their risk
- If everyone were risk-prone, no insurance would be needed!

Money
- Money does not behave as a utility function
- Given a lottery L, define its expected monetary value EMV(L); usually U(L) < U(EMV(L)), i.e., people are risk-averse
- Utility curve: for what probability p am I indifferent between a sure prize x and a lottery [p, $M; (1-p), $0] for large M?
- [Figure: typical empirical utility-of-money data, extrapolated with risk-prone behavior]
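To make the certainty-equivalent idea concrete, here is a minimal Python sketch (not from the lecture) that computes the expected monetary value and the certainty equivalent of the insurance lottery under a hypothetical concave, risk-averse utility function. With a square-root utility the certainty equivalent comes out near $250 rather than the roughly $400 quoted above, but the qualitative point is the same: concavity makes the certainty equivalent smaller than the EMV, and the gap is the premium people will pay.

```python
import math

def expected_value(lottery):
    """Expected monetary value of a lottery given as [(prob, prize), ...]."""
    return sum(p * x for p, x in lottery)

def expected_utility(lottery, u):
    """Expected utility of a lottery under a utility function u."""
    return sum(p * u(x) for p, x in lottery)

def certainty_equivalent(lottery, u, lo=0.0, hi=1e6, tol=1e-6):
    """Sure amount c with u(c) = EU(lottery), found by bisection (u assumed increasing)."""
    target = expected_utility(lottery, u)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if u(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical risk-averse utility curve (concave); not the curve used in lecture.
u = math.sqrt

lottery = [(0.5, 1000.0), (0.5, 0.0)]
emv = expected_value(lottery)             # 500.0
ce = certainty_equivalent(lottery, u)     # about 250 under sqrt utility
print("EMV:", emv,
      "certainty equivalent:", round(ce, 2),
      "insurance premium:", round(emv - ce, 2))
```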
Example: Human Rationality?
- Famous example of Allais (1953):
  - A: [0.8, $4k; 0.2, $0]    B: [1.0, $3k; 0.0, $0]
  - C: [0.2, $4k; 0.8, $0]    D: [0.25, $3k; 0.75, $0]
- Most people prefer B > A and C > D
- But if U($0) = 0, then
  - B > A ⇒ U($3k) > 0.8 U($4k)
  - C > D ⇒ 0.8 U($4k) > U($3k)
  - So these preferences are inconsistent with any utility function

Reinforcement Learning [DEMOS]
- Basic idea:
  - Receive feedback in the form of rewards
  - Agent's utility is defined by the reward function
  - Must learn to act so as to maximize expected rewards
  - Change the rewards, change the learned behavior
- Examples:
  - Playing a game: reward at the end for winning / losing
  - Vacuuming a house: reward for each piece of dirt picked up
  - Automated taxi: reward for each passenger delivered

Markov Decision Processes
- An MDP is defined by:
  - A set of states s ∈ S
  - A set of actions a ∈ A
  - A transition function T(s, a, s'): the probability that a from s leads to s', i.e., P(s' | s, a); also called the model
  - A reward function R(s, a, s') (sometimes just R(s) or R(s'))
  - A start state (or distribution)
  - Maybe a terminal state
- MDPs are a family of non-deterministic search problems
- Reinforcement learning: MDPs where we don't know the transition or reward functions

Solving MDPs
- In a deterministic single-agent search problem, we want an optimal plan, or sequence of actions, from the start to a goal
- In an MDP, we want an optimal policy π(s)
  - A policy gives an action for each state
  - An optimal policy maximizes expected utility if followed
  - Defines a reflex agent
- [Figure: optimal gridworld policy when R(s, a, s') = -0.04 for all non-terminal states s]

Example Optimal Policies
- [Figure: four gridworld panels showing the optimal policy for R(s) = -2.0, R(s) = -0.4, R(s) = -0.03, and R(s) = -0.01]

Example: High-Low
- Three card types: 2, 3, 4; infinite deck, twice as many 2's
- Start with 3 showing
- After each card, you say "high" or "low", then a new card is flipped
  - If you're right, you win the points shown on the new card
  - Ties are no-ops
  - If you're wrong, the game ends
- Differences from expectimax:
  - #1: you get rewards as you go
  - #2: you might play forever!

High-Low
- States: 2, 3, 4, done
- Actions: High, Low
- Model T(s, a, s'), e.g.:
  - P(s' = done | 4, High) = 3/4
  - P(s' = 2 | 4, High) = 0
  - P(s' = 3 | 4, High) = 0
  - P(s' = 4 | 4, High) = 1/4
  - P(s' = done | 4, Low) = 0
  - P(s' = 2 | 4, Low) = 1/2
  - P(s' = 3 | 4, Low) = 1/4
  - P(s' = 4 | 4, Low) = 1/4
  - …
- Rewards R(s, a, s'): the number shown on s' if s ≠ s', 0 otherwise
- Start state: 3
- Note: we could choose actions with search. How?

Example: High-Low
- [Figure: expectimax-like search tree rooted at state 3, with High/Low action nodes, a chance node for each q-state, and branches labeled with transition probabilities T and rewards R for each resulting card]

MDP Search Trees
- Each MDP state gives an expectimax-like search tree:
  - s is a state
  - (s, a) is a q-state
  - (s, a, s') is called a transition, with probability T(s, a, s') = P(s' | s, a) and reward R(s, a, s')

Utilities of Sequences
- In order to formalize the optimality of a policy, we need to understand utilities of sequences of rewards
- Typically we consider stationary preferences: [r, r0, r1, r2, …] > [r, r0', r1', r2', …] ⇔ [r0, r1, r2, …] > [r0', r1', r2', …]
- Theorem: there are only two ways to define stationary utilities
  - Additive utility: U([r0, r1, r2, …]) = r0 + r1 + r2 + …
  - Discounted utility: U([r0, r1, r2, …]) = r0 + γ r1 + γ² r2 + …
- Assuming that reward depends only on the state for these slides!

Infinite Utilities?!
- Problem: infinite sequences with infinite rewards
- Solutions:
  - Finite horizon: terminate after a fixed T steps; gives a nonstationary policy (π depends on the time left)
  - Absorbing state(s): guarantee that for every policy, a terminal state will eventually be reached
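As a small illustration of the additive and discounted definitions above (a sketch, not lecture code): with rewards bounded by some R_max and a discount 0 ≤ γ < 1, the discounted sum is bounded by R_max / (1 - γ), which is why discounting keeps utilities finite even for infinite reward sequences.

```python
def additive_utility(rewards):
    """U([r0, r1, ...]) = r0 + r1 + ...  (may diverge for infinite sequences)."""
    return sum(rewards)

def discounted_utility(rewards, gamma):
    """U([r0, r1, ...]) = r0 + gamma*r1 + gamma^2*r2 + ..."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

# Hypothetical short reward sequence, just for illustration.
rewards = [4, 3, 2, 4, 3]
gamma = 0.9
print(additive_utility(rewards))                        # 16
print(round(discounted_utility(rewards, gamma), 3))     # smaller than the additive sum

# With rewards bounded by R_max and 0 <= gamma < 1, any (even infinite)
# discounted sum is at most R_max / (1 - gamma).
r_max = 4
print("bound:", r_max / (1 - gamma))                    # 40.0
```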
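Finally, a small sketch (not course code) that encodes the state-4 transition entries from the High-Low model above and computes the expectimax-style one-step value of a q-state. The reward for entering "done" is taken to be 0, which is an assumption; the slide only specifies rewards in terms of the number shown on the next card.

```python
# Transition model T[(s, a)] = list of (prob, next_state), copied from the
# state-4 entries listed in the High-Low slide; 'done' is the terminal state.
T = {
    (4, 'High'): [(0.75, 'done'), (0.25, 4)],
    (4, 'Low'):  [(0.5, 2), (0.25, 3), (0.25, 4)],
}

def reward(s, a, s2):
    """R(s, a, s') from the slide: the number shown on s' if s' != s, else 0.
    Entering 'done' is treated as reward 0 (an assumption)."""
    if s2 == 'done' or s2 == s:
        return 0
    return s2

def q_value(s, a, value, gamma=1.0):
    """Expectimax-style one-step backup: expected reward plus (discounted)
    value of the successor, averaged over the transition model."""
    return sum(p * (reward(s, a, s2) + gamma * value.get(s2, 0.0))
               for p, s2 in T[(s, a)])

# With all future values taken as 0, this is just the expected immediate reward.
zero_values = {}
print(q_value(4, 'High', zero_values))   # 0.0: from 4, "High" can only tie or lose
print(q_value(4, 'Low', zero_values))    # 1.75 = 0.5*2 + 0.25*3 + 0.25*0
```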