LEHIGH CSE 335 - Axioms


Contents: Axioms; Axioms (II); Recap from Previous Class; Utility of A Decision; Two Famous Quotes; Utility Function; Principle of Maximum Expected Utility (MEU); MEU Doesn't Solve All AI Problems; Lotteries; Preferences; Axioms of the Utility Theory (I, II, III); Example (I, II, III); Human Judgment and Utility (I, II, III); Homework

Axioms
• Let W be the statements known to be true in a domain.
• An axiom is a rule presumed to be true.
• An axiomatic set is a collection of axioms.
• Given an axiomatic set A, the domain theory of A, domTh(A), is the collection of all things that can be derived from A.

Axioms (II)
• A problem frequently studied by mathematicians: given W, can we construct a (finite) axiomatic set A such that domTh(A) = W?
• Potential difficulties:
  Inconsistency: domTh(A) ⊄ W (something derivable from A is not true in the domain)
  Incompleteness: W ⊄ domTh(A) (something true in the domain cannot be derived from A)
• Theorem (Gödel): any axiomatic set for arithmetic is either inconsistent or incomplete.

Recap from Previous Class
• First-order logic is not sufficient for many problems.
• We often have only a degree of belief (a probability).
• Decision theory = probability theory + utility theory (utility theory is today's topic).
• Already covered: probability distributions, expected value, conditional probability, the axioms of probability, Bruno de Finetti's theorem.

Utility of A Decision
CSE 395/495
Resources: Russell and Norvig's book.

Two Famous Quotes
"… so they go in a strange paradox, decided only to be undecided, resolved to be irresolute, adamant for drift, solid for fluidity, all powerful to be impotent" (Churchill, 1937)
"To judge what one must do to obtain a good or avoid an evil, it is necessary to consider not only the good and the evil itself, but also the probability that it happens or does not happen" (Arnauld, 1692)

Utility Function
U: States → [0, ∞)
• The utility function captures an agent's preferences.
• Given an action A, let Result_1(A), Result_2(A), … denote the possible outcomes of A.
• Let Do(A) indicate that action A is executed, and let E be the available evidence.
• Then the expected utility of A given E is
  EU(A|E) = Σ_i P(Result_i(A) | E, Do(A)) · U(Result_i(A))

Principle of Maximum Expected Utility (MEU)
An agent should choose an action that maximizes expected utility. Suppose that taking actions updates the probabilities of states and actions. Which action should be taken?
1. Calculate the probabilities of the current state.
2. Calculate the probabilities of the actions.
3. Select the action with the highest expected utility.
MEU says: choose A for state S such that, for any other action A′, if E is the known evidence in S, then EU(A|E) ≥ EU(A′|E).

MEU Doesn't Solve All AI Problems
Difficulties with EU(A|E) = Σ_i P(Result_i(A) | E, Do(A)) · U(Result_i(A)):
• The state and U(Result_i(A)) might not be known completely.
• Computing P(Result_i(A) | E, Do(A)) requires a causal model, and computing it is NP-complete.
However, MEU is adequate if the utility function reflects the performance measure by which one's behavior is judged. Example? Grade vs. knowledge.
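To make the EU computation and the MEU choice rule concrete, here is a minimal sketch in Python; the action names, probabilities, and utilities are invented for illustration and do not come from the slides.

# Minimal sketch of the MEU principle (illustrative numbers only).

def expected_utility(outcomes):
    # EU(A|E) = sum_i P(Result_i(A) | E, Do(A)) * U(Result_i(A))
    return sum(p * u for p, u in outcomes)

# Each hypothetical action maps to a list of
# (probability of outcome given E and Do(A), utility of outcome) pairs.
actions = {
    "act_safe":  [(1.0, 8.0)],                # a single certain outcome
    "act_risky": [(0.7, 12.0), (0.3, 1.0)],   # two uncertain outcomes
}

for name, outcomes in actions.items():
    print(f"EU({name}) = {expected_utility(outcomes):.2f}")

# MEU: choose the action with the highest expected utility.
best = max(actions, key=lambda a: expected_utility(actions[a]))
print("MEU choice:", best)  # act_risky, since 8.7 > 8.0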
Lotteries
• We define the semantics of preferences in order to define the utility.
• Preferences are defined over scenarios called lotteries.
• A lottery L with two possible outcomes, A with probability p and B with probability (1 − p), is written L = [p, A; (1 − p), B].
• The outcome of a lottery can be a state or another lottery.

Preferences
Let A and B be states and/or lotteries. Then:
• A ≻ B denotes that A is preferred to B.
• A ~ B denotes indifference between A and B.
• A ≿ B denotes that either A ≻ B or A ~ B.

Axioms of the Utility Theory
• Orderability: A ≻ B or B ≻ A or A ~ B.
• Transitivity: if A ≻ B and B ≻ C, then A ≻ C.
• Continuity: if A ≻ B ≻ C, then there exists p such that [p, A; (1 − p), C] ~ B.
• Substitutability: if A ~ B, then for any C and any p, [p, A; (1 − p), C] ~ [p, B; (1 − p), C].

Axioms of the Utility Theory (II)
• Monotonicity: if A ≻ B, then p ≥ q iff [p, A; (1 − p), B] ≿ [q, A; (1 − q), B].
• Decomposability ("no fun in gambling"): [p, A; (1 − p), [q, B; (1 − q), C]] ~ [p, A; (1 − p)q, B; (1 − p)(1 − q), C].

Axioms of the Utility Theory (III)
• Utility principle: there exists U: States → [0, ∞) such that A ≻ B iff U(A) > U(B), and A ~ B iff U(A) = U(B).
• Maximum Expected Utility principle: MEU([p_1, S_1; p_2, S_2; …; p_n, S_n]) = Σ_i p_i · U(S_i).

Example
Suppose that you are on a TV show and you have already earned $1,000,000 so far. Now the presenter offers you a gamble: he will flip a coin; if it comes up heads you earn $3,000,000, but if it comes up tails you lose the $1,000,000. What do you decide?
First shot: U(winning $X) = X.
MEU([0.5, $0; 0.5, $3,000,000]) = $1,500,000
This utility is called the expected monetary value.

Example (II)
If we use the expected monetary value of the lottery, should we take the bet?
Yes, because MEU([0.5, $0; 0.5, $3,000,000]) = $1,500,000 > MEU([1, $1,000,000; 0, $3,000,000]) = $1,000,000.
But is this really what you would do? Not me!

Example (III)
Second shot. Let
S = "my current wealth"
S′ = "my current wealth" + $1,000,000
S′′ = "my current wealth" + $3,000,000
MEU(Accept) = 0.5 · U(S) + 0.5 · U(S′′)
MEU(Decline) = U(S′)
If U(S) = 5, U(S′) = 8, U(S′′) = 10, would you accept the bet?
No! MEU(Accept) = 7.5 < MEU(Decline) = 8.

Human Judgment and Utility
• Decision theory is a normative theory: it describes how agents should act.
• Experimental evidence suggests that people violate the axioms of utility.
Tversky and Kahneman (1982) and Allais (1953): an experiment with people, where a choice was given between A and B, and then between C and D:
A: 80% chance of $4000
B: 100% chance of $3000
C: 20% chance of $4000
D: 25% chance of $3000

Human Judgment and Utility (II)
• The majority choose B over A, and C over D.
If U($0) = 0:
MEU([0.8, $4000; 0.2, $0]) = 0.8 · U($4000)
MEU([1, $3000; 0, $4000]) = U($3000)
Choosing B over A implies 0.8 · U($4000) < U($3000).
MEU([0.2, $4000; 0.8, $0]) = 0.2 · U($4000)
MEU([0.25, $3000; 0.75, $0]) = 0.25 · U($3000)
Choosing C over D implies 0.2 · U($4000) > 0.25 · U($3000).
Thus there can be no utility function consistent with these choices (the contradiction is verified in the sketch at the end of this preview).

Human Judgment and Utility (III)
• The point is that it is very hard to model an automated agent that behaves like a human (back to the Turing test).
• However, utility theory does give a formal way to model decisions, and as such it is used to support users' decisions.
• The same can be said for similarity in CBR.

Homework
You saw the discussion on the utility relative to the …
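To close, here is a minimal sketch in Python of the two calculations above. The lottery evaluator and the TV-show utilities (5, 8, 10) follow the slides; the helper names and the brute-force search over candidate utility values are my own illustration.

# Sketch: evaluating lotteries [p1, S1; ...; pn, Sn] under a utility U.

def meu(lottery, U):
    # MEU([p1, S1; ...; pn, Sn]) = sum_i p_i * U(S_i)
    return sum(p * U(s) for p, s in lottery)

# TV-show bet with the slides' utilities for wealth, wealth + $1M, wealth + $3M.
U_wealth = {0: 5, 1_000_000: 8, 3_000_000: 10}.get
accept = [(0.5, 0), (0.5, 3_000_000)]
decline = [(1.0, 1_000_000)]
print(meu(accept, U_wealth), meu(decline, U_wealth))  # 7.5 < 8, so decline

# Allais pattern: B over A requires U($3000) > 0.8 * U($4000), while
# C over D requires 0.2 * U($4000) > 0.25 * U($3000),
# i.e. U($3000) < 0.8 * U($4000) -- the two constraints are contradictory.
for u4000 in range(1, 101):          # brute-force candidate utility values
    for u3000 in range(1, 101):
        prefers_B = u3000 > 0.8 * u4000
        prefers_C = 0.2 * u4000 > 0.25 * u3000
        assert not (prefers_B and prefers_C)  # never both at once
print("no utility with U($0) = 0 reproduces the majority choices")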

