UI CS 4420 - Artificial Intelligence


Contents: Logic and Uncertainty; Uncertainty; Reasoning under Uncertainty; Handling Uncertain Knowledge; Degrees of Belief; Probability Theory; Probability of Facts; Conditional/Unconditional Probability; Conditional Probabilities; The Axioms of Probability; Understanding Axiom 3; Random Variables; Probability Distribution; Joint Probability Distribution; An Alternative to JPD: The Bayes Rule; Applying the Bayes Rule; Conditional Independence; Bayesian Networks; Making Probabilistic Reasoning Feasible; Review of Some Concepts; Belief Networks; Conditional Probability Tables; A Belief Network with CPTs; The Semantics of Belief Networks; Belief Networks as Joints; BN and Conditional Independence; Constructing BN: General Procedure; Ordering the Variables; Ordering the Variables Right; Locally Structured Systems; BN and Locally Structured Systems; Inference in Belief Networks; Probabilistic Inference with BN; Types of Inference in Belief Networks

CS:4420 Artificial Intelligence
Uncertainty
Readings: Russell & Norvig (chapter on uncertainty).
Artificial Intelligence – p.1/43

Logic and Uncertainty

One problem with logical-agent approaches: agents almost never have access to the whole truth about their environments. Very often, even in simple worlds, there are important questions for which there is no boolean answer. In that case, an agent must reason under uncertainty. Uncertainty also arises because of an agent's incomplete or incorrect understanding of its environment.

Uncertainty

Let action L_t = "leave for airport t minutes before flight". Will L_t get me there on time?

Problems:
- partial observability (road state, other drivers' plans, etc.)
- noisy sensors (unreliable traffic reports)
- uncertainty in action outcomes (flat tire, etc.)
- immense complexity of modelling and predicting traffic

Hence a purely logical approach either
1. risks falsehood ("A_25 will get me there on time"), or
2. leads to conclusions that are too weak for decision making ("A_25 will get me there on time if there's no accident on the way, it doesn't rain, my tires remain intact, ...").

Reasoning under Uncertainty

A rational agent is one that makes rational decisions (in order to maximize its performance measure). A rational decision depends on:
- the relative importance of various goals,
- the likelihood that they will be achieved,
- the degree to which they will be achieved.

Handling Uncertain Knowledge

Reasons FOL-based approaches fail to cope with domains like medical diagnosis:
- Laziness: too much work to write complete axioms, or too hard to work with the enormous sentences that result.
- Theoretical ignorance: the available knowledge of the domain is incomplete.
- Practical ignorance: the theoretical knowledge of the domain is complete, but some evidential facts are missing.

Degrees of Belief

In several real-world domains the agent's knowledge can only provide a degree of belief in the relevant sentences. The agent cannot say whether a sentence is true, but only that it is true x% of the time. The main tool for handling degrees of belief is Probability Theory. The use of probability summarizes the uncertainty that stems from our laziness or ignorance about the domain.

Probability Theory

Probability Theory makes the same ontological commitments as FOL: every sentence ϕ is either true or false. The degree of belief that ϕ is true is a number P between 0 and 1.
- P(ϕ) = 1: ϕ is certainly true.
- P(ϕ) = 0: ϕ is certainly not true.
- P(ϕ) = 0.65: ϕ is true with a 65% chance.

Probability of Facts

Let A be a propositional variable, a symbol denoting a proposition that is either true or false. P(A) denotes the probability that A is true in the absence of any other information. Similarly:
- P(¬A) = the probability that A is false
- P(A ∧ B) = the probability that both A and B are true
- P(A ∨ B) = the probability that either A or B (or both) are true

Examples: P(¬Blonde), P(Blonde ∧ BlueEyed), P(Blonde ∨ BlueEyed).

Conditional/Unconditional Probability

P(A) is the unconditional (or prior) probability of fact A. An agent can use the unconditional probability of A to reason about A only in the absence of further information. If some further evidence B becomes available, the agent must use the conditional (or posterior) probability P(A|B): the probability of A given that all the agent knows is B.

Note: P(A) can be thought of as the conditional probability of A with respect to the empty evidence: P(A) = P(A| ).

Conditional Probabilities

The probability of a fact may change as the agent acquires more, or different, information:
1. P(Blonde): if we know nothing about a person, the probability that they are blonde equals a certain value, say 0.2.
2. P(Blonde|Swedish): if we know that a person is Swedish, the probability that they are blonde is much higher, say 0.9.
3. P(Blonde|Kenyan): if we know that the person is Kenyan, the probability that they are blonde is much lower, say 0.000003.
4. P(Blonde|Kenyan ∧ ¬EuroDescent): if we know that the person is Kenyan and not of European descent, the probability that they are blonde is basically 0.

The Axioms of Probability

Probability Theory is governed by the following axioms:
1. All probabilities are between 0 and 1: for all ϕ, 0 ≤ P(ϕ) ≤ 1.
2. Valid propositions have probability 1, and unsatisfiable propositions have probability 0: P(α ∨ ¬α) = 1, P(α ∧ ¬α) = 0.
3. The probability of a disjunction is given by: P(α ∨ β) = P(α) + P(β) − P(α ∧ β).

Understanding Axiom 3

P(A ∨ B) = P(A) + P(B) − P(A ∧ B)

The worlds where both A and B hold are counted once in P(A) and once in P(B), so they must be subtracted once to avoid double counting.

Conditional Probabilities

Conditional probabilities are defined in terms of unconditional ones: whenever P(B) > 0,

P(A|B) = P(A ∧ B) / P(B)

The same thing can be written as the product rule:

P(A ∧ B) = P(A|B) P(B) = P(B|A) P(A)

A and B are independent iff P(A|B) = P(A) (equivalently, P(B|A) = P(B), or P(A ∧ B) = P(A) P(B)).

Random Variable

A random variable is a variable
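Axiom 3 can be checked mechanically on a small model. The following sketch (Python) builds an invented ten-person world with the Blonde/BlueEyed examples from the slides; the individual traits and their counts are assumptions for illustration, not the slides' numbers:

```python
from fractions import Fraction

# An invented world of 10 equally likely people; each tuple is
# (hair colour, eye colour). All counts are assumptions for illustration.
people = [
    ("blonde", "blue"), ("blonde", "blue"), ("blonde", "brown"),
    ("dark", "blue"), ("dark", "blue"), ("dark", "brown"),
    ("dark", "brown"), ("dark", "brown"), ("dark", "brown"), ("dark", "brown"),
]

def p(event):
    """P(event) = (# worlds where the event holds) / (# worlds), exactly."""
    return Fraction(sum(1 for w in people if event(w)), len(people))

blonde = lambda w: w[0] == "blonde"
blue_eyed = lambda w: w[1] == "blue"

# Axiom 3: P(A or B) = P(A) + P(B) - P(A and B).
p_or = p(lambda w: blonde(w) or blue_eyed(w))
assert p_or == p(blonde) + p(blue_eyed) - p(lambda w: blonde(w) and blue_eyed(w))
```

Using exact Fraction arithmetic keeps the equality check free of floating-point round-off.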
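The definition of conditional probability and the product rule can be exercised the same way over an explicit joint table; the four probabilities below are invented for illustration and simply sum to 1:

```python
# Joint distribution over two propositions A, B, written as a table
# {(a, b): probability}. The numbers are invented for illustration.
joint = {
    (True, True): 0.2, (True, False): 0.1,
    (False, True): 0.4, (False, False): 0.3,
}

def p(pred):
    """Marginal probability of an event over the joint table."""
    return sum(pr for (a, b), pr in joint.items() if pred(a, b))

def cond(pred, given):
    """P(pred | given) = P(pred and given) / P(given), defined when P(given) > 0."""
    return p(lambda a, b: pred(a, b) and given(a, b)) / p(given)

A = lambda a, b: a
B = lambda a, b: b

# Product rule: P(A and B) = P(A|B) P(B) = P(B|A) P(A).
p_ab = p(lambda a, b: a and b)
assert abs(p_ab - cond(A, B) * p(B)) < 1e-9
assert abs(p_ab - cond(B, A) * p(A)) < 1e-9
```

Here P(A|B) = 0.2 / 0.6 = 1/3, which differs from P(A) = 0.3, so this particular table makes A and B dependent.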
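Finally, the independence criterion P(A ∧ B) = P(A) P(B) fits in a small helper; both joint tables below are made up, one independent by construction and one not:

```python
def independent(joint, tol=1e-9):
    """Test P(A and B) == P(A) * P(B), given a joint table over two propositions."""
    p_a = joint[(True, True)] + joint[(True, False)]
    p_b = joint[(True, True)] + joint[(False, True)]
    return abs(joint[(True, True)] - p_a * p_b) <= tol

# Two fair, unrelated coin flips: independent by construction (0.25 = 0.5 * 0.5).
coins = {(True, True): 0.25, (True, False): 0.25,
         (False, True): 0.25, (False, False): 0.25}
assert independent(coins)

# Correlated (invented) Blonde/BlueEyed numbers: P(A) P(B) = 0.2 * 0.3 = 0.06 != 0.15.
hair_eyes = {(True, True): 0.15, (True, False): 0.05,
             (False, True): 0.15, (False, False): 0.65}
assert not independent(hair_eyes)
```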

