Knowledge-Based Systems

Announcements
• Review sessions
• CS 4701 – focus on AI

Schedule
• Search
• Machine learning
• Knowledge-based systems
• Discovery

History of AI, 1943–1969: The Beginnings
• 1943 McCulloch and Pitts show that networks of neurons can compute and learn any function
• 1950 Shannon and Turing write chess programs
• 1951 Minsky and Edmonds build the first neural network computer (SNARC)
• 1952 Samuel's checkers player
• 1956 Dartmouth Conference – Newell and Simon bring a reasoning program, "The Logic Theorist," which proved theorems
• 1958 McCarthy designs LISP, helps invent time-sharing, and creates the Advice Taker (a domain-independent reasoning system)
• 1960s Microworlds – solving limited problems: SAINT (1963), ANALOGY (1968), STUDENT (1967); blocksworld invented
• 1962 Perceptron Convergence Theorem is proved

1952: Samuel's checkers player on TV
Arthur Samuel (1901–1990)

Example ANALOGY Problem

Blocksworld

History of AI, 1966–1974: Recognizing Lack of Knowledge
• Herb Simon (1957): a computer chess program will be world chess champion within 10 years
• Intractable problems, lack of computing power (Lighthill Report, 1973)
• Machine translation
• Limitations in knowledge representation (Minsky and Papert, 1969)
• Knowledge-poor programs

Knowledge Representation
• Human intelligence relies on a lot of background knowledge – the more you know, the easier many tasks become – "knowledge is power"
  – E.g., the SEND + MORE = MONEY puzzle
• Natural language understanding
  – Time flies like an arrow.
  – Fruit flies like a banana.
  – John saw the diamond through the window and coveted it.
  – John threw the brick through the window and broke it.
  – The spirit is willing but the flesh is weak. (English)
  – The vodka is good but the meat is rotten. (Russian)
• Or: plan a trip to L.A.

Domain Knowledge
• How did we encode domain knowledge so far?
  – For search problems?
  – For learning problems?

Knowledge-Based Systems/Agents
• Key components:
  – Knowledge base: a set of sentences expressed in some knowledge representation language
  – Inference/reasoning mechanisms to query what is known and to derive new information or make decisions
• Natural candidate:
  – a logical language (propositional/first-order)
  – combined with a logical inference mechanism
• How close to human thought?
  – In any case, it appears a reasonable strategy for machines.

Example: Autonomous Car
State: k-tuple (PersonInFrontOfCar, Policeman, Policecar, Slippery, YellowLight, RedLight)
Actions: Brake, Accelerate, TurnLeft, etc.
Knowledge base describing when the car should brake:
  (PersonInFrontOfCar → Brake)
  (((YellowLight ∧ Policeman) ∧ ¬Slippery) → ¬Brake)
  (Policecar → Policeman)
  (Snow → Slippery)
  (Slippery → ¬Dry)
  (RedLight → Brake)
Does (Policecar, YellowLight, Snow) imply Brake? A = Yes, B = No

What the computer "sees":
State: k-tuple (x1, x2, x3, x4, x5, x6, x7)
Actions: x8, x9, x10, etc.
Knowledge base describing when x8:
  (x1 → x8)
  (((x5 ∧ x2) ∧ ¬x4) → ¬x8)
  (x3 → x2)
  (x7 → x4)
  (x4 → ¬x11)
  (x6 → x8)
Does (x3, x5, x7) imply x8? A = Yes, B = No

Logic as a Knowledge Representation
• Components of a formal logic:
  – variables and operators, syntax
  – semantics (link to the world, truth in worlds)
  – logical reasoning: entailment α ⊨ β
    • if, in every model in which α is true, β is also true
  – inference algorithm derives
    • KB ⊢ α, i.e., α is derived from KB
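The braking query above can be answered mechanically by enumerating every truth assignment and checking which ones satisfy the KB. The following is a minimal sketch (the symbol names mirror the slide; the helper functions `implies`, `kb_holds`, and `entails` are my own, not from the lecture):

```python
from itertools import product

# All propositional symbols mentioned by the rules and facts.
SYMBOLS = ["PersonInFrontOfCar", "Policeman", "Policecar", "Slippery",
           "YellowLight", "RedLight", "Snow", "Dry", "Brake"]

def implies(p, q):
    """Material implication: p -> q is false only when p is true and q is false."""
    return (not p) or q

def kb_holds(m):
    """True iff model m satisfies all six rules plus the observed facts."""
    return (implies(m["PersonInFrontOfCar"], m["Brake"]) and
            implies(m["YellowLight"] and m["Policeman"] and not m["Slippery"],
                    not m["Brake"]) and
            implies(m["Policecar"], m["Policeman"]) and
            implies(m["Snow"], m["Slippery"]) and
            implies(m["Slippery"], not m["Dry"]) and
            implies(m["RedLight"], m["Brake"]) and
            # observed facts: Policecar, YellowLight, Snow
            m["Policecar"] and m["YellowLight"] and m["Snow"])

def entails(symbol):
    """KB |= symbol iff symbol is true in every model that satisfies the KB."""
    return all(m[symbol]
               for vals in product([True, False], repeat=len(SYMBOLS))
               for m in [dict(zip(SYMBOLS, vals))]
               if kb_holds(m))

print(entails("Slippery"))  # True: Snow -> Slippery forces it
print(entails("Brake"))     # False: some models of the KB have Brake = False
```

With 9 symbols this is only 2^9 = 512 models, so brute-force enumeration is instant; the same idea scales poorly, which is what motivates smarter inference procedures.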
(x + y = 4) entails that: A) x=2, y=2  B) 2x+2y=8  C) Neither  D) Both

Models
• A model is an instantiation of all variables
• All models = all possible assignments
• If sentence α is true in model m, then m is a model of α
• M(α) refers to the set of all models that satisfy α
• α ⊨ β iff M(α) ⊆ M(β), i.e., M(α) is contained in M(β)

Possible models for the presence of pits in [1,2], [2,2], [3,1]
Dashed = M(α1), where α1 = ¬P1,2 (no pit in [1,2])
Solid = M(KB) with observation ¬B1,1 ∧ B2,1 (no breeze in [1,1] and breeze in [2,1])

Possible models for the presence of pits in [1,2], [2,2], [3,1]
Dashed = M(α2), where α2 = ¬P2,2 (no pit in [2,2])
Solid = M(KB) with observation ¬B1,1 ∧ B2,1 (no breeze in [1,1] and breeze in [2,1])

Soundness and Completeness
Soundness: an inference algorithm that derives only entailed sentences is called sound or truth-preserving.
  KB ⊢ α implies KB ⊨ α
Completeness: an inference algorithm is complete if it can derive any sentence that is entailed.
  KB ⊨ α implies KB ⊢ α
Why are soundness and completeness important? They allow the computer to ignore semantics and "just push symbols"!

Entailment vs. Implication (A. E. Duncan-Jones, 1935)
• Entailment (KB ⊨ α) and implication (KB ⊢ α) can be treated equivalently if the inference process is sound and complete.

Propositional Logic: Syntax
• Propositional symbols: A, B, C, …
• Connectives: ¬, ∧, ∨, ⇒, ⇔
• Sentences
  – Atomic sentence: True, False, propositional symbol
  – Complex sentence:
    • (¬Sentence)
    • (Sentence ∧ Sentence)
    • (Sentence ∨ Sentence)
    • (Sentence ⇒ Sentence)
    • (Sentence ⇔ Sentence)
• A KB is a conjunction (ANDs) of many sentences

Example: Autonomous Car
Propositional symbols: PersonInFrontOfCar, Policeman, …, Brake, Accelerate, TurnLeft
Rules (initial KB):
  (PersonInFrontOfCar → Brake)
  (((YellowLight ∧ Policeman) ∧ ¬Slippery) → ¬Brake)
  (Policecar → Policeman)
  (Snow → Slippery)
  (Slippery → ¬Dry)
  (RedLight → Brake)
From sensors (added to KB): YellowLight, ¬RedLight, Snow, ¬Dry, Policecar, ¬PersonInFrontOfCar

Propositional Logic: Semantics
• Model (i.e.
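The set-based definition of entailment (α ⊨ β iff M(α) ⊆ M(β)) can be checked directly by computing both model sets. A small sketch with two hypothetical sentences of my own choosing:

```python
from itertools import product

def models_of(sentence, symbols):
    """M(sentence): the set of truth assignments under which the sentence is true."""
    return {vals for vals in product([True, False], repeat=len(symbols))
            if sentence(dict(zip(symbols, vals)))}

syms = ["A", "B"]
alpha = lambda m: m["A"] and m["B"]   # alpha = A AND B
beta  = lambda m: m["A"] or m["B"]    # beta  = A OR B

# alpha |= beta  iff  M(alpha) is a subset of M(beta)
print(models_of(alpha, syms) <= models_of(beta, syms))  # True:  A∧B entails A∨B
print(models_of(beta, syms) <= models_of(alpha, syms))  # False: A∨B does not entail A∧B
```

Here M(α) = {(True, True)} is strictly contained in M(β), which has three models, so the entailment holds in one direction only. This mirrors the (x + y = 4) ⊨ (2x + 2y = 8) example: every solution of the first equation also solves the second.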
possible world):
  – Assignment of truth values to symbols
  – Example: m = {P = True, Q = False}
  – Note: often called an "assignment" instead of a "model"; "model" is then used for an assignment that evaluates to true
• Validity
  – A sentence is valid if it is true in every model
• Satisfiability
  – A sentence is satisfiable if it is true in at least one model
• Entailment
  – α ⊨ β if and only if, in every model in which α is true, β is also true

Models: Stay at Home
  Sick    StayAtHome
  true    true
  false   false
  false   true
Does Sick entail StayAtHome? A = Yes, B = No

Puzzling aspects of
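Validity, satisfiability, and the stay-at-home question above all reduce to scanning models. A sketch under the slide's definitions (the function names and the restriction to the three listed worlds are my own framing):

```python
from itertools import product

def valid(sentence, n_symbols):
    """Valid: the sentence is true in every model."""
    return all(sentence(*m) for m in product([True, False], repeat=n_symbols))

def satisfiable(sentence, n_symbols):
    """Satisfiable: the sentence is true in at least one model."""
    return any(sentence(*m) for m in product([True, False], repeat=n_symbols))

print(valid(lambda p: p or not p, 1))        # True:  P OR NOT P is valid
print(satisfiable(lambda p: p and not p, 1)) # False: P AND NOT P is unsatisfiable

# The slide's three models, as (Sick, StayAtHome) pairs:
worlds = [(True, True), (False, False), (False, True)]
# Entailment restricted to these models: every world where Sick holds
# must also be a world where StayAtHome holds.
print(all(stay for sick, stay in worlds if sick))  # True
```

Only the first of the three worlds has Sick = true, and StayAtHome is true there, so within this model set Sick does entail StayAtHome.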