U of M CS 5751 - Explanation Based Learning


CS 5751 Machine Learning, Chapter 11: Explanation-Based Learning

Explanation-Based Learning (EBL)

One definition: learning general problem-solving techniques by observing and analyzing human solutions to specific problems.

The EBL Hypothesis

By understanding why an example is a member of a concept, we can learn the essential properties of the concept.

Trade-off: the need to collect many examples, versus the ability to "explain" single examples (given a "domain" theory).

Learning by Generalizing Explanations

Given:
– a Goal (e.g., some predicate calculus statement)
– a Situation Description (facts)
– a Domain Theory (inference rules)
– an Operationality Criterion

Use a problem solver to justify the goal in terms of the facts, using the rules. Generalize the justification as much as possible. The operationality criterion states which terms may appear in the generalized result.

Standard Approach to EBL

[Diagram: an explanation is a detailed proof of the goal from the facts; after learning, the solver goes directly from the facts to the goal.]

Unification-Based Generalization

– An explanation is an inter-connected collection of "pieces" of knowledge (inference rules, rewrite rules, etc.)
– These "rules" are connected using unification, as in Prolog
– The generalization task is to compute the most general unifier that allows the "knowledge pieces" to be connected together as generally as possible

The EGGS Algorithm (Mooney, 1986)

bindings = { }
FOR EVERY equality between patterns P and Q in the explanation DO
    bindings = unify(P, Q, bindings)
FOR EVERY pattern P DO
    P = substitute-in-values(P, bindings)
Collect the leaf nodes and the goal node

Sample EBL Problem

Initial Domain Theory:
knows(?x,?y) AND
nice-person(?y) -> likes(?x,?y)
animate(?z) -> knows(?z,?z)
human(?u) -> animate(?u)
friendly(?v) -> nice-person(?v)
happy(?w) -> nice-person(?w)

Specific Example: given human(John) AND happy(John) AND male(John), show that likes(John,John).

Explanation to Solve Problem

likes(John,John)
    knows(John,John)
        animate(John)
            human(John)
    nice-person(John)
        happy(John)

Explanation Structure

The equalities connecting the patterns in the explanation:

likes(John,John) = likes(?x,?y)
knows(?x,?y) = knows(?z,?z)
animate(?z) = animate(?u)
human(?u) = human(John)
nice-person(?y) = nice-person(?w)
happy(?w) = happy(John)

Necessary unifications: all variables must match ?z. The bindings to the ground facts about John are not retained, which is what makes the result general.

Resulting Rule: human(?z) AND happy(?z) -> likes(?z,?z)

Prototypical EBL Architecture

[Diagram: a Problem Solver (Understander) takes a specific goal/problem, a (partial) external solution, and a Knowledge Base, and produces an Explanation; a Generalizer turns the Explanation into a New General Concept, which is added back to the Knowledge Base.]

Imperfect Theories and EBL

– Incomplete Theory Problem: cannot build explanations of specific problems because of missing knowledge.
– Intractable Theory Problem: have enough knowledge, but not enough computer time to build a specific explanation.
– Inconsistent Theory Problem: can derive inconsistent results from the theory (e.g., because of default rules).

Some Complications

Inconsistencies and incompleteness may be due to abstractions and assumptions that make a theory tractable. Inconsistencies may also arise from missing knowledge (incompleteness), e.g., from making the closed-world assumption.

Issues with Imperfect Theories

Detecting imperfections:
– "broken" explanations (a missing clause)
– contradiction detection (proving P and not P)
– multiple explanations (but these are expected!)
– resources exceeded

Correcting imperfections:
– experimentation, motivated by the failure type
(explanation-based)
– make approximations/assumptions: assume something is true

EBL as Operationalization (Speedup Learning)

Assuming a complete problem solver and unlimited time, EBL already knows how to recognize all the concepts it will ever know. What it learns is how to make its knowledge operational (Mostow). Is this learning? Isn't 99% of human learning of this type?

Knowledge-Level Learning (Newell, Dietterich)

Knowledge closure: all things that can be inferred from a collection of rules and facts.

"Pure" EBL only learns to solve problems faster, not to solve problems that were previously insoluble. Inductive learners make inductive leaps, and hence can solve more problems after learning. What happens if we consider resource limits (e.g., time) on problem solving?

Negative Effects of Speedup Learning

The "Utility Problem":
– time is wasted checking "promising" rules; rules that almost match waste more time than obviously irrelevant ones
– general, broadly applicable rules mask more efficient special cases

Defining Utility (Minton)

Utility = (AvgSav * ApplFreq) - AvgMatchCost

where
– AvgSav: average time saved when the rule is used
– ApplFreq: probability that the rule succeeds, given that its preconditions are tested
– AvgMatchCost: cost of checking the rule's preconditions

Rules with negative utility, estimated on training data, are discarded.

Learning for Search-Based Planners

Two options:
1. Save composite collections of primitive operators, called MACROPS
   – each explanation is turned into a rule added to the knowledge base
2.
Have a domain theory about your problem solver:
   – use an explicit declarative representation
   – build explanations about how problems were solved: which choices led to failure, which to success, etc.
   – learn evaluation functions (prefer pursuing certain operations in certain situations)

Reasons for Control Rules

– To improve search efficiency (prevent going down "blind alleys")
– To improve solution quality (we don't necessarily want the first solution found via depth-first search)
– To lead the problem solver down seemingly unpromising paths, overcoming default heuristics designed to keep the problem solver from exploring a combinatorial number of paths
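The EGGS loop and the sample problem above can be made concrete. Below is a minimal Python sketch, assuming a tuple encoding of patterns and my own illustrative `unify`/`substitute` helpers (variables are strings beginning with "?"); only the rule-to-rule equalities are unified, so the bindings to John drop out and the learned rule matches the slides' result up to variable renaming:

```python
# Sketch of EGGS-style generalization over the slides' sample problem.
# The encoding and helper names are illustrative, not from any real system.

def is_var(t):
    return isinstance(t, str) and t.startswith('?')

def walk(t, bindings):
    # Follow variable bindings until reaching a non-variable or unbound variable.
    while is_var(t) and t in bindings:
        t = bindings[t]
    return t

def unify(p, q, bindings):
    p, q = walk(p, bindings), walk(q, bindings)
    if p == q:
        return bindings
    if is_var(p):
        return {**bindings, p: q}
    if is_var(q):
        return {**bindings, q: p}
    if isinstance(p, tuple) and isinstance(q, tuple) and len(p) == len(q):
        for a, b in zip(p, q):
            bindings = unify(a, b, bindings)
            if bindings is None:
                return None
        return bindings
    return None  # clash: cannot unify

def substitute(t, bindings):
    if is_var(t):
        v = walk(t, bindings)
        return substitute(v, bindings) if isinstance(v, tuple) else v
    if isinstance(t, tuple):
        return tuple(substitute(x, bindings) for x in t)
    return t

# Equalities connecting rule patterns in the explanation structure;
# the ground facts human(John), happy(John) are deliberately left out,
# which is what makes the learned rule general.
equalities = [
    (('knows', '?x', '?y'), ('knows', '?z', '?z')),    # likes-rule <- knows-rule
    (('animate', '?z'), ('animate', '?u')),            # knows-rule <- animate-rule
    (('nice-person', '?y'), ('nice-person', '?w')),    # likes-rule <- happy-rule
]

bindings = {}
for p, q in equalities:
    bindings = unify(p, q, bindings)

# Collect the generalized leaves and goal: the learned macro-rule,
# equivalent to human(?z) AND happy(?z) -> likes(?z,?z) up to renaming.
leaves = [substitute(('human', '?u'), bindings),
          substitute(('happy', '?w'), bindings)]
goal = substitute(('likes', '?x', '?y'), bindings)
print(leaves, '->', goal)
```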
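The speedup view of EBL (go directly from facts to goal after learning) can also be sketched. This is a naive forward chainer over ground atoms, assuming the slides' domain theory instantiated for John; the counts are rule firings, standing in for real timings:

```python
# Sketch of EBL as speedup learning: the same goal is provable before and
# after learning, but the learned macro-rule shortens the derivation.

def forward_chain(facts, rules, goal):
    """Naively fire rules until the goal is derived; return firing count."""
    known = set(facts)
    firings = 0
    changed = True
    while changed and goal not in known:
        changed = False
        for prems, concl in rules:
            if concl not in known and all(p in known for p in prems):
                known.add(concl)
                firings += 1
                changed = True
                if concl == goal:
                    return firings
    return firings if goal in known else None

facts = [('human', 'John'), ('happy', 'John'), ('male', 'John')]
theory = [
    ([('knows', 'John', 'John'), ('nice-person', 'John')],
     ('likes', 'John', 'John')),
    ([('animate', 'John')], ('knows', 'John', 'John')),
    ([('human', 'John')], ('animate', 'John')),
    ([('happy', 'John')], ('nice-person', 'John')),
]
goal = ('likes', 'John', 'John')

before = forward_chain(facts, theory, goal)           # four rule firings
learned = [([('human', 'John'), ('happy', 'John')], goal)]
after = forward_chain(facts, learned + theory, goal)  # one firing of the macro-rule
print(before, after)
```

Note the utility trade-off from the slides: the macro-rule saves firings here, but every learned rule also adds match cost on problems where it never applies.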
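Minton's utility filter is directly computable. A small sketch, where the rule names and statistics are made-up illustrations rather than measurements from any real planner:

```python
# Sketch of Minton-style utility filtering for learned rules.

def utility(avg_sav, appl_freq, avg_match_cost):
    """Utility = (AvgSav * ApplFreq) - AvgMatchCost, per the slides."""
    return avg_sav * appl_freq - avg_match_cost

# Hypothetical rules with (AvgSav, ApplFreq, AvgMatchCost) estimated
# on training data:
rules = {
    'macro-a': (50.0, 0.40, 12.0),  # saves a lot, fires often -> keep
    'macro-b': (80.0, 0.02, 9.0),   # rarely applicable, costly match -> discard
}

# Discard rules with negative utility.
kept = {name for name, (s, f, c) in rules.items() if utility(s, f, c) > 0}
print(kept)
```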

