UMD CMSC 421 - Learning - Decision Trees

Outline
- Decision trees: Learning Agent; Types of Learning; Supervised Learning; Logic-Inference Formulation; Rewarded Card Example; A Possible Training Set; Example Set; Hypothesis Space; Inductive Learning Scheme; Size of the Hypothesis Space; Multiple Inductive Hypotheses; Keep-It-Simple (KIS) Bias; Predicate-Learning Methods; Decision Tree; Predicate as a Decision Tree; Training Set; Possible Decision Tree; Getting Started; Assume It's A / B / C / D / E; Choice of Second Predicate; Choice of Third Predicate; Final Tree; Learning a Decision Tree; Using Information Theory; # of Questions to Identify an Object; Information Content of an Answer; Application to Decision Tree; Miscellaneous Issues; Applications of Decision Tree; Summary
- Learning II: Neural Networks: Neural Function; Biology of a Neuron; Brain Structure; Comparison of Computing Power; Neural Networks; Neural Unit; Model Neuron; Neural Computation; Learning Rules; Perceptron Learning Rule; Perceptron Learning Algorithm; Representation Limitations of a Perceptron; Perceptron Learnability; Layered Feed-Forward Network; "Executing" Neural Networks

Learning - Decision Trees
Russell and Norvig: Chapter 18, Sections 18.1 through 18.4
CMSC 421 – Fall 2002
Material from Jean-Claude Latombe and Daphne Koller

Learning Agent
[Diagram: agent architecture — the agent senses the environment via sensors (percepts) and acts via actuators (actions); a learning element, guided by a critic and a performance standard, updates the KB used by the problem solver.]

Types of Learning
- Supervised learning: classification, prediction
- Unsupervised learning: clustering, segmentation, pattern discovery
- Reinforcement learning: learning MDPs, online learning

Supervised Learning
A general framework:
- Logic-based/discrete learning: learn a function f(X) → {0,1} (decision trees, version space method)
- Probabilistic/numeric learning: learn a function f(X) → R (neural nets)

Informally: someone gives you a bunch of examples, telling you what each one is. Eventually, you figure out the mapping from the properties (features) of the examples to their type.

Logic-Inference Formulation
- Background knowledge KB
- Training set D (observed knowledge) such that KB ⊭ D
- Inductive inference: find h (the inductive hypothesis) such that
  - KB and h are consistent, and
  - KB, h ⊨ D
- Unlike in the function-learning formulation, h must be a logical sentence, but its inference may benefit from the background knowledge.
- Note that h = D is a trivial but uninteresting solution (data caching).

Rewarded Card Example
Deck of cards, with each card designated by [r,s], its rank and suit; some cards are "rewarded".
Background knowledge KB:
- ((r=1) ∨ … ∨ (r=10)) ⇒ NUM(r)
- ((r=J) ∨ (r=Q) ∨ (r=K)) ⇒ FACE(r)
- ((s=S) ∨ (s=C)) ⇒ BLACK(s)
- ((s=D) ∨ (s=H)) ⇒ RED(s)
Training set D:
REWARD([4,C]) ∧ REWARD([7,C]) ∧ REWARD([2,S]) ∧ ¬REWARD([5,H]) ∧ ¬REWARD([J,S])
Possible hypothesis:
h ≡ (NUM(r) ∧ BLACK(s) ⇔ REWARD([r,s]))
There are several possible inductive hypotheses.

Learning a Predicate
- Set E of objects (e.g., cards)
- Goal predicate CONCEPT(x), where x is an object in E, that takes the value True or False (e.g., REWARD)
- Observable predicates A(x), B(x), … (e.g., NUM, RED)
- Training set: values of CONCEPT for some combinations of values of the observable predicates
- Find a representation of CONCEPT in the form CONCEPT(x) ⇔ S(A,B,…), where S(A,B,…) is a sentence built with the observable predicates, e.g.: CONCEPT(x) ⇔ A(x) ∧ (¬B(x) ∨ C(x))

A Possible Training Set

Ex. # | A     | B     | C     | D     | E     | CONCEPT
------|-------|-------|-------|-------|-------|--------
1     | True  | True  | False | True  | False | False
2     | True  | False | False | False | False | True
3     | False | False | True  | True  | True  | False
4     | True  | True  | True  | False | True  | True
5     | False | True  | True  | False | False | False
6     | True  | True  | False | True  | True  | False
7     | False | False | True  | False | True  | False
8     | True  | False | True  | False | True  | True
9     | False | False | False | True  | True  | False
10    | True  | True  | True  | True  | False | True

Note that the training set does not say whether an observable predicate A, …, E is pertinent or not.

Example Set
- An example consists of the values of CONCEPT and of the observable predicates for some object x.
- An example is positive if CONCEPT is True, else it is negative.
- The set E of all examples is the example set; the training set is a subset of E (a small one!).
- A hypothesis is any sentence h of the form CONCEPT(x) ⇔ S(A,B,…), where S(A,B,…) is a sentence built with the observable predicates.
- The set of all hypotheses is called the hypothesis space H.
- A hypothesis h agrees with an example if it gives the correct value of CONCEPT.

Hypothesis Space
[Diagram: the example set X shown as a region containing positive (+) and negative (−) examples; a hypothesis corresponds to a subset of X labeled positive.]

Inductive Learning Scheme
[Diagram: the training set D, drawn from the example set, together with the hypothesis space H, yields the inductive hypothesis h.]

Size of the Hypothesis Space
- n observable predicates → 2^n entries in the truth table
- In the absence of any restriction (bias), there are 2^(2^n) hypotheses to choose from
- n = 6 → about 2×10^19 hypotheses!

Rewarded Card Example (continued)
Background knowledge KB:
- ((r=1) ∨ … ∨ (r=10)) ⇒ NUM([r,s])
- ((r=J) ∨ (r=Q) ∨ (r=K)) ⇒ FACE([r,s])
- ((s=S) ∨ (s=C)) ⇒ BLACK([r,s])
- ((s=D) ∨ (s=H)) ⇒ RED([r,s])
Training set D:
REWARD([4,C]) ∧ REWARD([7,C]) ∧ REWARD([2,S]) ∧ ¬REWARD([5,H]) ∧ ¬REWARD([J,S])
Possible inductive hypotheses — all of these agree with every example in the training set:
- h1 ≡ NUM(x) ∧ BLACK(x) ⇔ REWARD(x)
- h2 ≡ BLACK([r,s]) ∧ ¬(r=J) ⇔ REWARD([r,s])
- h3 ≡ (([r,s]=[4,C]) ∨ ([r,s]=[7,C]) ∨ ([r,s]=[2,S])) ∧ ¬([r,s]=[5,H]) ∧ ¬([r,s]=[J,S]) ⇔ REWARD([r,s])

Multiple Inductive Hypotheses
Need for a system of preferences – called a …
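The claim that a hypothesis "agrees with an example if it gives the correct value of CONCEPT" can be checked mechanically. A minimal Python sketch, using the ten rows of the training table above and the example sentence CONCEPT(x) ⇔ A(x) ∧ (¬B(x) ∨ C(x)) discussed in the text (the variable and function names are illustrative, not from the slides):

```python
# The ten training examples from "A Possible Training Set":
# ((A, B, C, D, E), CONCEPT)
training_set = [
    ((True,  True,  False, True,  False), False),
    ((True,  False, False, False, False), True),
    ((False, False, True,  True,  True),  False),
    ((True,  True,  True,  False, True),  True),
    ((False, True,  True,  False, False), False),
    ((True,  True,  False, True,  True),  False),
    ((False, False, True,  False, True),  False),
    ((True,  False, True,  False, True),  True),
    ((False, False, False, True,  True),  False),
    ((True,  True,  True,  True,  False), True),
]

def h(a, b, c, d, e):
    """Candidate hypothesis: CONCEPT(x) <=> A(x) and (not B(x) or C(x))."""
    return a and (not b or c)

# h agrees with an example if it predicts the example's CONCEPT value.
agrees = all(h(*features) == concept for features, concept in training_set)
print(agrees)  # True: this hypothesis is consistent with all ten examples
```

That this sentence fits all ten rows, while many other sentences over A, …, E do too, is exactly the "multiple inductive hypotheses" problem the slides raise next.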
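The outline's later slides ("Using Information Theory", "Information Content of an Answer", "Application to Decision Tree") choose decision-tree tests by information content. As a sketch of that idea — not the slides' own code — here is greedy information-gain selection of the first predicate for the same training table, in Python:

```python
from math import log2

# The ten examples from "A Possible Training Set": ((A, B, C, D, E), CONCEPT)
examples = [
    ((True,  True,  False, True,  False), False),
    ((True,  False, False, False, False), True),
    ((False, False, True,  True,  True),  False),
    ((True,  True,  True,  False, True),  True),
    ((False, True,  True,  False, False), False),
    ((True,  True,  False, True,  True),  False),
    ((False, False, True,  False, True),  False),
    ((True,  False, True,  False, True),  True),
    ((False, False, False, True,  True),  False),
    ((True,  True,  True,  True,  False), True),
]

def entropy(exs):
    """Information content, in bits, of the CONCEPT labels in exs."""
    if not exs:
        return 0.0
    p = sum(1 for _, concept in exs if concept) / len(exs)
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def information_gain(exs, i):
    """Expected drop in entropy from testing observable predicate i."""
    yes = [ex for ex in exs if ex[0][i]]
    no  = [ex for ex in exs if not ex[0][i]]
    remainder = (len(yes) / len(exs)) * entropy(yes) \
              + (len(no)  / len(exs)) * entropy(no)
    return entropy(exs) - remainder

gains = {name: information_gain(examples, i) for i, name in enumerate("ABCDE")}
best = max(gains, key=gains.get)
print(best)  # 'A'
```

Splitting on A makes the A=False branch purely negative (examples 3, 5, 7, 9), so A carries the most information of the five predicates; B, by contrast, leaves both branches with the same 2:3 label mix and has zero gain.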

