Machine Learning: Decision Trees and Overfitting
Reading: Mitchell, Chapter 3; Bishop, Section 1.6

Machine Learning 10-701
Tom M. Mitchell, Machine Learning Department, Carnegie Mellon University
September 12, 2006

Machine Learning 10-701/15-781
Instructors: Tom Mitchell, Eric Xing
TAs: Fan Guo, Yifen Huang, Indra Rustandi
Course assistant: Sharon Cavlovich
See the course webpage (www.cs.cmu.edu/~epxing/Class/10701) for office hours, grading policy, final exam date, late homework policy, and syllabus details.

Machine Learning: the study of algorithms that improve their performance P at some task T with experience E. A well-defined learning task is given by <P, T, E>.

Example learning tasks:
- Learning to predict emergency C-sections (Sims et al., 2000): 9,714 patient records, each with 215 features.
- Learning to detect objects in images (Prof. H. Schneiderman), with example training images for each orientation.
- Learning to classify text documents: company home page vs. personal home page vs. university home page vs. ...
- Learning to classify whether a person is reading a noun vs. a verb (Rustandi et al., 2005).

Growth of machine learning
Machine learning is the preferred approach to speech recognition, natural language processing, computer vision, medical outcomes analysis, robot control, and many other software applications. This ML niche is growing because of improved machine learning algorithms, increased data capture and networking, software too complex to write by hand, new sensors and I/O devices, and demand for self-customization to the user and environment.

Function approximation and decision tree learning

Function approximation setting:
- Set of possible instances X
- Unknown target function f: X → Y
- Set of function hypotheses H = { h | h: X → Y }
Given: training examples {<xi, yi>} of the unknown target function f.
Determine: a hypothesis h ∈ H that best approximates f.

How would you represent AB ∨ CD(¬E)?

Decision trees:
- Each internal node tests one attribute Xi.
- Each branch from a node selects one value for Xi.
- Each leaf node predicts Y (or P(Y | x ∈ leaf)).
Top-down induction of decision trees (ID3, C4.5) starts with node = Root and greedily grows the tree from there.

Entropy
Entropy H(X) of a random variable X: the expected number of bits needed to encode a randomly drawn value of X (under the most efficient code).
Why? Information theory: the most efficient code assigns -log2 P(X=i) bits to encode the message X=i. So the expected number of bits to code one random X is

  H(X) = - Σ_{i=1..n} P(X=i) log2 P(X=i),

where n is the number of possible values for X.

Specific conditional entropy of X given Y=v:
  H(X | Y=v) = - Σ_{i=1..n} P(X=i | Y=v) log2 P(X=i | Y=v)

Conditional entropy of X given Y:
  H(X | Y) = Σ_{v ∈ values(Y)} P(Y=v) H(X | Y=v)

Mutual information (a.k.a. information gain) of X and Y:
  I(X, Y) = H(X) - H(X | Y) = H(Y) - H(Y | X)

Sample entropy and information gain: compute the same quantities from the empirical distribution over a sample S. Writing S_v for the subset of S for which A = v,

  Gain(S, A) = H_S(Y) - Σ_{v ∈ values(A)} (|S_v| / |S|) H_{S_v}(Y),

i.e., Gain(S, A) is the mutual information between attribute A and the target class variable, estimated over the sample S.
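To make these definitions concrete, here is a minimal Python sketch of sample entropy and Gain(S, A); the function names and the toy sample are illustrative, not part of the lecture.

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Sample entropy H(Y) = -sum_i p_i log2 p_i over the empirical label distribution."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(examples, attribute, target="y"):
    """Gain(S, A) = H_S(Y) - sum_v (|S_v|/|S|) H_{S_v}(Y), where S_v = {x in S : A(x) = v}."""
    n = len(examples)
    remainder = 0.0
    for v in set(ex[attribute] for ex in examples):
        subset = [ex[target] for ex in examples if ex[attribute] == v]
        remainder += (len(subset) / n) * entropy(subset)
    return entropy([ex[target] for ex in examples]) - remainder

# Toy sample S: x1 determines y completely, x2 is uninformative.
S = [
    {"x1": "a", "x2": 0, "y": True},
    {"x1": "a", "x2": 1, "y": True},
    {"x1": "b", "x2": 0, "y": False},
    {"x1": "b", "x2": 1, "y": False},
]
print(information_gain(S, "x1"))  # 1.0 bit
print(information_gain(S, "x2"))  # 0.0 bits
```

ID3 uses exactly this quantity to choose which attribute to test at each node: the attribute with the highest Gain(S, A) over the examples that reach that node.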
Which tree should we output?
ID3 performs a heuristic search through the space of decision trees and stops at the smallest acceptable tree. Why? Occam's razor: prefer the simplest hypothesis that fits the data.

To combat overfitting (post-pruning):
- Split the data into a training set and a validation set.
- Create a tree that classifies the training set correctly.
- Then greedily remove nodes whose removal does not hurt accuracy on the validation set.

What you should know:
- Well-posed function approximation problems: an instance space X, a sample of labeled training data {<xi, yi>}, and a hypothesis space H = { f : X → Y }.
- Learning is a search/optimization problem over H, with various objective functions: minimize training error (0-1 loss); or, among hypotheses that minimize training error, select the shortest.
- Decision tree learning: greedy top-down learning of decision trees (ID3, C4.5); overfitting and tree/rule post-pruning; extensions. (A minimal sketch of the greedy procedure appears after the questions below.)

Questions to think about:
1. Why use information gain to select attributes in decision trees? What other criteria seem reasonable, and what are the tradeoffs in making this choice?
2. ID3 and C4.5 are heuristic algorithms that search through the space of decision trees. Why not just do an exhaustive search?
3. Consider a target function f: <x1, x2> → y, where x1 and x2 are real-valued and y is boolean. What is the set of decision surfaces describable with decision trees that use each attribute at most once?
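As a reference point for the greedy, top-down search described above (and for questions 1 and 2), here is a minimal ID3-style sketch in Python. The helper names, tie-breaking, and stopping rules are simplifying assumptions rather than the exact ID3/C4.5 pseudocode; a real implementation would also handle missing and continuous attribute values, and would post-prune the tree against a held-out validation set as described above.

```python
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def gain(examples, attr, target="y"):
    # Gain(S, A) = H_S(Y) - sum_v (|S_v|/|S|) H_{S_v}(Y)
    n = len(examples)
    g = entropy([e[target] for e in examples])
    for v in set(e[attr] for e in examples):
        sub = [e[target] for e in examples if e[attr] == v]
        g -= (len(sub) / n) * entropy(sub)
    return g

def id3(examples, attrs, target="y"):
    """Greedy top-down construction: test the highest-gain attribute at this node,
    then recurse on each branch; returns a nested dict, or a label at a leaf."""
    labels = [e[target] for e in examples]
    if len(set(labels)) == 1 or not attrs:
        return Counter(labels).most_common(1)[0][0]  # leaf: majority label
    best = max(attrs, key=lambda a: gain(examples, a, target))
    branches = {}
    for v in set(e[best] for e in examples):
        subset = [e for e in examples if e[best] == v]
        branches[v] = id3(subset, [a for a in attrs if a != best], target)
    return {best: branches}

# Tiny example; "outlook" and "windy" are made-up attributes for illustration.
S = [
    {"outlook": "sunny", "windy": False, "y": True},
    {"outlook": "sunny", "windy": True,  "y": False},
    {"outlook": "rain",  "windy": False, "y": True},
    {"outlook": "rain",  "windy": True,  "y": True},
]
print(id3(S, ["outlook", "windy"]))
```

Note that the search is greedy: each split is chosen locally by information gain and never revisited, so ID3 is not guaranteed to find the globally smallest tree consistent with the data (compare question 2).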