Decision trees
10-601 Machine Learning

Types of classifiers
• We can divide the large variety of classification approaches into roughly three main types:
1. Instance-based classifiers
   - Use the observations directly (no models)
   - e.g., K nearest neighbors
2. Generative
   - Build a generative statistical model
   - e.g., Bayesian networks
3. Discriminative
   - Directly estimate a decision rule/boundary
   - e.g., decision trees

Decision trees
• One of the most intuitive classifiers
• Easy to understand and construct
• Surprisingly, also works very (very) well*
  * More on this towards the end of this lecture

Let's build a decision tree!

Structure of a decision tree
[Figure: an example decision tree with internal nodes A, C, I, F and leaves, edges labeled yes/no. Legend: A = age > 26, I = income > 40K, C = citizen, F = female]
• Internal nodes correspond to attributes (features)
• Leaves correspond to classification outcomes
• Edges denote assignments

Netflix dataset
Columns Type through Famous actors are the attributes (features); Liked? is the label.

Movie  Type      Length  Director  Famous actors  Liked?
m1     Comedy    Short   Adamson   No             Yes
m2     Animated  Short   Lasseter  No             No
m3     Drama     Medium  Adamson   No             Yes
m4     Animated  Long    Lasseter  Yes            No
m5     Comedy    Long    Lasseter  Yes            No
m6     Drama     Medium  Singer    Yes            Yes
m7     Animated  Short   Singer    No             Yes
m8     Comedy    Long    Adamson   Yes            Yes
m9     Drama     Medium  Lasseter  No             Yes

Building a decision tree
Function BuildTree(n, A)                    // n: samples (rows), A: attributes
  If empty(A) or all n(L) are the same      // n(L): labels for the samples in this set
    status = leaf
    class = most common class in n(L)
  else
    status = internal
    a = bestAttribute(n, A)                 // we will discuss this function next
    LeftNode  = BuildTree(n(a=1), A \ {a})  // recursive calls to create the left and right subtrees;
    RightNode = BuildTree(n(a=0), A \ {a})  // n(a=1) is the set of samples in n for which attribute a is 1
  end
end

Identifying 'bestAttribute'
• There are many possible ways to select the best attribute for a given set.
• We will discuss one possible way, which is based on information theory and generalizes well to non-binary variables.

Entropy
• Quantifies the amount of uncertainty associated with a specific probability distribution
• The higher the entropy, the less confident we are in the outcome
• Definition:
  H(X) = -Σ_c p(X = c) log2 p(X = c)
(Claude Shannon, 1916-2001; most of this work was done at Bell Labs)

Entropy: examples
• So, if P(X = 1) = 1 then
  H(X) = -p(X = 1) log2 p(X = 1) - p(X = 0) log2 p(X = 0) = -1 log2 1 - 0 log2 0 = 0
• If P(X = 1) = .5 then
  H(X) = -p(X = 1) log2 p(X = 1) - p(X = 0) log2 p(X = 0) = -.5 log2 .5 - .5 log2 .5 = 1
[Figure: H(X) as a function of P(X = 1)]
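These two endpoints, and the .92 value used repeatedly below, are easy to check numerically. The following is a minimal Python sketch (our own illustration, not the lecture's code; the function name entropy is ours) that evaluates H for a list of observed values:

from collections import Counter
from math import log2

def entropy(labels):
    """H(X) = -sum_c p(X=c) * log2 p(X=c).

    Classes with probability 0 never appear in the Counter,
    which matches the 0 * log2(0) = 0 convention used above.
    """
    n = len(labels)
    return sum(-(k / n) * log2(k / n) for k in Counter(labels).values())

print(entropy([1, 1, 1, 1]))          # P(X=1) = 1  -> 0.0 (no uncertainty)
print(entropy([1, 0, 1, 0]))          # P(X=1) = .5 -> 1.0 (maximal for a binary variable)
print(entropy(["Yes", "Yes", "No"]))  # P = 2/3     -> ~0.918, the .92 seen below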
Interpreting entropy
• Entropy can be interpreted from an information standpoint
• Assume both sender and receiver know the distribution. How many bits, on average, would it take to transmit one value?
• If P(X = 1) = 1 then the answer is 0 (we don't need to transmit anything)
• If P(X = 1) = .5 then the answer is 1 (either value is equally likely)
• If 0 < P(X = 1) < .5 or .5 < P(X = 1) < 1 then the answer is between 0 and 1
  - Why?

Conditional entropy
• Entropy measures the uncertainty in a specific distribution
• What if both sender and receiver know something about the transmission?
• For example, say I want to send the label (Liked) when the length is known
• This becomes a conditional entropy problem: H(Li | Le = v) is the entropy of Liked among movies with length v

Movie length  Liked?
Short         Yes
Short         No
Medium        Yes
Long          No
Long          No
Medium        Yes
Short         Yes
Long          Yes
Medium        Yes

Conditional entropy: examples for specific values
Let's compute H(Li | Le = v) from the table above:
1. H(Li | Le = S) = .92
2. H(Li | Le = M) = 0
3. H(Li | Le = L) = .92

Conditional entropy
• We can generalize the conditional entropy idea to determine H(Li | Le)
• That is, what is the expected number of bits we need to transmit if both sides know the value of Le for each of the records (samples)?
• Definition (we explained how to compute each term in the previous slides):
  H(X | Y) = Σ_i P(Y = i) H(X | Y = i)

Conditional entropy: example
• Let's compute H(Li | Le), using the values we already computed: H(Li | Le = S) = .92, H(Li | Le = M) = 0, H(Li | Le = L) = .92
  H(Li | Le) = P(Le = S) H(Li | Le = S) + P(Le = M) H(Li | Le = M) + P(Le = L) H(Li | Le = L)
             = 1/3 · .92 + 1/3 · 0 + 1/3 · .92 = 0.61

Information gain
• How much do we gain (in terms of reduction in entropy) from knowing one of the attributes?
• In other words, what is the reduction in entropy from this knowledge?
• Definition: IG(X | Y)* = H(X) - H(X | Y)
  * IG(X | Y) is always ≥ 0. Proof: Jensen's inequality.

Where we are
• We were looking for a good criterion for selecting the best attribute for a node split
• We defined entropy, conditional entropy, and information gain
• We will now use information gain as our criterion for a good split
• That is, bestAttribute will return the attribute that maximizes the information gain at each node

Building a decision tree
• The BuildTree pseudocode is unchanged; bestAttribute(n, A) is now based on information gain

Example: Root attribute

Movie  Type      Length  Director   Famous actors  Liked?
m1     Comedy    Short   Adamson    No             Yes
m2     Animated  Short   Lasseter   No             No
m3     Drama     Medium  Reiner     No             Yes
m4     Animated  Long    Adamson    Yes            No
m5     Comedy    Long    Lasseter   Yes            No
m6     Drama     Medium  Singer     Yes            Yes
m7     Animated  Short   Singer     No             Yes
m8     Comedy    Long    Marshall   Yes            Yes
m9     Drama     Medium  Linklater  No             Yes

P(Li = yes) = 2/3
H(Li) = .91
H(Li | T) = ?
H(Li | Le) = ?
H(Li | D) = ?
H(Li | F) = ?

Example: Root attribute (continued)

Movie  Type      Length  Director  Famous actors  Liked?
m1     Comedy    Short   Adamson   No             Yes
m2     Animated  Short   Lasseter  No             No
m3     Drama     Medium  Adamson   No             Yes
m4     Animated  Long    Lasseter  Yes            No
m5     Comedy    Long    Lasseter  Yes            No
m6     Drama     Medium  Singer    Yes            Yes
m7     Animated  Short   Singer    No             Yes
m8     Comedy    Long    Adamson   Yes            Yes
m9     Drama     Medium  Lasseter  No             Yes
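The blank conditional entropies in the root-attribute example can be filled in mechanically. Here is a minimal Python sketch of that computation (our own code, not the lecture's; it uses the version of the table with directors Adamson, Lasseter, and Singer, and restates the entropy helper from the sketch above):

from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return sum(-(k / n) * log2(k / n) for k in Counter(labels).values())

def conditional_entropy(values, labels):
    # H(X | Y) = sum_i P(Y = i) * H(X | Y = i)
    n = len(labels)
    return sum((values.count(v) / n) *
               entropy([l for a, l in zip(values, labels) if a == v])
               for v in set(values))

# Netflix dataset, rows m1..m9 (attribute columns plus the Liked? label).
data = {
    "Type":     ["Comedy", "Animated", "Drama", "Animated", "Comedy",
                 "Drama", "Animated", "Comedy", "Drama"],
    "Length":   ["Short", "Short", "Medium", "Long", "Long",
                 "Medium", "Short", "Long", "Medium"],
    "Director": ["Adamson", "Lasseter", "Adamson", "Lasseter", "Lasseter",
                 "Singer", "Singer", "Adamson", "Lasseter"],
    "Famous":   ["No", "No", "No", "Yes", "Yes", "Yes", "No", "Yes", "No"],
}
liked = ["Yes", "No", "Yes", "No", "No", "Yes", "Yes", "Yes", "Yes"]

print(f"H(Li) = {entropy(liked):.2f}")
for attr, values in data.items():
    h = conditional_entropy(values, liked)
    print(f"H(Li | {attr}) = {h:.2f}  IG = {entropy(liked) - h:.2f}")

On this data, H(Li | Length) comes out to the 0.61 computed above, and Director gives the largest information gain (~0.56), so bestAttribute would pick it for the root.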
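Finally, putting the pieces together: a runnable Python sketch of the BuildTree pseudocode with bestAttribute based on information gain. This is our own illustration under assumed conventions (samples as dicts, and one branch per observed attribute value rather than the binary 0/1 split in the pseudocode, since the information-gain criterion handles non-binary attributes), not the lecture's reference implementation:

from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return sum(-(k / n) * log2(k / n) for k in Counter(labels).values())

def build_tree(samples, labels, attributes):
    # samples: list of dicts mapping attribute name -> value; labels: one class per sample.
    # Leaf: attributes exhausted, or all labels in this node are the same.
    if not attributes or len(set(labels)) == 1:
        return Counter(labels).most_common(1)[0][0]  # most common class
    # bestAttribute: maximizing IG(labels | a) = H(labels) - H(labels | a) is the
    # same as minimizing H(labels | a), since H(labels) is fixed at this node.
    def h_cond(a):
        n = len(labels)
        return sum((sum(s[a] == v for s in samples) / n) *
                   entropy([l for s, l in zip(samples, labels) if s[a] == v])
                   for v in {s[a] for s in samples})
    a = min(attributes, key=h_cond)
    branches = {}
    for v in {s[a] for s in samples}:  # one subtree per observed value of a
        keep = [i for i, s in enumerate(samples) if s[a] == v]
        branches[v] = build_tree([samples[i] for i in keep],
                                 [labels[i] for i in keep],
                                 [x for x in attributes if x != a])
    return (a, branches)

# Usage on the same nine movies:
samples = [dict(zip(("Type", "Length", "Director", "Famous"), row)) for row in [
    ("Comedy", "Short", "Adamson", "No"),   ("Animated", "Short", "Lasseter", "No"),
    ("Drama", "Medium", "Adamson", "No"),   ("Animated", "Long", "Lasseter", "Yes"),
    ("Comedy", "Long", "Lasseter", "Yes"),  ("Drama", "Medium", "Singer", "Yes"),
    ("Animated", "Short", "Singer", "No"),  ("Comedy", "Long", "Adamson", "Yes"),
    ("Drama", "Medium", "Lasseter", "No"),
]]
liked = ["Yes", "No", "Yes", "No", "No", "Yes", "Yes", "Yes", "Yes"]
print(build_tree(samples, liked, ["Type", "Length", "Director", "Famous"]))
# Root split is Director; the Adamson and Singer branches are already pure "Yes" leaves.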