CMU CS 10701 - NONPARAMETRIC CLASSIFICATION AND ERROR ESTIMATION

School: Carnegie Mellon University
Course: CS 10701 - Introduction to Machine Learning

Chapter 7: NONPARAMETRIC CLASSIFICATION AND ERROR ESTIMATION

After studying the nonparametric density estimates in Chapter 6, we are now ready to discuss the problem of how to design nonparametric classifiers and estimate their classification errors. A nonparametric classifier does not rely on any assumption concerning the structure of the underlying density function. Therefore, the classifier becomes the Bayes classifier if the density estimates converge to the true densities when an infinite number of samples are used. The resulting error is the Bayes error, the smallest achievable error given the underlying distributions.

As was pointed out in Chapter 1, the Bayes error is a very important parameter in pattern recognition, assessing the classifiability of the data and measuring the discrimination capabilities of the features even before considering what type of classifier should be designed. The selection of features always results in a loss of classifiability. The amount of this loss may be measured by comparing the Bayes error in the feature space with the Bayes error in the original data space. The same is true for a classifier: the performance of the classifier may be compared with the Bayes error in the original data space. However, in practice we never have an infinite number of samples, and, due to the finite sample size, the density estimates and subsequently the estimate of the Bayes error have large biases and variances, particularly in a high-dimensional space. A similar trend was observed in the parametric cases of Chapter 5, but the trend is more severe with a nonparametric approach. These problems are addressed extensively in this chapter.

Both the Parzen and kNN approaches will be discussed. These two approaches offer similar algorithms for classification and error estimation, and give similar results. Also, the voting kNN procedure is included in this chapter because the procedure is very popular, although it is slightly different from the kNN density estimation approach.

7.1 General Discussion

Parzen Approach

Classifier: As we discussed in Chapter 3, the likelihood ratio classifier is given by

$$-\ln \frac{p_1(X)}{p_2(X)} \;\underset{\omega_1}{\overset{\omega_2}{\gtrless}}\; t,$$

where the threshold t is determined in various ways, depending on the type of classifier to be designed (e.g., Bayes, Neyman-Pearson, minimax, etc.). In this chapter, the true density functions are replaced by their estimates, discussed in Chapter 6. When the Parzen density estimate with a kernel function $\kappa_i(\cdot)$ is used, the likelihood ratio classifier becomes

$$-\ln \frac{\hat p_1(X)}{\hat p_2(X)} = -\ln \frac{\frac{1}{N_1}\sum_{j=1}^{N_1} \kappa_1(X - X_j^{(1)})}{\frac{1}{N_2}\sum_{j=1}^{N_2} \kappa_2(X - X_j^{(2)})} \;\underset{\omega_1}{\overset{\omega_2}{\gtrless}}\; t, \tag{7.1}$$

where $S = \{X_1^{(1)}, \ldots, X_{N_1}^{(1)}, X_1^{(2)}, \ldots, X_{N_2}^{(2)}\}$ is the given data set. Equation (7.1) classifies a test sample X into either $\omega_1$ or $\omega_2$, depending on whether the left-hand side is smaller or larger than the threshold t.

Error estimation: In order to estimate the error of this classifier from the given data set S, we may use the resubstitution (R) and leave-one-out (L) methods to obtain lower and upper bounds for the Bayes error. In the R method, all available samples are used to design the classifier, and the same sample set is tested. Therefore, when a sample $X_k^{(1)}$ from $\omega_1$ is tested, the following equation is used:

$$-\ln \frac{\frac{1}{N_1}\sum_{j=1}^{N_1} \kappa_1(X_k^{(1)} - X_j^{(1)})}{\frac{1}{N_2}\sum_{j=1}^{N_2} \kappa_2(X_k^{(1)} - X_j^{(2)})} \;\underset{\omega_1}{\overset{\omega_2}{\gtrless}}\; t. \tag{7.2}$$

If the $\omega_1$ side of (7.2) is satisfied, $X_k^{(1)}$ is correctly classified, and if the $\omega_2$ side is satisfied, $X_k^{(1)}$ is misclassified. The R estimate of the $\omega_1$ error, $\hat\varepsilon_{1R}$, is obtained by testing $X_1^{(1)}, \ldots, X_{N_1}^{(1)}$, counting the number of misclassified samples, and dividing that number by $N_1$. Similarly, $\hat\varepsilon_{2R}$ is estimated by testing $X_1^{(2)}, \ldots, X_{N_2}^{(2)}$.

On the other hand, when the L method is applied to test $X_k^{(1)}$, $X_k^{(1)}$ must be excluded from the design set. Therefore, the numerator of (7.2) must be replaced by

$$\frac{1}{N_1 - 1} \left[ \sum_{j=1}^{N_1} \kappa_1(X_k^{(1)} - X_j^{(1)}) - \kappa_1(0) \right]. \tag{7.3}$$

Again, $X_k^{(1)}$, $k = 1, \ldots, N_1$, are tested and the misclassified samples are counted. Note that the amount subtracted in (7.3), $\kappa_1(0)$, does not depend on k. When an $\omega_2$ sample is tested, the denominator of (7.2) is modified in the same way.

Typical kernel functions, such as (6.3), generally satisfy $\kappa(0) \ge \kappa(Y)$, and subsequently

$$\hat p_{1L}(X_k^{(1)}) = \frac{N_1 \hat p_{1R}(X_k^{(1)}) - \kappa_1(0)}{N_1 - 1} \le \hat p_{1R}(X_k^{(1)}). \tag{7.4}$$

That is, the L density estimate is always smaller than the R density estimate. Therefore, the left-hand side of (7.2) is larger in the L method than in the R method, and consequently $X_k^{(1)}$ has more of a chance to be misclassified. Also note that the L density estimate can be obtained from the R density estimate by simple scalar operations: subtracting $\kappa_1(0)$ and dividing by $N_1 - 1$. Therefore, the computation time needed to obtain both the L and R density estimates is almost the same as that needed for the R density estimate alone.
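To make the Parzen procedure concrete, here is a minimal Python sketch (not part of the original text) of the classifier (7.1) and the R and L error estimates of (7.2)-(7.4). It assumes an isotropic Gaussian kernel with bandwidth h and a default threshold t = 0; the function names and the numerical floor are illustrative choices, not the book's notation.

```python
import numpy as np

TINY = 1e-300  # numerical floor to keep the logarithms finite

def kernel_sums(test, design, h):
    """For each row Y of `test`, return sum_j kappa(Y - X_j) over `design`,
    with an isotropic Gaussian kernel kappa of bandwidth h."""
    n = design.shape[1]
    d2 = ((test[:, None, :] - design[None, :, :]) ** 2).sum(-1)
    return (2 * np.pi * h**2) ** (-n / 2) * np.exp(-0.5 * d2 / h**2).sum(axis=1)

def parzen_R_L_errors(X1, X2, h, t=0.0):
    """Resubstitution (R) and leave-one-out (L) error estimates for the
    Parzen likelihood-ratio classifier (7.1)."""
    N1, N2, n = len(X1), len(X2), X1.shape[1]
    k0 = (2 * np.pi * h**2) ** (-n / 2)              # kappa(0) for this kernel

    # Test the omega_1 samples: R vs. L estimates of p1, plain estimate of p2.
    s11 = kernel_sums(X1, X1, h)                     # includes the kappa(0) self-term
    p1R = s11 / N1                                   # numerator of (7.2)
    p1L = (s11 - k0) / (N1 - 1)                      # (7.3): subtract kappa(0), divide by N1-1
    p2 = kernel_sums(X1, X2, h) / N2
    e1R = np.mean(-np.log(np.maximum(p1R, TINY)) + np.log(np.maximum(p2, TINY)) > t)
    e1L = np.mean(-np.log(np.maximum(p1L, TINY)) + np.log(np.maximum(p2, TINY)) > t)

    # Test the omega_2 samples: now the denominator of (7.2) is modified instead.
    s22 = kernel_sums(X2, X2, h)
    p2R, p2L = s22 / N2, (s22 - k0) / (N2 - 1)
    p1 = kernel_sums(X2, X1, h) / N1
    e2R = np.mean(-np.log(np.maximum(p1, TINY)) + np.log(np.maximum(p2R, TINY)) < t)
    e2L = np.mean(-np.log(np.maximum(p1, TINY)) + np.log(np.maximum(p2L, TINY)) < t)
    return (e1R, e2R), (e1L, e2L)
```

Note how the L estimates reuse the kernel sums already computed for the R estimates: as the text observes, only the scalar operations of subtracting $\kappa(0)$ and dividing by $N - 1$ are added, so both estimates cost essentially one pass over the data.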
kNN Approach

Classifier: Using the kNN density estimate of Chapter 6, the likelihood ratio classifier becomes

$$-\ln \frac{\hat p_1(X)}{\hat p_2(X)} = n \ln \frac{d_1(X, X_{k_1 NN}^{(1)})}{d_2(X, X_{k_2 NN}^{(2)})} + \ln \frac{(k_2 - 1)\, N_1 |\Sigma_1|^{1/2}}{(k_1 - 1)\, N_2 |\Sigma_2|^{1/2}} \;\underset{\omega_1}{\overset{\omega_2}{\gtrless}}\; t, \tag{7.5}$$

where $X_{k_i NN}^{(i)}$ is the $k_i$-th nearest neighbor of X from $\omega_i$, the volume of the ellipsoidal region with normalized radius d is $v = \frac{\pi^{n/2}}{\Gamma(n/2 + 1)} |\Sigma|^{1/2} d^n$ from (B.1), and the distance is normalized by the class covariance matrix:

$$d_i^2(Y, X) = (Y - X)^T \Sigma_i^{-1} (Y - X).$$

In order to classify a test sample X, the $k_1$-th NN from $\omega_1$ and the $k_2$-th NN from $\omega_2$ are found, the distances from X to these neighbors are measured, and these distances are inserted into (7.5) to test whether the left-hand side is smaller or larger than t. In order to avoid unnecessary complexity, $k_1 = k_2$ $(= k)$ is assumed in this chapter.

Error estimation: The classification error based on a given data set S can be estimated by using the L and R methods. When $X_i^{(1)}$ from $\omega_1$ is tested by the R method, $X_i^{(1)}$ must be included as a member of the design set. Therefore, when the kNNs of $X_i^{(1)}$ are found from the $\omega_1$ design set, $X_i^{(1)}$ itself is included among these kNNs. Figure 7-1 shows how the kNNs are selected and how the distances to the k-th NNs are measured for k = 2. Note in Fig. 7-1 that the locus of points equidistant from $X_i^{(1)}$ becomes ellipsoidal, because the distance is normalized by $\Sigma_i$. Also, since $\Sigma_1 \ne \Sigma_2$ in general, two different ellipsoids are used for $\omega_1$ and $\omega_2$.

[Fig. 7-1: Selection of neighbors]

In the R method, $X_i^{(1)}$ itself and $X_{1NN}^{(1)}$ are the nearest and second nearest neighbors of $X_i^{(1)}$ from $\omega_1$, while $X_{1NN}^{(2)}$ and $X_{2NN}^{(2)}$ are the nearest and second nearest neighbors of $X_i^{(1)}$ from $\omega_2$. Thus,

$$n \ln \frac{d_1(X_i^{(1)}, X_{1NN}^{(1)})}{d_2(X_i^{(1)}, X_{2NN}^{(2)})} + \ln \frac{N_1 |\Sigma_1|^{1/2}}{N_2 |\Sigma_2|^{1/2}} \;\underset{\omega_1}{\overset{\omega_2}{\gtrless}}\; t. \tag{7.6}$$

On the other hand, in the L method, $X_i^{(1)}$ is no longer considered a member of the design set. Therefore, $X_{1NN}^{(1)}$ and $X_{2NN}^{(1)}$ are selected as the nearest and second nearest neighbors of $X_i^{(1)}$ from $\omega_1$; the selection of $\omega_2$ neighbors is the same as before. Thus,

$$n \ln \frac{d_1(X_i^{(1)}, X_{2NN}^{(1)})}{d_2(X_i^{(1)}, X_{2NN}^{(2)})} + \ln \frac{N_1 |\Sigma_1|^{1/2}}{N_2 |\Sigma_2|^{1/2}} \;\underset{\omega_1}{\overset{\omega_2}{\gtrless}}\; t. \tag{7.7}$$

Obviously, $d_1(X_i^{(1)}, X_{2NN}^{(1)}) \ge d_1(X_i^{(1)}, X_{1NN}^{(1)})$, making the left-hand side of (7.7) larger than the left-hand side of (7.6). Thus, $X_i^{(1)}$ is more likely to be misclassified in the L method than in the R method. Also note that, in order to find the NN samples, the distances to all samples must be computed and compared. Therefore, when …
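As a concrete illustration of (7.5)-(7.7), here is a hedged Python sketch (not part of the original text) of the kNN likelihood-ratio classifier with $k_1 = k_2 = k$ and its R and L error estimates. It uses the sample covariance matrices as stand-ins for $\Sigma_1$ and $\Sigma_2$, keeps the same constant term for both methods, and assumes t = 0 by default; all function names are illustrative.

```python
import numpy as np

def norm_d2(test, design, S_inv):
    """Squared normalized distances d^2(Y, X) = (Y - X)^T S^{-1} (Y - X)."""
    diff = test[:, None, :] - design[None, :, :]
    return np.einsum('tdi,ij,tdj->td', diff, S_inv, diff)

def knn_R_L_errors(X1, X2, k=2, t=0.0):
    """R and L error estimates for the kNN classifier (7.5) with k1 = k2 = k."""
    n = X1.shape[1]
    N1, N2 = len(X1), len(X2)
    S1, S2 = np.cov(X1.T), np.cov(X2.T)           # stand-ins for Sigma_1, Sigma_2
    S1_inv, S2_inv = np.linalg.inv(S1), np.linalg.inv(S2)
    # Constant term of (7.5); the (k-1) factors cancel since k1 = k2.
    const = (np.log(N1 / N2)
             + 0.5 * (np.linalg.slogdet(S1)[1] - np.linalg.slogdet(S2)[1]))

    def errors(Xown, Xoth, S_own_inv, S_oth_inv, own_is_w1):
        d2_own = np.sort(norm_d2(Xown, Xown, S_own_inv), axis=1)
        d2_oth = np.sort(norm_d2(Xown, Xoth, S_oth_inv), axis=1)
        # R method: the test sample is its own 1st NN (distance 0), so the
        # k-th NN distance is column k-1; the L method skips self (column k),
        # exactly the shift from (7.6) to (7.7).
        dR, dL = d2_own[:, k - 1], d2_own[:, k]
        dO = d2_oth[:, k - 1]
        out = []
        for d_own in (dR, dL):
            if own_is_w1:
                # Distances are squared, so n*ln(d1/d2) = (n/2)*ln(d1^2/d2^2).
                lhs = 0.5 * n * np.log(d_own / dO) + const
                out.append(np.mean(lhs > t))      # assigned omega_2: an error
            else:
                lhs = 0.5 * n * np.log(dO / d_own) + const
                out.append(np.mean(lhs < t))      # assigned omega_1: an error
        return out

    e1R, e1L = errors(X1, X2, S1_inv, S2_inv, True)
    e2R, e2L = errors(X2, X1, S2_inv, S1_inv, False)
    return (e1R, e2R), (e1L, e2L)
```

Sorting each row of squared distances makes the R/L relationship of the text explicit: both estimates come from one distance computation, with the R method reading the k-th smallest own-class distance (which counts the test sample itself) and the L method reading the next one.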

