Logistic Regression
Generative and Discriminative Classifiers

Recommended reading: "On Discriminative vs. Generative Classifiers: A comparison of logistic regression and naive Bayes," A. Ng and M. Jordan, NIPS 2002.

Machine Learning 10-701
Tom M. Mitchell, Carnegie Mellon University
Thanks to Ziv Bar-Joseph and Andrew Moore for some slides.

Overview

Last lecture:
- Naive Bayes classifier
- Number of parameters to estimate
- Conditional independence

This lecture:
- Logistic regression
- Generative and discriminative classifiers
- (if time) Bias and variance in learning

Generative vs. Discriminative Classifiers

Training classifiers involves estimating $f: X \to Y$, or $P(Y|X)$.

Generative classifiers:
- Assume some functional form for $P(X|Y)$ and $P(Y)$
- Estimate the parameters of $P(X|Y)$ and $P(Y)$ directly from training data
- Use Bayes rule to calculate $P(Y \mid X = x_i)$

Discriminative classifiers:
1. Assume some functional form for $P(Y|X)$
2. Estimate the parameters of $P(Y|X)$ directly from training data

Gaussian Naive Bayes

Consider learning $f: X \to Y$, where $X$ is a vector of real-valued features $\langle X_1, \ldots, X_n \rangle$ and $Y$ is boolean. Suppose we use a Gaussian Naive Bayes classifier:
- assume all $X_i$ are conditionally independent given $Y$
- model $P(X_i \mid Y = y_k)$ as Gaussian $N(\mu_{ik}, \sigma_i)$
- model $P(Y)$ as binomial ($\pi$)

What does that imply about the form of $P(Y|X)$?

Logistic Regression

Logistic regression represents the probability of category $i$ using a linear function of the input variables:

$$P(Y = i \mid X = x) = g(w_{i0} + w_{i1} x_1 + \cdots + w_{id} x_d)$$

where, for $i < K$,

$$g(z_i) = \frac{e^{z_i}}{1 + \sum_{j=1}^{K-1} e^{z_j}}$$

and, for the last class $K$,

$$g(z_K) = \frac{1}{1 + \sum_{j=1}^{K-1} e^{z_j}}.$$

The name comes from the logit transformation:

$$\log \frac{P(Y = i \mid X = x)}{P(Y = K \mid X = x)} = \log \frac{g(z_i)}{g(z_K)} = w_{i0} + w_{i1} x_1 + \cdots + w_{id} x_d.$$

Binary Logistic Regression

With only two classes we need just one set of parameters:

$$P(Y = 1 \mid X = x) = \frac{e^{w_0 + w_1 x_1 + \cdots + w_d x_d}}{1 + e^{w_0 + w_1 x_1 + \cdots + w_d x_d}} = \frac{1}{1 + e^{-(w_0 + w_1 x_1 + \cdots + w_d x_d)}} = \frac{1}{1 + e^{-z}}.$$

This is a "squashing" function: it turns linear predictions into probabilities.

Logistic Regression vs. Linear Regression

$$P(Y = 1 \mid X = x) = \frac{1}{1 + e^{-z}}$$

[figure: the sigmoid $1/(1+e^{-z})$ plotted against $z$, with an example]

Log Likelihood

$$l(w) = \sum_{i=1}^N \left[ y_i \log p(x_i; w) + (1 - y_i) \log \left( 1 - p(x_i; w) \right) \right]$$
$$= \sum_{i=1}^N \left[ y_i \log \frac{p(x_i; w)}{1 - p(x_i; w)} + \log \left( 1 - p(x_i; w) \right) \right]$$
$$= \sum_{i=1}^N \left[ y_i \, x_i \cdot w - \log \left( 1 + e^{x_i \cdot w} \right) \right]$$

Note: this likelihood is concave in $w$.

Maximum Likelihood Estimation

$$\frac{\partial l(w)}{\partial w_j} = \sum_{i=1}^N x_{ij} \left( y_i - p(x_i; w) \right)$$

The term $y_i - p(x_i; w)$ is the prediction error. There is no closed-form solution.

Common (but not the only) approaches — numerical solutions:
- Line search
- Simulated annealing
- Gradient descent
- Newton's method
- Matlab glmfit function

Gradient Ascent

$$w_j^{(t+1)} = w_j^{(t)} + \eta \sum_i x_{ij} \left( y_i - p(x_i; w^{(t)}) \right)$$

Iteratively updating the weights in this fashion increases the likelihood each round, and we eventually reach the maximum. We are near the maximum when the changes in the weights are small, so we can stop when the sum of the absolute values of the weight differences falls below some small threshold.

Example: we get a monotonically increasing log likelihood of the training labels as a function of the iterations. [figure: log likelihood vs. iteration]

Convergence

The gradient ascent learning method converges when there is no incentive to move the parameters in any particular direction:

$$\sum_i x_{ij} \left( y_i - p(x_i; w) \right) = 0 \quad \text{for all } j.$$

This condition means that the prediction error is uncorrelated with the components of the input vector. A small code sketch of this procedure follows.
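To make the update rule concrete, here is a minimal sketch of batch gradient ascent with the stopping rule described above, in Python/NumPy. The learning rate eta, the tolerance tol, the iteration cap, and the synthetic data are illustrative assumptions, not values from the lecture:

```python
import numpy as np

def sigmoid(z):
    # p(y=1 | x; w) = 1 / (1 + e^{-z}), the squashing function above
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic_gradient_ascent(X, y, eta=0.1, tol=1e-6, max_iter=10000):
    """Maximize the log likelihood l(w) by gradient ascent.

    X: (N, d) feature matrix; a bias column of ones is prepended for w_0.
    y: (N,) array of 0/1 labels.
    Stops when the sum of absolute weight changes falls below tol.
    """
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])   # prepend bias feature
    w = np.zeros(Xb.shape[1])
    for _ in range(max_iter):
        p = sigmoid(Xb @ w)                 # p(x_i; w) for every example
        grad = Xb.T @ (y - p)               # sum_i x_ij (y_i - p(x_i; w))
        w_new = w + eta * grad / len(y)     # averaged gradient for a stable step
        if np.sum(np.abs(w_new - w)) < tol: # stopping rule from the slides
            return w_new
        w = w_new
    return w

# Tiny illustrative run on noisy synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
noise = rng.normal(scale=0.5, size=200)
y = (X[:, 0] + 2 * X[:, 1] + noise > 0).astype(float)
w = fit_logistic_gradient_ascent(X, y)
print(w)  # the log likelihood increases monotonically en route to these weights
```

Because the log likelihood is concave in $w$, any sufficiently small step size will carry gradient ascent to the unique maximum rather than a local one.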
Naive Bayes vs. Logistic Regression (Ng & Jordan, 2002)

Generative and discriminative classifiers:
- Asymptotic comparison (# training examples → infinity): when the model is correct, and when the model is incorrect
- Non-asymptotic analysis: convergence rate of the parameter estimates; convergence rate of the expected error
- Experimental results

Generative-Discriminative Pairs

Example: assume $Y$ is boolean and $X = \langle X_1, \ldots, X_n \rangle$, where the $X_i$ are boolean, perhaps dependent on $Y$, but conditionally independent given $Y$.

Generative model: naive Bayes (in the smoothed count estimates of its parameters, $s$ indicates the size of a set and $l$ is the smoothing parameter). Classify a new example $x$ based on the ratio

$$\frac{\hat{P}(Y = 1) \prod_i \hat{P}(x_i \mid Y = 1)}{\hat{P}(Y = 0) \prod_i \hat{P}(x_i \mid Y = 0)},$$

or, equivalently, based on the sign of the log of this ratio.

Discriminative model: logistic regression.

Note: both learn a linear decision surface over $X$ in this case. What is the difference, asymptotically?

Notation: let $\varepsilon_{A,m}$ denote the error of the hypothesis learned via algorithm $A$ from $m$ examples.

- If the assumed model is correct (e.g., the naive Bayes model holds) and has a finite number of parameters, then both converge to the same asymptotic error: $\varepsilon_{Gen,\infty} = \varepsilon_{Dis,\infty}$.
- If the assumed model is incorrect: $\varepsilon_{Dis,\infty} \le \varepsilon_{Gen,\infty}$.

Note: the assumed discriminative model can be correct even when the generative model is incorrect, but not vice versa.

Rate of Convergence: Logistic Regression

Let $h_{Dis,m}$ be logistic regression trained on $m$ examples in $n$ dimensions. Then, with high probability,

$$\varepsilon_{Dis,m} \le \varepsilon_{Dis,\infty} + O\!\left( \sqrt{ \frac{n}{m} \log \frac{m}{n} } \right).$$

Implication: if we want $\varepsilon_{Dis,m} \le \varepsilon_{Dis,\infty} + \varepsilon_0$ for some constant $\varepsilon_0$, it suffices to pick $m = O(n)$. Logistic regression thus converges to its asymptotic classifier in order $n$ examples. The result follows from Vapnik's structural risk bound plus the fact that the VC dimension of $n$-dimensional linear separators is $O(n)$.

Rate of Convergence: Naive Bayes

Consider first how quickly the parameter estimates converge toward their asymptotic values; then we ask how this influences the rate of convergence toward the asymptotic classification error. (Ng and Jordan show that order $\log n$ examples suffice for the naive Bayes parameter estimates, and hence its error, to approach their asymptotic values.)

[figures: some experiments from UCI data sets]

What You Should Know

- Logistic regression: what it is, how to solve it, log-linear models
- Generative and discriminative classifiers: the relation between naive Bayes and logistic regression, and which we prefer when
- Bias and variance in learning algorithms

Acknowledgment: Some of these slides are based in part on slides from previous machine learning classes taught by Ziv Bar-Joseph and Andrew Moore at CMU and by Tommi Jaakkola at MIT. I thank them for providing use of their slides.
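As a concrete companion to the Ng and Jordan comparison above, here is a minimal learning-curve sketch using scikit-learn. The synthetic data stands in for the paper's UCI data sets, and GaussianNB, the sample sizes, and the make_classification parameters are all illustrative assumptions rather than the paper's setup:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a UCI data set
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_pool, X_test, y_pool, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

# Test error of each classifier as the training set grows
for m in [30, 100, 300, 1000, 2500]:
    nb = GaussianNB().fit(X_pool[:m], y_pool[:m])
    lr = LogisticRegression(max_iter=1000).fit(X_pool[:m], y_pool[:m])
    print(f"m={m:5d}  NB err={1 - nb.score(X_test, y_test):.3f}  "
          f"LR err={1 - lr.score(X_test, y_test):.3f}")
```

On many data sets the generative model does better at small $m$, while the discriminative model catches up and often wins as $m$ grows — the two-regime pattern Ng and Jordan report.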