Logistic Regression
Machine Learning 10-701
Tom M. Mitchell
Center for Automated Learning and Discovery, Carnegie Mellon University
September 29, 2005

Required reading:
• Mitchell draft chapter (see course website)
Recommended reading:
• Bishop, Chapter 3.1.3, 3.1.4
• Ng and Jordan paper (see course website)

Naïve Bayes: What you should know
• Designing classifiers based on Bayes rule
• Conditional independence
– What it is
– Why it's important
• Naïve Bayes assumption and its consequences
– Which (and how many) parameters must be estimated under different generative models (different forms for P(X|Y))
• How to train Naïve Bayes classifiers
– MLE and MAP estimates
– with discrete and/or continuous inputs

Generative vs. Discriminative Classifiers
Wish to learn f: X → Y, or P(Y|X).
Generative classifiers (e.g., Naïve Bayes):
• Assume some functional form for P(X|Y) and P(Y); this is the 'generative' model
• Estimate parameters of P(X|Y), P(Y) directly from training data
• Use Bayes rule to calculate P(Y | X = xi)
Discriminative classifiers:
• Assume some functional form for P(Y|X); this is the 'discriminative' model
• Estimate parameters of P(Y|X) directly from training data

• Consider learning f: X → Y, where
– X is a vector of real-valued features <X1 ... Xn>
– Y is boolean
• We could use a Gaussian Naïve Bayes classifier:
– assume all Xi are conditionally independent given Y
– model P(Xi | Y = yk) as Gaussian N(μik, σi), where the variance σi may depend on the feature i but not on the class yk
– model P(Y) as Bernoulli(π)
• What does that imply about the form of P(Y|X)?

Derive form for P(Y|X) for continuous Xi
Working through Bayes rule under these assumptions, P(Y|X) comes out as a logistic function of a linear combination of the Xi:
P(Y = 1 | X) = 1 / (1 + exp(−(w0 + Σi wi Xi)))
where the weights w0, wi are determined by the Gaussian and Bernoulli parameters. The decision boundary P(Y=1|X) = P(Y=0|X) is therefore linear in the Xi, so these assumptions imply a linear classification rule. Very convenient!

Logistic function
σ(z) = 1 / (1 + e^(−z)), so P(Y = 1 | X) = σ(w0 + Σi wi Xi).

Logistic regression more generally
• Logistic regression in the more general case, where Y ∈ {y1 ... yR}: learn R−1 sets of weights
for k < R:
P(Y = yk | X) = exp(wk0 + Σi wki Xi) / (1 + Σ_{j=1}^{R−1} exp(wj0 + Σi wji Xi))
for k = R:
P(Y = yR | X) = 1 / (1 + Σ_{j=1}^{R−1} exp(wj0 + Σi wji Xi))

Training Logistic Regression: MCLE
• Choose parameters W = <w0, ..., wn> to maximize the conditional likelihood of the training data
• Training data D = { <X^1, Y^1>, ..., <X^L, Y^L> }
• Data likelihood = Π_l P(X^l, Y^l | W)
• Data conditional likelihood = Π_l P(Y^l | X^l, W)

Expressing Conditional Log Likelihood
l(W) ≡ ln Π_l P(Y^l | X^l, W) = Σ_l [ Y^l ln P(Y^l = 1 | X^l, W) + (1 − Y^l) ln P(Y^l = 0 | X^l, W) ]

Maximizing Conditional Log Likelihood
Good news: l(W) is a concave function of W.
Bad news: there is no closed-form solution that maximizes l(W).

Maximize Conditional Log Likelihood: Gradient Ascent
Gradient ascent algorithm: iterate until the change in l(W) is < ε.
For all i, repeat:
wi ← wi + η Σ_l Xi^l ( Y^l − P̂(Y^l = 1 | X^l, W) )
(η is the step size; taking X0^l = 1 covers the intercept w0.)

That's all M(C)LE. How about MAP?
• One common approach is to define a prior on W
– Normal distribution, zero mean, identity covariance
• Helps avoid very large weights and overfitting
• MAP estimate: W ← argmax_W [ ln P(W) + Σ_l ln P(Y^l | X^l, W) ]
With the Gaussian prior this adds a penalty −(λ/2) Σi wi² to the objective, giving the modified update
wi ← wi + η Σ_l Xi^l ( Y^l − P̂(Y^l = 1 | X^l, W) ) − η λ wi

MLE vs MAP
• Maximum conditional likelihood estimate: W ← argmax_W Σ_l ln P(Y^l | X^l, W)
• Maximum a posteriori estimate: W ← argmax_W [ ln P(W) + Σ_l ln P(Y^l | X^l, W) ]
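To make the MCLE/MAP training procedure above concrete, here is a minimal NumPy sketch of gradient ascent on the conditional log likelihood l(W). It is illustrative only: the synthetic data, learning rate, iteration count, and regularization strength lam are assumptions for the example, not values from the lecture. Setting lam > 0 adds the zero-mean Gaussian prior on W (the MAP / 'regularization' variant); lam = 0 gives the plain MCLE.

```python
# Minimal sketch: MCLE / MAP training of logistic regression by gradient ascent.
# Hyperparameters and data below are illustrative assumptions, not lecture values.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic_regression(X, y, lr=0.1, n_iters=1000, lam=0.0):
    """Gradient ascent on the conditional log likelihood l(W).

    X: (m, n) array of real-valued features
    y: (m,) array of 0/1 labels
    lam: Gaussian-prior (L2) strength; lam=0 gives the MCLE, lam>0 the MAP estimate
    """
    m, n = X.shape
    Xb = np.hstack([np.ones((m, 1)), X])     # prepend constant feature for w0
    w = np.zeros(n + 1)
    for _ in range(n_iters):
        p = sigmoid(Xb @ w)                  # P(Y=1 | x^l, W) for every example
        grad = Xb.T @ (y - p) - lam * w      # dl(W)/dw, plus the prior's penalty term
        w += lr * grad / m
    return w

def predict(w, X):
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])
    return (sigmoid(Xb @ w) >= 0.5).astype(int)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data consistent with the Gaussian Naive Bayes story:
    # each class generates the Xi from class-conditional Gaussians.
    X0 = rng.normal(loc=-1.0, scale=1.0, size=(100, 2))
    X1 = rng.normal(loc=+1.0, scale=1.0, size=(100, 2))
    X = np.vstack([X0, X1])
    y = np.array([0] * 100 + [1] * 100)
    w = train_logistic_regression(X, y, lam=0.1)
    print("weights:", w, "train accuracy:", (predict(w, X) == y).mean())
```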
Naïve Bayes vs. Logistic Regression
• Generative and Discriminative classifiers
• Asymptotic comparison (# training examples → infinity)
– when the model is correct
– when the model is incorrect
• Non-asymptotic analysis
– convergence rate of parameter estimates
– convergence rate of expected error
• Experimental results
[Ng & Jordan, 2002]

Naïve Bayes vs. Logistic Regression
Consider Y and Xi boolean, X = <X1 ... Xn>.
Number of parameters:
• NB: 2n + 1
• LR: n + 1
Estimation method:
• NB parameter estimates are uncoupled
• LR parameter estimates are coupled
(A small sketch at the end of this handout illustrates the difference.)

What is the difference asymptotically?
Notation: let ε_{A,m} denote the error of the hypothesis learned via algorithm A from m examples.
• If the assumed naïve Bayes model is correct, both classifiers converge to the same asymptotic error: ε_{Dis,∞} = ε_{Gen,∞}
• If the assumed model is incorrect, logistic regression does at least as well asymptotically: ε_{Dis,∞} ≤ ε_{Gen,∞}
Note: the assumed discriminative model can be correct even when the generative model is incorrect, but not vice versa.

Rate of convergence: logistic regression
Let h_{Dis,m} be logistic regression trained on m examples in n dimensions. Then with high probability ε(h_{Dis,m}) exceeds the asymptotic error ε(h_{Dis,∞}) by an amount that shrinks as m grows relative to n.
Implication: if we want ε(h_{Dis,m}) ≤ ε(h_{Dis,∞}) + ε0 for some constant ε0, it suffices to pick m on the order of n examples.
→ Logistic regression converges to its asymptotic classifier in order n examples.
(The result follows from Vapnik's structural risk bound, plus the fact that the VC dimension of n-dimensional linear separators is n.)

Rate of convergence: naïve Bayes
Consider first how quickly the parameter estimates converge toward their asymptotic values. Then we'll ask how this influences the rate of convergence toward the asymptotic classification error.

Rate of convergence: naïve Bayes parameters
The naïve Bayes parameter estimates, and with them its classification error, converge to their asymptotic values in order log n examples [Ng & Jordan, 2002].

Some experiments from UCI data sets

What you should know:
• Logistic regression
– Functional form follows from the Naïve Bayes assumptions
– But the training procedure picks parameters without the conditional independence assumption
– MLE training: pick W to maximize P(Y | X, W)
– MAP training: pick W to maximize P(W | X, Y), i.e., 'regularization'
• Gradient ascent/descent
– General approach when closed-form solutions are unavailable
• Generative vs. Discriminative classifiers
– Bias vs. variance
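To illustrate the 'uncoupled vs. coupled' estimation contrast from the Naïve Bayes vs. Logistic Regression comparison above, here is a small sketch with made-up boolean data (the feature probabilities, smoothing constant alpha, and sample size are assumptions for the example). Each of naïve Bayes' 2n + 1 parameters is a separate smoothed count obtainable in closed form from the relevant subset of examples, whereas logistic regression's weights must be fit jointly by the iterative gradient procedure sketched earlier.

```python
# Minimal sketch of naive Bayes for boolean Y and boolean features X1..Xn.
# Data and smoothing constant are illustrative assumptions.
import numpy as np

def train_naive_bayes(X, y, alpha=1.0):
    """Smoothed MLE/MAP estimates of the 2n+1 naive Bayes parameters.

    Each parameter is a simple count; the estimates are uncoupled:
    theta[k, i] = P(Xi = 1 | Y = k) uses only the examples with Y = k.
    """
    prior1 = y.mean()                                    # P(Y = 1)
    theta = np.zeros((2, X.shape[1]))
    for k in (0, 1):
        Xk = X[y == k]
        theta[k] = (Xk.sum(axis=0) + alpha) / (len(Xk) + 2 * alpha)
    return prior1, theta

def nb_predict(prior1, theta, X):
    # Compare log P(Y=k) + sum_i log P(Xi | Y=k) across the two classes.
    log_joint = []
    for k, pk in ((0, 1 - prior1), (1, prior1)):
        ll = X * np.log(theta[k]) + (1 - X) * np.log(1 - theta[k])
        log_joint.append(np.log(pk) + ll.sum(axis=1))
    return (log_joint[1] > log_joint[0]).astype(int)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    y = rng.integers(0, 2, size=500)
    # Boolean features whose "on" probability depends on the class label.
    X = (rng.random((500, 4)) < np.where(y[:, None] == 1, 0.7, 0.3)).astype(int)
    prior1, theta = train_naive_bayes(X, y)
    acc = (nb_predict(prior1, theta, X) == y).mean()
    print("P(Y=1) =", round(prior1, 3), "train accuracy:", acc)
```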