Machine Learning 10-701/15-781
Carnegie Mellon University
Carlos Guestrin
January 31st, 2007
©2005-2007 Carlos Guestrin

Logistic Regression (Continued)
Generative vs. Discriminative
Decision Trees


Generative vs. Discriminative Classifiers – Intuition

- Want to learn h: X → Y
  - X – features
  - Y – target classes
- Bayes optimal classifier – P(Y|X)
- Generative classifiers, e.g., Naïve Bayes:
  - Assume some functional form for P(X|Y) and P(Y)
  - Estimate the parameters of P(X|Y) and P(Y) directly from training data
  - Use Bayes rule to calculate P(Y | X = x)
  - This is a 'generative' model:
    - Indirect computation of P(Y|X) through Bayes rule
    - But it can generate a sample of the data, since P(X) = Σ_y P(y) P(X|y)
- Discriminative classifiers, e.g., Logistic Regression:
  - Assume some functional form for P(Y|X)
  - Estimate the parameters of P(Y|X) directly from training data
  - This is a 'discriminative' model:
    - Directly learns P(Y|X)
    - But cannot generate a sample of the data, because P(X) is not available


Logistic Regression

- Logistic function (or sigmoid):

    g(z) = 1 / (1 + exp(-z))

- Learn P(Y|X) directly!
  - Assume a particular functional form: the sigmoid applied to a linear function of the data,

    P(Y=1 | x, w) = 1 / (1 + exp(-(w_0 + Σ_i w_i x_i)))

- Features can be discrete or continuous!


Logistic Regression – a Linear Classifier

[Figure: the sigmoid g(z) plotted for z from -6 to 6; g(z) rises from 0 to 1, crossing 0.5 at z = 0]


Very Convenient!

- P(Y=1|x,w) / P(Y=0|x,w) = exp(w_0 + Σ_i w_i x_i), so the log odds are linear in x
- This implies P(Y=1 | x, w) ≥ 1/2 exactly when w_0 + Σ_i w_i x_i ≥ 0
- Which implies a linear classification rule!


Logistic Regression vs. Naïve Bayes

- Consider learning f: X → Y, where
  - X is a vector of real-valued features <X_1 ... X_n>
  - Y is boolean
- Could use a Gaussian Naïve Bayes (GNB) classifier:
  - Assume all X_i are conditionally independent given Y
  - Model P(X_i | Y = y_k) as Gaussian N(μ_ik, σ_i)
  - Model P(Y) as Bernoulli(θ, 1-θ)
- What does that imply about the form of P(Y|X)? Cool!!!!


Derive Form for P(Y|X) for Continuous X_i

  P(Y=1 | X) = P(Y=1) P(X|Y=1) / [ P(Y=1) P(X|Y=1) + P(Y=0) P(X|Y=0) ]
             = 1 / ( 1 + exp( ln [ P(Y=0) P(X|Y=0) / (P(Y=1) P(X|Y=1)) ] ) )
             = 1 / ( 1 + exp( ln[(1-θ)/θ] + Σ_i ln [ P(X_i|Y=0) / P(X_i|Y=1) ] ) )


Ratio of Class-Conditional Probabilities

With class-independent variances σ_i:

  ln [ P(X_i|Y=0) / P(X_i|Y=1) ] = ln [ N(X_i; μ_i0, σ_i) / N(X_i; μ_i1, σ_i) ]
                                 = (μ_i0 − μ_i1)/σ_i² · X_i + (μ_i1² − μ_i0²)/(2σ_i²)

– linear in X_i!


Derive Form for P(Y|X) for Continuous X_i (continued)

Substituting back:

  P(Y=1 | X) = 1 / (1 + exp(-(w_0 + Σ_i w_i X_i)))

with
  w_i = (μ_i1 − μ_i0)/σ_i²
  w_0 = ln[θ/(1-θ)] + Σ_i (μ_i0² − μ_i1²)/(2σ_i²)

– exactly the logistic regression form.


Gaussian Naïve Bayes vs. Logistic Regression

- Representation equivalence
  - But only in a special case!!! (GNB with class-independent variances)
- But what's the difference???
  - LR makes no assumptions about P(X|Y) in learning!!!
  - Loss function!!!
    - They optimize different functions → obtain different solutions

[Diagram: the set of Gaussian Naïve Bayes parameters (feature variance independent of class label) shown against the set of Logistic Regression parameters]


Logistic Regression for More Than 2 Classes

- Logistic regression in the more general case, where Y ∈ {y_1, ..., y_R}: learn R-1 sets of weights


Logistic Regression More Generally

- Logistic regression in the more general case, where Y ∈ {y_1, ..., y_R}: learn R-1 sets of weights
- For k < R:

    P(Y=y_k | X) = exp(w_k0 + Σ_i w_ki X_i) / (1 + Σ_{j<R} exp(w_j0 + Σ_i w_ji X_i))

- For k = R (normalization, so no weights for this class):

    P(Y=y_R | X) = 1 / (1 + Σ_{j<R} exp(w_j0 + Σ_i w_ji X_i))

- Features can be discrete or continuous!
- (A short sketch of these formulas appears at the end of these notes.)


Announcements

- Don't forget recitation tomorrow
- And start the homework early


Loss Functions: Likelihood vs. Conditional Likelihood

- Generative (Naïve Bayes) loss function – data likelihood:

    ln P(D | w) = Σ_j ln P(x^j, y^j | w)

- Discriminative models cannot compute P(x^j | w)!
- But the discriminative (logistic regression) loss function – conditional data likelihood:

    ln P(D_Y | D_X, w) = Σ_j ln P(y^j | x^j, w)

- Doesn't waste effort learning P(X) – focuses on P(Y|X), which is all that matters for classification


Expressing Conditional Log Likelihood

With y ∈ {0, 1}:

  l(w) ≡ Σ_j ln P(y^j | x^j, w)
       = Σ_j [ y^j ln P(Y=1 | x^j, w) + (1 − y^j) ln P(Y=0 | x^j, w) ]
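As a concrete illustration of the quantities above, here is a minimal NumPy sketch (the function names and toy data are illustrative, not from the lecture) that evaluates P(Y=1|x,w) and the conditional log likelihood l(w):

```python
import numpy as np

def sigmoid(z):
    """Logistic function g(z) = 1 / (1 + e^{-z})."""
    return 1.0 / (1.0 + np.exp(-z))

def p_y1(X, w0, w):
    """P(Y=1 | x, w) = sigmoid(w0 + sum_i w_i x_i), one value per row of X."""
    return sigmoid(w0 + X @ w)

def conditional_log_likelihood(X, y, w0, w):
    """l(w) = sum_j [ y^j ln P(Y=1|x^j,w) + (1-y^j) ln P(Y=0|x^j,w) ]."""
    p = p_y1(X, w0, w)
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Toy example: 4 points, 2 features (made up for illustration)
X = np.array([[0.5, 1.0], [2.0, 0.3], [-1.0, -0.5], [-0.2, -2.0]])
y = np.array([1, 1, 0, 0])
w0, w = 0.0, np.array([1.0, 1.0])
print(conditional_log_likelihood(X, y, w0, w))
```

Maximizing this quantity over w is exactly the learning problem taken up next.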
Maximizing Conditional Log Likelihood

  max_w l(w) = Σ_j ln P(y^j | x^j, w)

- Good news: l(w) is a concave function of w → no locally optimal solutions
- Bad news: no closed-form solution to maximize l(w)
- Good news: concave functions are easy to optimize


Optimizing a Concave Function – Gradient Ascent

- The conditional likelihood for logistic regression is concave! Find the optimum with gradient ascent.
- Gradient:

    ∇_w l(w) = [ ∂l(w)/∂w_0, ..., ∂l(w)/∂w_n ]

- Update rule, with learning rate η > 0:

    w ← w + η ∇_w l(w)

- Gradient ascent is the simplest of optimization approaches; e.g., conjugate gradient ascent is much better (see reading)


Maximize Conditional Log Likelihood: Gradient Ascent

  ∂l(w)/∂w_i = Σ_j x_i^j [ y^j − P(Y=1 | x^j, w) ]


Gradient Ascent for LR

Gradient ascent algorithm: iterate until change < ε

  w_0 ← w_0 + η Σ_j [ y^j − P(Y=1 | x^j, w) ]
  For i = 1 ... n:
    w_i ← w_i + η Σ_j x_i^j [ y^j − P(Y=1 | x^j, w) ]
  repeat

(A runnable sketch of this loop appears at the end of these notes.)


That's All M(C)LE. How About MAP?

- One common approach is to define a prior on w:
  - Normal distribution, zero mean, identity covariance
  - "Pushes" parameters towards zero
- Corresponds to regularization
  - Helps avoid very large weights and overfitting
  - More on this later in the semester
- MAP estimate:

    w* = arg max_w ln [ P(w) Π_j P(y^j | x^j, w) ]


M(C)AP as Regularization

The zero-mean Gaussian prior contributes a term

  ln P(w) = −(λ/2) Σ_i w_i² + const

to the objective, which penalizes high weights; the same idea is also applicable in linear regression.


Gradient of M(C)AP

  ∂/∂w_i [ l(w) + ln P(w) ] = Σ_j x_i^j [ y^j − P(Y=1 | x^j, w) ] − λ w_i


MLE vs. MAP

- Maximum conditional likelihood estimate:

    w_i ← w_i + η Σ_j x_i^j [ y^j − P(Y=1 | x^j, w) ]

- Maximum conditional a posteriori estimate:

    w_i ← w_i + η { −λ w_i + Σ_j x_i^j [ y^j − P(Y=1 | x^j, w) ] }


Naïve Bayes vs. Logistic Regression

Consider Y boolean, X_i continuous, X = <X_1 ... X_n>.

- Number of parameters:
  - NB: 4n + 1
  - LR: n + 1
- Estimation method:
  - NB parameter estimates are uncoupled
  - LR parameter estimates are coupled


G. Naïve Bayes vs. Logistic Regression 1  [Ng & Jordan, 2002]

- Generative vs. discriminative classifiers
- Asymptotic comparison (# training examples → infinity):
  - When the model is correct: GNB and LR produce identical classifiers
  - When the model is incorrect:
    - LR is less biased – it does not assume conditional independence
    - Therefore LR is expected to outperform GNB


G. Naïve Bayes vs. Logistic Regression 2  [Ng & Jordan, 2002]

- Generative vs. discriminative classifiers
- Non-asymptotic analysis: convergence rate of parameter estimates (n = # of attributes in X)
  - Size of training data needed to get close to the infinite-data solution:
    - GNB needs O(log n) samples
    - LR needs O(n) samples
  - GNB converges more quickly to its (perhaps less helpful) asymptotic estimates


Some Experiments from UCI Data Sets

[Figure: learning curves comparing Naïve Bayes and Logistic Regression on UCI data sets]


What You Should Know About Logistic Regression (LR)

- Gaussian Naïve Bayes with class-independent variances is representationally equivalent to LR
  - The solutions differ because of the objective (loss) function
- In general, NB and LR make different assumptions
  - NB: features independent given class → an assumption on P(X|Y)
  - LR: a functional form for P(Y|X), no assumption on P(X|Y)
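To tie together the gradient ascent and M(C)AP slides, here is the runnable sketch promised above. It is a minimal batch implementation, assuming names (`fit_lr`, `eta`, `lam`) and made-up data that are not from the lecture; setting `lam = 0` recovers the plain M(C)LE update.

```python
import numpy as np

def sigmoid(z):
    """Logistic function g(z) = 1 / (1 + e^{-z})."""
    return 1.0 / (1.0 + np.exp(-z))

def fit_lr(X, y, eta=0.01, lam=0.0, eps=1e-5, max_iters=10000):
    """Batch gradient ascent on the (penalized) conditional log likelihood.

    lam = 0 gives the M(C)LE update from the slides:
        w_i <- w_i + eta * sum_j x_i^j (y^j - P(Y=1|x^j, w))
    lam > 0 gives the M(C)AP update, adding the -lam * w_i term
    from the zero-mean Gaussian prior on w.
    """
    m, n = X.shape
    Xb = np.hstack([np.ones((m, 1)), X])      # prepend x_0 = 1 so w[0] plays the role of w_0
    w = np.zeros(n + 1)
    for _ in range(max_iters):
        err = y - sigmoid(Xb @ w)             # y^j - P(Y=1 | x^j, w), one entry per example
        w_new = w + eta * (Xb.T @ err - lam * w)
        if np.max(np.abs(w_new - w)) < eps:   # iterate until change < epsilon
            break
        w = w_new
    return w_new

# Tiny illustration on made-up, linearly separable data (not from the lecture)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (X @ np.array([2.0, -1.0]) + 0.5 > 0).astype(float)
print(fit_lr(X, y, lam=1.0))
```

Note the role of `lam` here: on separable data like this toy set, the unpenalized weights would grow without bound, which is exactly the overfitting the Gaussian prior guards against.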
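Finally, the multiclass sketch referenced on the "Logistic Regression More Generally" slide: a minimal, illustrative implementation of the R-class formulas (the names `multiclass_lr_probs`, `W`, and `b` are assumptions, not from the lecture), where class y_R carries no weights and supplies the "1" in the normalizer.

```python
import numpy as np

def multiclass_lr_probs(x, W, b):
    """P(Y=y_k | x) for k = 1..R, using R-1 weight vectors.

    W: (R-1, n) weight matrix, b: (R-1,) biases.
    For k < R:  P(Y=y_k|x) = exp(b_k + w_k . x) / (1 + sum_{j<R} exp(b_j + w_j . x))
    For k = R:  P(Y=y_R|x) = 1 / (1 + sum_{j<R} exp(b_j + w_j . x))
    """
    scores = b + W @ x                # one linear score per non-reference class
    expo = np.exp(scores)
    z = 1.0 + expo.sum()              # the "1" is class R's (weightless) term
    return np.append(expo / z, 1.0 / z)

# Illustrative only: 3 classes, 2 features
W = np.array([[1.0, -0.5], [-1.0, 0.5]])
b = np.array([0.1, -0.1])
p = multiclass_lr_probs(np.array([0.3, 0.7]), W, b)
print(p, p.sum())                     # the R probabilities sum to 1
```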