Logistic Regression (cont.) and Decision Trees
Machine Learning – 10-701/15-781
Carlos Guestrin, Carnegie Mellon University
September 26th, 2007

Logistic Regression
- Learn P(Y|X) directly!
  - Assume a particular functional form: a sigmoid (logistic function) applied to a linear function of the data,
    P(Y=1 | X, w) = 1 / (1 + exp(−(w_0 + ∑_i w_i X_i)))
- Features can be discrete or continuous!
[figure: plot of the sigmoid 1/(1+e^−z) as a function of z]

Loss functions: Likelihood v. Conditional Likelihood
- Generative (Naïve Bayes) loss function: the data likelihood
  ln P(D | w) = ∑_j ln P(x^j, y^j | w)
- Discriminative models cannot compute P(x^j | w)!
- But the discriminative (logistic regression) loss function is the conditional data likelihood
  ln P(D_Y | D_X, w) = ∑_j ln P(y^j | x^j, w)
- This doesn't waste effort learning P(X); it focuses on P(Y|X), which is all that matters for classification.

Optimizing a concave function – gradient ascent
- The conditional likelihood for logistic regression is concave → find the optimum with gradient ascent.
- Gradient: ∇_w l(w) = [∂l(w)/∂w_0, …, ∂l(w)/∂w_n]
- Update rule, with learning rate η > 0:
  w^(t+1) = w^(t) + η ∇_w l(w), evaluated at w^(t)
- Gradient ascent is the simplest of optimization approaches; e.g., conjugate gradient ascent is much better (see reading).

Gradient Ascent for LR
- Gradient ascent algorithm: iterate until change < ε
  w_0^(t+1) ← w_0^(t) + η ∑_j [ y^j − P(Y=1 | x^j, w^(t)) ]
  and, for i = 1…n, repeat
  w_i^(t+1) ← w_i^(t) + η ∑_j x_i^j [ y^j − P(Y=1 | x^j, w^(t)) ]
- (Note from lecture: these equations are correct; the notation was inadvertently changed in the last lecture. Sorry about the change; both definitions are equivalent, and the equations on this slide are consistent with the definition above.)

That's all M(C)LE. How about MAP?
- One common approach is to define a prior on w: a normal distribution with zero mean and identity covariance. This "pushes" the parameters towards zero.
- Corresponds to regularization: it helps avoid very large weights and overfitting. More on this later in the semester.
- MAP estimate:
  w* = arg max_w ln [ P(w) ∏_j P(y^j | x^j, w) ]

M(C)AP as Regularization
- With the zero-mean Gaussian prior, the log-prior adds a penalty term −(λ/2) ∑_i w_i^2 to the objective.
- This penalizes high weights; the same idea is also applicable in linear regression.

Large parameters → overfitting
- If the data is linearly separable, the weights go to infinity.
- This leads to overfitting.
- Penalizing high weights can prevent overfitting… again, more on this later in the semester.
[figures omitted]

Gradient of M(C)AP
  ∂/∂w_i [ ln P(w) + ∑_j ln P(y^j | x^j, w) ] = −λ w_i + ∑_j x_i^j [ y^j − P(Y=1 | x^j, w) ]

MLE vs MAP
- Maximum conditional likelihood estimate:
  w* = arg max_w ∑_j ln P(y^j | x^j, w)
  w_i^(t+1) ← w_i^(t) + η ∑_j x_i^j [ y^j − P(Y=1 | x^j, w^(t)) ]
- Maximum conditional a posteriori estimate:
  w* = arg max_w [ ln P(w) + ∑_j ln P(y^j | x^j, w) ]
  w_i^(t+1) ← w_i^(t) + η { −λ w_i^(t) + ∑_j x_i^j [ y^j − P(Y=1 | x^j, w^(t)) ] }
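To make the model and objective above concrete, here is a minimal NumPy sketch of the sigmoid and the conditional log-likelihood; the function names (predict_proba, conditional_log_likelihood) are illustrative, not from the lecture.

```python
import numpy as np

def sigmoid(z):
    """Logistic function: 1 / (1 + exp(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

def predict_proba(X, w0, w):
    """P(Y=1 | x, w) = sigmoid(w0 + sum_i w_i * x_i), row-wise over X."""
    return sigmoid(w0 + X @ w)

def conditional_log_likelihood(X, y, w0, w):
    """ln P(D_Y | D_X, w) = sum_j ln P(y^j | x^j, w), for labels y in {0, 1}."""
    p = predict_proba(X, w0, w)
    eps = 1e-12  # guard against log(0)
    return np.sum(y * np.log(p + eps) + (1.0 - y) * np.log(1.0 - p + eps))
```

Because this objective is concave in w, any local maximum found by gradient ascent is the global maximum.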
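And a sketch of the gradient-ascent loop from the slides, with the regularized (MAP) gradient as an option; the function name fit_lr and the default values of η, λ, and ε are illustrative choices, and on real data the step size usually needs tuning.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_lr(X, y, eta=0.01, lam=0.0, eps=1e-6, max_iters=100000):
    """Gradient ascent for logistic regression.

    lam = 0 gives the M(C)LE updates; lam > 0 adds the Gaussian-prior
    (regularization) term -lam * w_i, giving the M(C)AP updates:
      w_0 <- w_0 + eta * sum_j (y^j - P(Y=1 | x^j, w))
      w_i <- w_i + eta * (-lam * w_i + sum_j x_i^j (y^j - P(Y=1 | x^j, w)))
    Here the prior does not penalize the intercept w_0 (a common choice).
    """
    m, n = X.shape
    w0, w = 0.0, np.zeros(n)
    for _ in range(max_iters):
        err = y - sigmoid(w0 + X @ w)   # y^j - P(Y=1 | x^j, w)
        grad0 = np.sum(err)
        grad = X.T @ err - lam * w
        w0 += eta * grad0
        w += eta * grad
        # "iterate until change < eps": the weight change is eta * gradient
        if eta * max(abs(grad0), np.max(np.abs(grad))) < eps:
            break
    return w0, w
```

Note that with lam = 0 on linearly separable data the weights grow without bound, the overfitting issue flagged above; any lam > 0 keeps them finite.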
Gaussian Naïve Bayes vs. Logistic Regression, part 1 [Ng & Jordan, 2002]
- Generative vs. discriminative classifiers.
- Focus on the setting where GNB leads to a linear classifier: the variance σ_i depends on feature i, not on class k.
- Asymptotic comparison (# training examples → infinity):
  - When the GNB model is correct, GNB and LR produce identical classifiers.
  - When the model is incorrect, LR is less biased (it does not assume conditional independence), so LR is expected to outperform GNB.

Gaussian Naïve Bayes vs. Logistic Regression, part 2 [Ng & Jordan, 2002]
- Non-asymptotic analysis: convergence rate of the parameter estimates, with n = # of attributes in X.
- Size of training data needed to get close to the infinite-data solution:
  - GNB needs O(log n) samples
  - LR needs O(n) samples
- GNB converges more quickly to its (perhaps less helpful) asymptotic estimates.

Some experiments from UCI data sets
[figure: error-rate plots on UCI data sets comparing Naïve Bayes and Logistic Regression]

What you should know about Logistic Regression (LR)
- Gaussian Naïve Bayes with class-independent variances is representationally equivalent to LR; the solutions differ because of the objective (loss) function.
- In general, NB and LR make different assumptions:
  - NB: features independent given the class, i.e., an assumption on P(X|Y)
  - LR: a functional form for P(Y|X), with no assumption on P(X|Y)
- LR is a linear classifier: the decision rule is a hyperplane.
- LR is optimized by conditional likelihood: there is no closed-form solution, but the objective is concave → gradient ascent finds the global optimum.
- Maximum conditional a posteriori estimation corresponds to regularization.
- Convergence rates: GNB (usually) needs less data; LR (usually) gets to better solutions in the limit.

Linear separability
- A dataset is linearly separable iff there exists a separating hyperplane, i.e., ∃ w such that:
  - w_0 + ∑_i w_i x_i > 0 if x = {x_1, …, x_n} is a positive example
  - w_0 + ∑_i w_i x_i < 0 if x = {x_1, …, x_n} is a negative example

Not linearly separable data
- Some datasets are not linearly separable!

Addressing non-linearly separable data – Option 1, non-linear features
- Choose non-linear features, e.g.:
  - Typical linear features: w_0 + ∑_i w_i x_i
  - Example of non-linear features: degree-2 polynomials, w_0 + ∑_i w_i x_i + ∑_{i,j} w_{ij} x_i x_j
- The classifier h_w(x) is still linear in the parameters w, so it is just as easy to learn (a short sketch of this expansion appears at the end of these notes).
- The data can become linearly separable in the higher-dimensional feature space.
- More discussion later this semester.

Addressing non-linearly separable data – Option 2, non-linear classifier
- Choose a classifier h_w(x) that is non-linear in the parameters w, e.g., decision trees, neural networks, nearest neighbor, …
- More general than linear classifiers.
- But often harder to learn: non-convex/non-concave optimization is required.
- But, but: often very useful.
- (BTW: later this semester, we'll see that these two options are not that different.)

A small dataset: Miles Per Gallon
From the UCI repository (thanks to Ross Quinlan); 40 records.

mpg   cylinders  displacement  horsepower  weight  acceleration  modelyear  maker
good  4          low           low         low     high          75to78     asia
bad   6          medium        medium      medium  medium        70to74     america
bad   4          medium        medium      medium  low           75to78     europe
bad   8          high          high        high    low           70to74     america
bad   6          medium        medium      medium  medium        70to74     america
bad   4          low           medium      low     medium        70to74     asia
bad   4          low           medium      low     low           70to74     asia
bad   8          high          high        high    low           75to78     america
...   ...        ...           ...         ...     ...           ...        ...
bad   8          high          high        high    low           70to74     america
good  8          high          medium      high    high          79to83     america
bad   8          high          high        high    low           75to78     america
good  4          low           low         low     low           79to83     america
bad   6          medium        medium      medium  high          75to78     america
good  4          medium        low         low     low           79to83     america
good  4          low           low         …
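Returning to Option 1 above: a minimal sketch of the degree-2 feature expansion, which is linear in w but non-linear in x. The helper name degree2_features and the XOR-style toy data are illustrative, not from the lecture.

```python
import numpy as np

def degree2_features(X):
    """Map x = (x_1, ..., x_n) to all degree-<=2 terms:
    (x_1, ..., x_n, x_i * x_j for i <= j).
    A classifier linear in w over these features realizes
    w_0 + sum_i w_i x_i + sum_{i,j} w_ij x_i x_j from the slide.
    """
    m, n = X.shape
    cross = [X[:, i] * X[:, j] for i in range(n) for j in range(i, n)]
    return np.column_stack([X] + cross)

# Illustrative check: XOR-style data is not linearly separable in the
# original 2-D space, but the product feature x_1 * x_2 separates it.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0, 1, 1, 0])
Phi = degree2_features(X)  # columns: x1, x2, x1^2, x1*x2, x2^2
# With w = (w1, w2, w11, w12, w22) = (1, 1, 0, -2, 0) and w0 = -0.5,
# the linear rule sign(w0 + Phi @ w) labels all four points correctly.
w0, w = -0.5, np.array([1., 1., 0., -2., 0.])
print(np.sign(w0 + Phi @ w))  # [-1.  1.  1. -1.] -> matches y
```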