Lecture 16: Logistic Regression
Goodness of Fit, Information Criteria, ROC Analysis
BMTRY 701 Biostatistical Methods II

Goodness of Fit
A test of how well the model explains the data.
Applies to linear models and generalized linear models.
How to do it? It is simply a comparison of the "current" model to a perfect model.
• What would the estimated likelihood function be in a perfect model?
• What would the estimated log-likelihood function be in a perfect model?

Set up as a hypothesis test
H0: current model
H1: perfect model
Recall the G² statistic comparing models: G² = Dev(0) - Dev(1).
How many parameters are there in the null model? How many are there in the perfect model?

Goodness of Fit test
The perfect model is assumed to be 'saturated' in most cases; that is, there is a parameter for each observed combination of predictor values.
In our model that number is likely to be close to N, due to the number of continuous variables.
Define c = the number of parameters in the saturated model and p = the number in the current model.
Deviance goodness-of-fit statistic: Dev(H0).
If Dev(H0) < χ²(c-p), 1-α, conclude H0; if Dev(H0) > χ²(c-p), 1-α, conclude H1.
Why aren't we subtracting deviances? Because the saturated model fits the data perfectly, its deviance is 0, so the difference reduces to Dev(H0) itself.

GoF test for Prostate Cancer Model
> mreg1 <- glm(cap.inv ~ gleason + log(psa) + vol + factor(dpros),
+              family=binomial)
> mreg0 <- glm(cap.inv ~ gleason + log(psa) + vol, family=binomial)
> mreg1

Coefficients:
   (Intercept)         gleason        log(psa)             vol
      -8.31383         0.93147         0.53422        -0.01507
factor(dpros)2  factor(dpros)3  factor(dpros)4
       0.76840         1.55109         1.44743

Degrees of Freedom: 378 Total (i.e. Null);  372 Residual
  (1 observation deleted due to missingness)
Null Deviance:      511.3
Residual Deviance:  377.1    AIC: 391.1

Test statistic: Dev(H0) = 377.1, compared to a χ²(380 - 7) = χ²(373) distribution.
Threshold: χ²(373), 1-α = 419.0339.
p-value = 0.43, so we conclude H0: no evidence of lack of fit.

More Goodness of Fit
There are a lot of options! Deviance GoF is just one:
• Pearson chi-square
• Hosmer-Lemeshow
• etc.
The principles, however, are essentially the same.
GoF testing is not that commonly seen in medical research because it is rarely very important.

Information Criteria
An information criterion (IC) is a measure of the goodness of fit of an estimated statistical model. It is grounded in the concept of entropy:
• it offers a relative measure of the information lost, and
• it describes the tradeoff between the precision and the complexity of the model.
An IC is not a test on the model in the sense of hypothesis testing; it is a tool for model selection. Given a data set, several competing models may be ranked according to their IC, and the model with the lowest IC is chosen as the "best."

Information Criteria (continued)
An IC rewards goodness of fit, but it also includes a penalty that is an increasing function of the number of estimated parameters. This penalty discourages overfitting. The IC methodology attempts to find the model that best explains the data with a minimum of free parameters.
More traditional approaches such as the likelihood ratio test start from a null hypothesis; an IC instead judges a model by how close its fitted values tend to be to the true values.
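To make the ranking idea concrete before the formal definitions on the next slides, here is a minimal R sketch that fits two of the logistic models from this lecture and compares them with base R's AIC() and BIC() functions. The data frame name pros and the by-hand recomputation at the end are illustrative assumptions, not part of the original slides.

# Two competing logistic regression models for capsular penetration
# (assumes the prostate variables live in a data frame named 'pros')
mreg0 <- glm(cap.inv ~ gleason + log(psa) + vol, family = binomial, data = pros)
mreg1 <- glm(cap.inv ~ gleason + log(psa) + vol + factor(dpros),
             family = binomial, data = pros)

# Rank the candidate models: the lower value marks the preferred model
AIC(mreg0, mreg1)
BIC(mreg0, mreg1)

# The same quantities by hand, making the fit-versus-penalty tradeoff explicit
ll <- as.numeric(logLik(mreg1))   # maximized log-likelihood
p  <- attr(logLik(mreg1), "df")   # number of estimated parameters
n  <- nobs(mreg1)                 # sample size actually used in the fit
c(AIC = -2 * ll + 2 * p, BIC = -2 * ll + p * log(n))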
Keep in mind that the AIC value assigned to a model is only meant to rank competing models and tell you which is the best among the given alternatives.

Akaike Information Criterion (AIC)
AIC = -2 log(Lik) + 2p, where p is the number of estimated parameters.
Akaike, Hirotugu (1974). "A new look at the statistical model identification". IEEE Transactions on Automatic Control 19 (6): 716-723.

Bayesian Information Criterion (BIC)
BIC = -2 log(Lik) + p ln(N), where N is the sample size.
Schwarz, Gideon E. (1978). "Estimating the dimension of a model". Annals of Statistics 6 (2): 461-464.

AIC versus BIC
BIC and AIC are similar; they differ in the penalty for the number of parameters: 2p for AIC versus p ln(N) for BIC.
The BIC penalizes free parameters more strongly than the AIC does.
Implications:
• BIC tends to choose smaller models.
• The larger the N, the more likely it is that AIC and BIC will disagree on model selection.

Prostate cancer models
We looked at different forms for volume:
A: volume as continuous
B: volume as binary (detectable vs. undetectable)
C: 4 categories of volume
D: 3 categories of volume
E: linear + squared term for volume

AIC vs. BIC (N = 380)
Model              p   -2logLik     AIC     BIC
A: continuous      8      376.0   392.0   423.5
B: binary          8      375.2   391.2   422.7
C: 4 categories   10      373.6   393.6   433.0
D: 3 categories    9      375.2   393.2   428.6
E: quadratic       9      376.0   394.0   429.4

AIC vs. BIC if N is multiplied by 10 (N = 3800)
Model              p   -2logLik      AIC      BIC
A: continuous      8     3760.0   3776.0   3825.9
B: binary          8     3752.0   3768.0   3817.9
C: 4 categories   10     3736.0   3756.0   3818.4
D: 3 categories    9     3751.9   3769.9   3826.1
E: quadratic       9     3760.0   3778.0   3834.2

With N = 380 both criteria favor model B; with N = 3800, AIC favors model C while BIC still favors model B, illustrating how the two can disagree as N grows.

ROC curve analysis
Receiver Operating Characteristic (ROC) curve analysis traditionally looks at the sensitivity and specificity of a 'model' for predicting an outcome.
Question: based on our model, can we accurately predict whether a prostate cancer patient has capsular penetration?

ROC curve analysis (continued)
An association between predictors and outcome is not enough; we need a 'stronger' relationship.
Classic interpretation of sensitivity and specificity, for a binary test and a binary outcome:
• sensitivity = P(test + | true disease)
• specificity = P(test - | truly no disease)
What is "test +" in our dataset? What does the model provide for us?

[Figure: sensitivity and specificity (0 to 1) plotted against the probability cutoff (0 to 1).]

Fitted probabilities
The fitted probabilities are the probabilities that a NEW patient with the same 'covariate profile' will be a "case" (e.g., capsular penetration, disease, etc.).
We select a probability 'threshold' to determine whether a patient is classified as a case or not. Some options:
• high sensitivity (e.g., cancer screens)
• high specificity (e.g., PPD skin test for TB)
• maximize the sum of sensitivity and specificity

ROC curve
. xi: logit capsule i.dpros detected gleason logpsa
i.dpros    _Idpros_1-4
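The slide above sets up the ROC analysis in Stata (xi: logit ...). As a rough base-R counterpart, the sketch below reuses the mreg1 fit shown earlier: it converts the fitted probabilities into predicted case status at a chosen cutoff, reads off sensitivity and specificity, and then sweeps the cutoff to trace out the ROC curve. The helper function, the example cutoffs (0.50 and 0.25), and the plotting call are illustrative choices, not the lecture's own code.

# Fitted probabilities and observed outcomes from the earlier logistic fit
phat <- fitted(mreg1)   # P(capsular penetration | covariate profile)
y    <- mreg1$y         # observed 0/1 outcome used in the fit

# Sensitivity and specificity when classifying patients at a given cutoff
classify <- function(cutoff) {
  pred <- as.numeric(phat >= cutoff)
  c(sensitivity = mean(pred[y == 1]),      # P(predicted + | true +)
    specificity = mean(1 - pred[y == 0]))  # P(predicted - | true -)
}

classify(0.50)   # call a patient a "case" when the fitted probability is >= 0.50
classify(0.25)   # a lower cutoff raises sensitivity at the cost of specificity

# Sweeping the cutoff from 0 to 1 traces out the ROC curve
cutoffs <- seq(0, 1, by = 0.01)
roc <- t(sapply(cutoffs, classify))
plot(1 - roc[, "specificity"], roc[, "sensitivity"], type = "l",
     xlab = "1 - Specificity", ylab = "Sensitivity")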