MIT 15.301 - Inferential statistics

Inferential statistics

Statistics has 3+ components
• Data description & analysis
• Probability calculations
• Statistical inference (inferential statistics)
• Models ...

Hypothesis testing

Why test for differences?
If we find that females are rated as 8 on being nice and males are rated as 7 -- are they different? After all, different things are different! So why test these differences?

Why test for differences?
Two main reasons:
• Variance (measurement error, random error, other variables)
• Making inferences beyond the sample to the population at large

Variance
We are living in a stochastic world.

Hypotheses testing I
How can we test that things are different? Set 2 hypotheses that cover the entire possible range of outcomes: H0 & H1.
• H0 - no difference: Group 1 [=, ≥, ≤] Group 2
• H1 - difference: Group 1 [≠, >, <] Group 2

Examples
H0 & H1 -- examples:
• Is a coin fair?
• Gender and grades
• Healing with a new medication
• Ability to cheat
• Marriage over time
For each, please write H0 & H1.

Hypotheses testing Ib

Hypothesis testing II
Why test a hypothesis we don't believe in? We do this because we can only show that something is wrong -- not that something is right.
What does it mean to reject H0? If H0 is correct, the probability of getting this result (or a more extreme result) is very low -- thus we reject H0 and (for now) accept H1.
This is neither conservative nor liberal; it is just balancing 2 types of error.

2 types of errors

            H0 is wrong     H0 is correct
Reject H0   Correct         Type I error
Accept H0   Type II error   Correct

The meaning of p
• What does p mean?
• What is the difference between p = 0.03, p = 0.001, & p = 0.11?
• What is the relationship between p and confidence?
• What is the relationship between p and effect size?
• What is the relationship between p and the number of subjects?

The importance of effect size
Always give effect size measures:
• Mean difference
• Quartile differences
• etc.

Summary
• Hypotheses testing
• H1 & H0 -- setting the hypothesis to something you don't believe in
• The meaning of p
• 2 types of errors
• Effect size!

Statistical tests
• T-test for 1 sample
• T-test for 2 samples
• ANOVA
• Linear Regression
• Non-parametric tests

T test for 1 sample

One sample t test

t = \frac{\bar{x} - M}{s / \sqrt{n}}, \qquad s = \sqrt{\frac{\sum_i (x_i - \bar{x})^2}{n - 1}}

where \bar{x} - M is the difference from the comparison value, s is the standard deviation, and a larger n gives more confidence.

What do you do with "t"
Compare it to the "t table". When there is more data, the t distribution gets closer to normal.

Example step 1
H0: the average is 16; H1: the average ≠ 16.

Observation   Aggressiveness (x_i)   x_i - \bar{x}   (x_i - \bar{x})^2
1                    24                    4                16
2                    22                    2                 4
3                    23                    3                 9
4                    18                   -2                 4
5                    17                   -3                 9
6                    16                   -4                16
7                    20                    0                 0
Sum                 140                    0                58

Example step 2

s = \sqrt{\frac{\sum_i (x_i - \bar{x})^2}{n - 1}} = \sqrt{\frac{58}{6}} = 3.11, \qquad t = \frac{\bar{x} - M}{s / \sqrt{n}} = \frac{20 - 16}{3.11 / \sqrt{7}} \approx 3.4

T test for 2 samples

Two samples t test (dependent samples)

t = \frac{\bar{d} - \mu_d}{s_d / \sqrt{n}}

where \bar{d} is the observed mean difference between the pairs, \mu_d is the expected (hypothesized) difference, and s_d is the standard deviation of the differences.

Two samples t test (2 independent samples)

t = \frac{(\bar{x}_1 - \bar{x}_2) - (M_1 - M_2)}{\sqrt{\dfrac{n_1 \sigma_1^2 + n_2 \sigma_2^2}{n_1 + n_2 - 2} \cdot \dfrac{n_1 + n_2}{n_1 n_2}}}

The numerator is the observed mean difference minus the hypothesized difference.

Example 1
Who eats more lollipops, males or females? 7 females and 5 males were followed for a month.
Females: \bar{x} = 27, \sigma^2 = 29.2
Males: \bar{x} = 19, \sigma^2 = 24.57
Is there a difference?

Calculating ...

t = \frac{(27 - 19) - 0}{\sqrt{\dfrac{5 \times 24.57 + 7 \times 29.2}{5 + 7 - 2} \cdot \dfrac{5 + 7}{5 \times 7}}} \approx 2.4

Example 2
Does the sun create freckles? Each subject has one side of their body in the sun and one in the shade.
H0: sun side ≤ non-sun side
H1: sun side > non-sun side

Data

Subject   sun   shade   d     d - \bar{d}   (d - \bar{d})^2
1          6      8    -2         -3               9
2         12      5     7          6              36
3          3      2     1          0               0
4          4      6    -2         -3               9
5          7      0     7          6              36
6          9     10    -1         -2               4
7          4      4     0         -1               1
8          0      2    -2         -3               9
9          4      3     1          0               0
Sum                     9          0             104

Calculating ...

s_d = \sqrt{\frac{104}{8}} = 3.606, \qquad t = \frac{1 - 0}{3.606 / \sqrt{9}} \approx 0.83

t test summary
The t test as an example of inferential statistics: mean differences relative to the variance.
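Not part of the original slides: the worked t-test examples above can be cross-checked in software. The sketch below is a minimal illustration in Python using scipy.stats (an assumption; any statistics package would do), with the raw data taken from the aggressiveness and freckles tables; small differences from the slides' hand-computed values are rounding.

```python
# A minimal sketch (not from the slides): cross-checking the t-test examples
# above with scipy.stats. Assumes scipy is installed.
from scipy import stats

# One-sample t test: aggressiveness scores from "Example step 1",
# tested against the comparison value M = 16 (H0: the average is 16).
aggressive = [24, 22, 23, 18, 17, 16, 20]            # mean = 20, sum of squared deviations = 58
t1, p1 = stats.ttest_1samp(aggressive, popmean=16)   # two-sided by default
print(f"one-sample t = {t1:.2f}, p = {p1:.3f}")      # t comes out near 3.4

# Dependent (paired) t test: sun vs. shade sides from "Example 2".
# H1 is one-sided (sun side > shade side), hence alternative='greater'.
sun   = [6, 12, 3, 4, 7, 9, 4, 0, 4]
shade = [8,  5, 2, 6, 0, 10, 4, 2, 3]
t2, p2 = stats.ttest_rel(sun, shade, alternative='greater')
print(f"paired t = {t2:.2f}, p = {p2:.3f}")          # t comes out near 0.83

# The lollipop example gives only group means and variances (no raw data),
# so it is left as the hand calculation shown above.
```

Running this reproduces the slides' t values up to rounding (about 3.4 for the one-sample test and 0.83 for the paired test).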
ANOVA

ANOVA I
Analysis of variance: this is the same logic as the t test, but allowing for more tests. So why not just do multiple t tests?
1 - doing many tests can cause errors
2 - there is benefit in pooling observations across cells

ANOVA II
The story is the same ... Looking at the variance within cells relative to the variance across cells (as if there were no treatments), and asking how much the distinction between cells helps reduce the variance.

ANOVA IIa / ANOVA IIb / ANOVA IIc

A few examples I
No variance within or between groups

Case#   A    B    C    Overall
1       40   40   40
2       40   40   40
3       40   40   40
4       40   40   40
Mean    40   40   40   40

A few examples II
No variance within groups, but variance between groups

Case#   A    B    C    Overall
1       38   40   36
2       38   40   36
3       38   40   36
4       38   40   36
Mean    38   40   36   38

A few examples III
Variance within groups, but not between groups

Case#   A    B    C    Overall
1       36   46   36
2       40   37   38
3       44   37   43
4       40   40   43
Mean    40   40   40   40

A few examples IV
Variance within & between groups

Case#   A    B    C    Overall
1       38   45   36
2       39   44   37
3       39   49   35
4       36   46   36
Mean    38   46   36   40

The general formula
SS = sum of squares
SS total = SS within + SS between, i.e. SSt = SSw + SSb.
This means: take each observation, subtract the appropriate mean, and square the result.

Sum of squares
Once we have the sums of squares we compare SSb and SSw. As SSb gets larger relative to SSw, the results are more likely to be significant.

Degrees of freedom
This is a way to think about the number of independent observations we have, and thus the strength of the results. In general we lose a degree of freedom when we use a mean...

The formula

F = \frac{SS_b / df_b}{SS_w / df_w} = \frac{SS_b / (k - 1)}{SS_w / (n - k)}

Once you have the F value, use the F table with the correct df (a worked numerical sketch appears after Summary II below).

ANOVAs
One way & multiple way ANOVAs
(diagram: a two-way design with factors M/F and E/~E on the DV)

Summary I
Hypothesis testing: because we can never prove anything and can only disprove things, we set a hypothesis (H0) as one we do not believe in. Once we reject H0 we are willing to accept H1.

Summary II
T test and ANOVA: it is all about variance and mean differences. How large is the mean difference relative to the variance! These tests give us the probability of the data given that H0 is correct.
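As a closing worked example (not part of the original slides), here is a short Python sketch, assuming scipy.stats is available, that applies the F formula from "The formula" slide to the "A few examples IV" data, once by hand via SSb and SSw and once with scipy's built-in one-way ANOVA.

```python
# A minimal sketch (not from the slides): one-way ANOVA on the
# "A few examples IV" groups (variance within & between). Assumes scipy.
from scipy import stats

groups = {
    "A": [38, 39, 39, 36],
    "B": [45, 44, 49, 46],
    "C": [36, 37, 35, 36],
}

# Sums of squares by hand, following SSt = SSw + SSb.
all_obs = [x for g in groups.values() for x in g]
grand_mean = sum(all_obs) / len(all_obs)                    # overall mean = 40

ss_within = sum((x - sum(g) / len(g)) ** 2
                for g in groups.values() for x in g)        # spread inside each group
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                 for g in groups.values())                  # spread of the group means

k, n = len(groups), len(all_obs)
F = (ss_between / (k - 1)) / (ss_within / (n - k))          # F = (SSb/dfb) / (SSw/dfw)
print(f"SSb = {ss_between:.0f}, SSw = {ss_within:.0f}, F = {F:.1f}")

# Cross-check with scipy's built-in one-way ANOVA.
F_scipy, p = stats.f_oneway(*groups.values())
print(f"scipy: F = {F_scipy:.1f}, p = {p:.4f}")
```

The result (SSb = 224 against SSw = 22, a large F) illustrates the slides' point: the between-group differences are large relative to the variance within the groups.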

