Simple Linear Regression

In the previous lectures we focused on a single random variable. In many applications, however, we work with a pair of variables: for example, the distance traveled and the time spent driving, or a person's age and height. Generally, there are two types of relationships between a pair of variables: deterministic relationships and probabilistic relationships.

Deterministic Relationship

[Figure: distance plotted against time, a straight line with intercept $s_0$ and slope $v$.]

$s = s_0 + vt$

where $s$ is the distance traveled, $s_0$ is the initial distance (the intercept), $v$ is the speed (the slope), and $t$ is the time traveled.

Probabilistic Relationship

[Figure: scatter plot of height against age.]

On many occasions we face a different situation: one variable is related to another, but not exactly. Here we cannot definitively predict a person's height from his age the way we predicted distance from time with $s = s_0 + vt$.

Linear Regression

Statistically, the way to characterize the relationship between two variables as shown above is to use a linear model:

$y = a + bx + \epsilon$

Here $x$ is called the independent variable, $y$ is called the dependent variable, $\epsilon$ is the error term, $a$ is the intercept, and $b$ is the slope.

Least Square Lines

Given some pairs of data for the independent and dependent variables, we may draw many lines through the scattered points. The least squares line is the line through the points that minimizes the vertical distances between the points and the line. In other words, the least squares line minimizes the error term $\epsilon$.

Least Square Method

For notational convenience, the line fitted through the points is often written as

$\hat{y} = a + bx$

while the linear model we wrote before is $y = a + bx + \epsilon$. If we use the value on the line, $\hat{y}$, to estimate $y$, the difference is $(y - \hat{y})$. For points above the line the difference is positive, while for points below the line it is negative.

Error Sum of Squares

For some points the values of $(y - \hat{y})$ are positive (points above the line), and for other points they are negative (points below the line). If we simply added all of these up, the positive and negative values would cancel. Therefore we square each difference and sum the squares over all points. This sum is called the error sum of squares (SSE):

$SSE = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2$

The constants $a$ and $b$ are estimated so that the error sum of squares is minimized, hence the name least squares.

Estimating Regression Coefficients

Solving for the regression coefficients $a$ and $b$ by minimizing SSE gives

$b = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^{n} (x_i - \bar{x})^2}, \qquad a = \bar{y} - b\bar{x}$

where $x_i$ is the $i$-th value of the independent variable, $y_i$ is the dependent variable value corresponding to $x_i$, and $\bar{x}$ and $\bar{y}$ are the means of $x$ and $y$. (A short computational sketch follows the next section.)

Interpretation of a and b

The constant $b$ is the slope, which gives the change in $y$ (the dependent variable) due to a change of one unit in $x$ (the independent variable). If $b > 0$, $x$ and $y$ are positively correlated, meaning $y$ increases as $x$ increases; if $b < 0$, $x$ and $y$ are negatively correlated, meaning $y$ decreases as $x$ increases.

[Figure: two fitted lines, one with $b > 0$ sloping upward and one with $b < 0$ sloping downward, each with intercept $a$.]
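The two closed-form solutions above translate directly into code. Below is a minimal Python sketch, not part of the original lecture, that computes $b$, $a$, and the error sum of squares exactly as defined in the formulas above; the helper names fit_line and sse are mine.

```python
# Least-squares fit via the closed-form formulas from the lecture.
# fit_line and sse are illustrative helper names, not from the slides.

def fit_line(x, y):
    """Return (a, b) for the line y_hat = a + b*x that minimizes SSE."""
    n = len(x)
    x_bar = sum(x) / n
    y_bar = sum(y) / n
    # b = sum_i (x_i - x_bar)(y_i - y_bar) / sum_i (x_i - x_bar)^2
    b = (sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
         / sum((xi - x_bar) ** 2 for xi in x))
    # a = y_bar - b * x_bar
    a = y_bar - b * x_bar
    return a, b

def sse(x, y, a, b):
    """Error sum of squares: sum_i (y_i - y_hat_i)^2."""
    return sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
```

Applied to the food-spending data in the example below, these helpers reproduce the coefficients in the spreadsheet output shown there.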
Correlation Coefficient

Although we now have a regression line to describe the relationship between the dependent variable and the independent variable, the line alone is not enough to characterize the relationship between $x$ and $y$.

[Figure: two scatter plots, (a) and (b), each with its best-fit line; the points in (a) lie much closer to the line than those in (b).]

Obviously the relationship between $x$ and $y$ in (a) is stronger than that in (b), even though the line in (b) is the best-fit line for its points. The statistic that characterizes the strength of the relationship is the correlation coefficient, or $R^2$.

How Is R² Calculated?

If we used the mean $\bar{y}$ to represent $y$, the error would be $(y - \bar{y})$. However, we used $\hat{y}$ to represent $y$, so the error is reduced to $(y - \hat{y})$. Thus $(\hat{y} - \bar{y})$ is the improvement, and for every point in the graph

$(y - \bar{y}) = (y - \hat{y}) + (\hat{y} - \bar{y})$

To account for how much total improvement we get, we sum the improvements $(\hat{y} - \bar{y})$ over all points. Here we face the same situation as when calculating the variance: we square each difference and sum the squared differences over all points.

R Square

$R^2 = \frac{SSR}{SST}$

where

$SSR = \sum_{i=1}^{n} (\hat{y}_i - \bar{y})^2$  (the regression sum of squares)

$SST = \sum_{i=1}^{n} (y_i - \bar{y})^2$  (the total sum of squares)

We already calculated SSE (the error sum of squares) while estimating $a$ and $b$. In fact, the following relationship holds:

$SST = SSR + SSE$

$R^2$ indicates the proportion of the variance in $y$ explained by the regression.

A Simple Linear Regression Example

The following survey data show how much a family spends on food in relation to household income (x = income in thousands of dollars, y = dollars spent on food):

  x     y
  6.5   81
  4.0   96
  2.5   93
  7.2   68
  8.1   63
  3.4   84
  5.5   71

SUMMARY OUTPUT

Regression Statistics
  Multiple R          0.860893
  R Square            0.741137
  Adjusted R Square   0.689364
  Standard Error      7.026826
  Observations        7

ANOVA
              df   SS         MS         F          Significance F
  Regression   1   706.8329   706.8329   14.31523   0.012843
  Residual     5   246.8814    49.3763
  Total        6   953.7143

                Coefficients   Standard Error   t Stat     P-value    Lower 95%   Upper 95%
  Intercept       107.1008       7.781132       13.76417   3.63E-05    87.0988    127.1029
  X Variable 1     -5.20715      1.37626        -3.78355   0.012843    -8.74494    -1.66936
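As a concrete check, here is a short standalone Python sketch, mine rather than the lecture's, that recomputes the example regression directly from the formulas in this lecture. The printed values match the summary output above.

```python
# Recompute the food-spending regression from the lecture's formulas.
x = [6.5, 4.0, 2.5, 7.2, 8.1, 3.4, 5.5]   # income, thousand $
y = [81, 96, 93, 68, 63, 84, 71]          # $ spent on food

n = len(x)
x_bar = sum(x) / n
y_bar = sum(y) / n

# Slope and intercept from the least-squares formulas.
b = (sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
     / sum((xi - x_bar) ** 2 for xi in x))
a = y_bar - b * x_bar

# Sums of squares and R^2.
sse = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
sst = sum((yi - y_bar) ** 2 for yi in y)
ssr = sst - sse                  # SST = SSR + SSE
r_square = ssr / sst

print(f"a   = {a:.4f}")          # about 107.1008
print(f"b   = {b:.5f}")          # about -5.20715
print(f"SSE = {sse:.4f}")        # about 246.8814
print(f"SST = {sst:.4f}")        # about 953.7143
print(f"R^2 = {r_square:.6f}")   # about 0.741137
```

The remaining entries of the ANOVA table follow from the same quantities; for example, F = (SSR/1) / (SSE/(n-2)) = 706.8329 / 49.3763 ≈ 14.3152, matching the table.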

