Module 3:1 - Regression Models


Youtube Playlist Associated Files

Introduction to Regression Models

  • Previously, we were working with Mean Structure Models (Analysis of Variance), where X values are Nominal or Ordinal
  • Now we switch to Regression Models, where X values are Interval or Ratio
Regression.png
  • For sample data, we use lowercase letters
SampleFormula.png
  • Note: In our earlier example relating number of drinks to nightclub cost, there was no error because the number of drinks perfectly predicted the cost. From now on, though, our models include an error component.

For example: How Study time relates to Final Exam Score

  • Model exam grade as a function of number of study hours (a quantitative predictor)
    • error component included (individual differences also affect score)
  • How do we model this?
    • Table
Table.png
    • Scatterplot
Graph.png
      • in a scatterplot we can see the relationship more clearly
      • we draw a line of best fit to summarize the relationship (a quick code sketch follows this list)
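A minimal sketch of this example in Python. The study-hours and exam-score values here are made up for illustration (they are not the values from the table image above):

```python
import numpy as np

# Hypothetical data: hours studied (X) and final exam score (Y)
hours = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
scores = np.array([55, 61, 58, 70, 72, 74, 80, 85], dtype=float)

# Fit the best fitting straight line (a degree-1 polynomial);
# np.polyfit returns the slope first, then the intercept
b1, b0 = np.polyfit(hours, scores, deg=1)
print(f"intercept b0 = {b0:.2f}")  # predicted score at 0 hours of study
print(f"slope     b1 = {b1:.2f}")  # points gained per extra hour studied

# Predicted scores and errors (residuals) for each person
predicted = b0 + b1 * hours
errors = scores - predicted
```

The residuals are the error component: the part of each score that study time does not explain.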

The One Predictor Linear Regression Model

What do we mean by Best Fitting?

  • Regression: a statistical technique for finding the best fitting line for a set of data (the regression line)
    • the fit doesn’t have to be a straight line; with more than one predictor it can even be a plane
    • Linear regression finds the best fitting straight line using one predictor
    • rather than talking about group differences, we talk about what a one-unit increase in X gets us in Y

ANOVA vs. Regression

  • In ANOVA we paid attention to the Tau sub i’s. In regression we use X sub i (the score on X for the ith individual) multiplied by Beta 1 (the slope, or effect, of the predictor on the outcome variable)
    • for every one-unit increase in X, Y increases by Beta 1
  • In ANOVA we had a grand mean, Mu. In regression we use the Y intercept, or Beta 0: how much of the outcome you have when you have 0 of the predictor.
  • Finally, for both ANOVA and regression, we have the Y score set equal to our equation plus an error term.
LinearFormula.png
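Spelled out in plain notation (a text rendering of the formula image above), the two models are:

```latex
% ANOVA (mean structure) model: grand mean + group effect + error
Y_{ij} = \mu + \tau_i + \varepsilon_{ij}

% One-predictor linear regression model: intercept + slope times X + error
Y_i = \beta_0 + \beta_1 X_i + \varepsilon_i
```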

Graphs

  • For the sample (see picture)
  • On a graph (see picture below)
GraphwithLabel.png
    • We see the line of best fit as the red line; for every one-unit increase in X, we get the same increase in Y along this line (no matter where in X we are)
    • Error is the difference between each point and the line of best fit
ErrorFormula.png
      • the difference between the actual score and the predicted score (the score predicted by the line of best fit)
      • Previously we predicted an individual's group mean; now we predict a point on the line of best fit
        • The formula for the line of best fit is the model without the error term
LinearRegressionFormula.png
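In plain notation (a text rendering of the formula images above):

```latex
% Line of best fit: the model without the error term
\hat{Y}_i = b_0 + b_1 X_i

% Error (residual): actual score minus predicted score
e_i = Y_i - \hat{Y}_i
```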

So, What is “Best Fitting”?

  • line that gives us parameters (b0 and b1) that best estimate Beta0 and Beta1
    • Unbiased estimates

Least Squares

  • This method of fitting a line was developed by the mathematician Carl Friedrich Gauss in 1794.
  • Least Squares: an approach to fitting a model (a line) to data in which the sum of the squared distances from the data points to the line is minimized.
  • Linear Least Squares Criterion
3 1 pic1.png
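In symbols, the criterion shown above says: choose b0 and b1 to make the sum of squared residuals as small as possible.

```latex
\min_{b_0,\, b_1} \; \sum_{i=1}^{n} \bigl( Y_i - (b_0 + b_1 X_i) \bigr)^2
```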

Look at the two graphs below:

3 1 pic2.png

They have the same "Yi" (the height on the Y-axis of each individual) but different "Y hat i" (the location in Y for each individual's predicted score).

  • The linear least squares criterion compares each individual's actual Y score to their predicted score, and we try to minimize the sum of the squared deviations. Thus, the best fitting line is the line that minimizes the sum of squared residuals, or the line that gets closest to the points in squared space.
  • The linear least squares criterion always produces unbiased estimators and closed-form solutions.
    • The closed-form solutions are the simple equations obtained by using calculus to solve the linear least squares criterion. These equations allow us to use sample data to form our estimates: simple formulas for b0 and b1 (see the sketch below).
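A minimal sketch of those closed-form formulas in Python, using the same hypothetical data as the earlier sketch. The slope is the sum of cross-products of deviations divided by the sum of squared deviations in X, and the intercept follows from the means:

```python
import numpy as np

# Same hypothetical study-hours / exam-score data as before
x = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
y = np.array([55, 61, 58, 70, 72, 74, 80, 85], dtype=float)

# Closed-form least squares estimates:
#   b1 = sum((x - x_bar) * (y - y_bar)) / sum((x - x_bar)^2)
#   b0 = y_bar - b1 * x_bar
x_bar, y_bar = x.mean(), y.mean()
b1 = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar) ** 2)
b0 = y_bar - b1 * x_bar

# The same line minimizes the sum of squared residuals,
# so it matches what np.polyfit finds
assert np.allclose([b1, b0], np.polyfit(x, y, deg=1))
print(f"b0 = {b0:.2f}, b1 = {b1:.2f}")
```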

The Conditional Mean for Y given X

  • As we’ve seen, the predicted score for an individual is the intercept, plus the slope times the score on X for that individual. The corresponding version of this formula that refers to the population is as follows:
Wiki 1.png
  • This model predicts the mean of all individuals at a particular level of X.
  • Note: This model does not have an error term because we are not talking about individuals’ actual scores; we are talking about the mean of all individuals who have a particular level of X.
  • Remember: Our general linear model always assumes that error is normally distributed with a mean of zero and the same variance.
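In plain notation (a text rendering of the population formula image above), together with the error assumption:

```latex
% Conditional mean of Y given X: no error term, because it describes
% the mean of all individuals at a particular level of X
\mu_{Y \mid X} = \beta_0 + \beta_1 X

% General linear model error assumption: normal, mean zero, constant variance
\varepsilon_i \sim N(0, \sigma^2)
```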


Representing the Study/Score Example Graphically:

Wiki 3.png
  • Red line: The TRUE population linear regression line. It is NOT predicting individuals. It is predicting the mean of all individuals conditional on a particular level of X.
  • Blue Dots: Mu Y Given X – The average amount of Y for each level of X (the average score at each level of time studied)
  • Green Distributions: Shows that in the population, individuals do not always get the same Y (score), even if they have the same level of X (amount studied). This is the error.
Wiki 2.png
  • In our sample model we make a prediction for Y, but the prediction is really the mean of all individuals with a given amount of X. In ANOVA we thought of this as simply a group mean for a particular treatment; we can think of the central component of our population model in the same way, as the mean of the particular group of people at a particular level of X.