Module 2:1 - Linear Models



A - Introduction

What we will be exploring in this module are the mathematical models that allow us to work through new types of designs.

These designs include:

  • Repeated Measures Designs: For when we have more than one measure for each subject, either over time or over different conditions
  • Multigroup Designs: For when we have multiple different groups, or multiple conditions that subjects are in
  • Continuous Predictor Designs: For when our predictor is continuous or quantitative, as opposed to a grouping variable

All of these designs require mathematical models to help us make an inference about our population, and the two kinds of relationships we will be modeling are functional and statistical relationships.

Functional Relationships

  • Functional relationships describe an exact relationship: once we know the value of the predictor, the outcome is completely determined.

Examples - Laundry

In this example, suppose we measured how much 100 randomly selected individuals spent doing their laundry. Even though we would observe variability in our data, we could easily explain the functional relationship between what these individuals are doing and how much they are paying. In other words, the number of loads of laundry a person does explains exactly how much they will end up spending.


Functional Relationship of Laundry Loads

This is a linear model because one side of the equation is a sum of terms that adds up to equal the other side. The general equation looks like this:

Functional Relationship General Equation
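Purely as an illustration, here is a minimal sketch of a functional relationship in Python, assuming a made-up price of $2.50 per load of laundry: once we know the predictor (loads), the outcome (cost) is determined exactly, with no individual error.

```python
# A sketch of a functional (error-free) relationship.
# The $2.50 per-load price is a hypothetical number, not from the lecture.
price_per_load = 2.50

def laundry_cost(loads):
    # The outcome is determined exactly by the predictor: no individual error.
    return price_per_load * loads

# Every person who does the same number of loads pays exactly the same amount.
print(laundry_cost(4))   # 10.0
print(laundry_cost(10))  # 25.0
```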

Examples - Halloween at Downtown

In this example we have 100 randomly selected individuals, and we can explain the variability in our data in terms of something like number of drinks.

Functional Relationship in Terms of Drinks


Examples - Parking Permits

One example that uses groups instead of a continuous predictor is that of the cost of parking permits at UCSD. Once again we have 100 randomly selected individuals. We can explain the variability in our data in terms of the type of permits these individuals purchased.


Functional Relationship in Terms of Parking Passes


Another way to represent this is to express the price of each permit as a difference from an overall average, like this:

Price as a Difference from an Overall Average

This is known as an Effects Coded Model, which represents each group's location as an overall average plus a deviation from that overall average.

Effects Coded Model
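As a rough sketch of effects coding (the permit types and prices below are made up, not UCSD's actual prices), each group's price can be rebuilt as the overall average plus that group's deviation from the average:

```python
import numpy as np

# Hypothetical permit types and prices, used only to illustrate effects coding.
permit_prices = {"A": 700.0, "B": 550.0, "S": 400.0}

grand_mean = np.mean(list(permit_prices.values()))            # overall average price
offsets = {g: price - grand_mean for g, price in permit_prices.items()}

# Each group's price equals the grand mean plus its deviation (offset).
for g, price in permit_prices.items():
    print(g, grand_mean + offsets[g], "==", price)
```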


Modeling Statistical Relationships

We don't usually have exact or perfect knowledge of the functional relationships between variables, and that's when we turn to modeling statistical relationships. These models add an individual error term to the model we have just seen. This individual error accounts for all of the things we didn't measure, which means that we won't be able to perfectly represent an individual's score as we did before.


Statistical Relationship Model
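Here is a minimal simulation sketch of that idea (all numbers below are hypothetical): each score is a grand mean plus a group offset plus individual error, so scores vary around their group mean instead of being reproduced exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical grand mean, group offsets, and error spread.
grand_mean = 300.0
offsets = {"group1": 20.0, "group2": -5.0, "group3": -15.0}

# Each individual's score = grand mean + group offset + individual error.
scores = {g: grand_mean + off + rng.normal(0.0, 25.0, size=10)
          for g, off in offsets.items()}

# Because of the error term, scores scatter around their group mean.
for g, y in scores.items():
    print(g, y.round(1))
```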


Example - Flight Costs

One example is that of flight costs; we again have 100 randomly selected individuals. For a predictor like flight duration, although we are able to fit a line through our data, it doesn't perfectly represent every single score in our data.


Flight Duration

However, if we try explaining the variance using a grouping variable (such as airline) rather than the continuous variable of flight duration, we are able to find a difference in terms of average cost.

Relationship in terms of Airline
Note that adding these two terms gives the group mean.

Notice that there is still variability in each group, remember that this is captured by the individual error.

Summary

Remember that the individual error is what sets our benchmark to understand how big the effects we measure actually are.

C - Components of the One Factor Linear Model

This is the overall equation for the Linear Model for populations, from Dr. Parris' Components of the One Factor Linear Model I

1 - The General Linear Model

There are two sets of models that are part of the General Linear Models class:

1 - Regression Models

An example of a Regression Model, from Dr. Parris' Components of the One Factor Linear Model I

There is a continuous predictor along the x-axis and a continuous outcome along the y-axis.

The X variable is Interval or Ratio Scaled

2 - Mean Structure Models

(Analysis of Variance 'ANOVA' Models)

An example of a Mean Structure or ANOVA Model, from Dr. Parris' Components of the One Factor Linear Model I

This is used to model the structure of different means. There are factors along the x-axis and continuous outcomes along the y-axis.

The X variable is Nominal or Ordinal Scaled

2 - One Factor Linear Model (Population)

Note: Even in the population there is individual error. There are still other variables that could be having an effect.

1 - Yij or The Score on Y

This is the airlines example of Yij. Dr. Parris takes the group (j) and finds the individual (i), from Dr. Parris' Components of the One Factor Linear Model I

Yij represents the score of a particular individual in a particular group in the population. Another way of thinking about it is as the "ith individual in the jth group". The j subscript represents the group or factor along the x-axis, and the i subscript represents the individual value or outcome along the y-axis in that group.

In the airplane example, Dr. Parris finds Yij, or the Score on Y, by going to a particular group, symbolized by j (in this case the 1 refers to Delta), and then finding the individual score in that group, symbolized by i, to locate the particular person in this group.
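A small sketch of that indexing (the data below are made up): j picks the group and i picks the individual within that group.

```python
# Made-up scores, organized by group: j selects the group, i the individual.
Y = {
    1: [310.0, 295.5, 330.0],   # group j = 1 (Delta in the lecture example)
    2: [260.0, 275.0, 268.5],   # group j = 2
}

j, i = 1, 3                 # the 3rd individual in group 1
Y_ij = Y[j][i - 1]          # subtract 1 because Python lists are zero-indexed
print(Y_ij)                 # 330.0
```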

2 - Mu or The Grand Mean

This is the airlines example of Mu. Dr. Parris finds the grand mean of the whole population, from Dr. Parris' Components of the One Factor Linear Model I

Mu is the overall grand mean; since we don't need one mean for each group, there is no subscript of i or j after the Mu symbol.

Here, to find the Mu, Dr. Parris finds the overall average of every individual in the population. This is signified by the blue line along the whole graph.

3 - Tj or Treatment Effect for Group J

This is the airlines example of Tj. Dr. Parris finds the differences between the group means and the grand mean, from Dr. Parris' Components of the One Factor Linear Model I

Tau, or Tj, refers to the different offsets for different groups. Offsets are the differences between the individual group means and the grand mean.

Dr. Parris finds the Tj for each group by finding the difference between the group mean and the overall or grand mean of the population. T1 was 11.80, T2 was -33.47, and T3 was -8.33. The positive or negative sign tells us whether the group mean was above or below the grand mean, and the value tells us how far, in the units that Y is measured in.
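A hedged sketch of the same computation with made-up data (the numbers below will not reproduce the lecture's 11.80, -33.47, and -8.33): each Tj is that group's mean minus the grand mean of all individuals.

```python
import numpy as np

# Hypothetical scores for three groups of equal size.
groups = {
    1: np.array([310.0, 322.0, 305.0]),
    2: np.array([262.0, 270.0, 258.0]),
    3: np.array([288.0, 295.0, 291.0]),
}

all_scores = np.concatenate(list(groups.values()))
grand_mean = all_scores.mean()

# T_j = group mean - grand mean; positive means above the grand mean,
# negative means below it.
T = {j: y.mean() - grand_mean for j, y in groups.items()}
print({j: round(t, 2) for j, t in T.items()})
```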

4 - Eij or Error

This is the airlines example of Eij. Dr. Parris finds the Eij or error, from Dr. Parris' Components of the One Factor Linear Model II

E, or Error, is the difference between an individual's Y score and the group mean. In other words, it is how far an individual value is from its group mean. This is a very important piece because the distribution of this error gives us information about how variable people are on whatever we have measured. This can tell us whether there is actually a Tj, or treatment effect.

Here Dr. Parris finds the Eij, or Error, by computing the difference between an individual's Y score and that individual's group mean. He brings up the example of Tom, the individual we found for Yij. Here we go to the group Delta, signified by the subscript j equalling 1, and then to the individual, signified by the i value of 10. Then we take this Yij value and find the difference between it and the group mean.
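A minimal sketch of the error term with made-up scores: Eij is an individual's score minus the mean of that individual's own group.

```python
import numpy as np

# Hypothetical scores for a single group.
group_scores = np.array([310.0, 322.0, 305.0])
group_mean = group_scores.mean()

# E_ij = individual score - group mean, one value per individual.
E = group_scores - group_mean
print(E.round(2))
print(round(E.sum(), 10))   # the errors within a group sum to zero
```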

3 - The Distribution of E, Error

The distribution of E shows us that within each group, individuals differ from their own group mean. Remember that E, or error, is the difference between the individual observation and the group mean. When we subtract the group mean from every individual's score, we make all of the group means equal to 0. This allows us to make better assumptions about the error within each group. We are making an assumption called homogeneity of variance: we assume that the distribution of error does not depend on which group an individual comes from. When we work with these models we can get an idea of the error.

So as we said in the beginning, this equation allows us to understand that the error that we measure after using this model is due to something that we haven't measured in this model. Just because we have this mathematical model does not mean that we won't have error. Error is the difference between an individual's score and the score that we predicted for them based on the group they are from.
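As a rough illustration of that point (the data below are made up), subtracting each group's own mean centers every group at zero, which lets us compare the spread of the errors across groups, as the homogeneity of variance assumption requires.

```python
import numpy as np

# Hypothetical scores for two groups.
groups = {
    1: np.array([310.0, 322.0, 305.0, 318.0]),
    2: np.array([262.0, 270.0, 258.0, 266.0]),
}

# After subtracting its own mean, each group's errors are centered at zero;
# homogeneity of variance assumes their spreads are similar across groups.
for j, y in groups.items():
    e = y - y.mean()
    print(j, "mean of E:", round(e.mean(), 10), "variance of E:", round(e.var(ddof=1), 2))
```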

D - The One Factor Linear Model - Sample Form

Sample models give us a way to estimate the population model. In particular, we are looking to estimate Tj, the treatment offset of each group. Remember, the values in a sample model are not parameters but rather estimates of the values in a population model.

Components of One Factor Linear Models

The components of One Factor Linear Model in a Sample are the same as in a population, but instead, the values represent estimates and not true parameters of a population.

Yij - the Score on Y for the ith individual in the jth group

Ybar - the estimated grand mean

tj - estimated treatment effect for Group J

eij - the residual error, or rather, the difference between an individual score and the mean of the group that they are a part of

Treatment Offsets

We are most interested in the treatment offsets in a sample because we want them to reflect the true offsets in the population model. Treatment offsets, again, are how much a group mean differs from the grand mean. There are two things we must point out when looking at treatment offsets.

1 - To calculate a treatment offset, all you have to do is find the difference between the mean of the group and the grand mean.

2 - Next, all the tj values will have sampling error, which means that the values will not be exactly equal to the population Tj values (see the sketch below).
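Here is a small simulation sketch of that sampling error (the population values are made up): each new sample gives slightly different tj estimates, even though the population Tj values never change.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical population: grand mean 300, true offsets +20 and -20.
grand_mean = 300.0
population_T = {1: 20.0, 2: -20.0}

for sample in range(3):
    # Draw 20 individuals per group, then estimate the offsets from the sample.
    group_means = {j: (grand_mean + T + rng.normal(0.0, 25.0, size=20)).mean()
                   for j, T in population_T.items()}
    sample_grand_mean = np.mean(list(group_means.values()))
    t = {j: round(m - sample_grand_mean, 2) for j, m in group_means.items()}
    print("sample", sample + 1, t)   # close to +/-20, but never exactly equal
```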

OUR GOAL: To evaluate the tj values in a sample model and determine if there is a true difference in a population model. Remember, there will always be a difference between the group mean and the grand mean, so what is important for us to know is if this difference is large enough to reject our null hypothesis.

Residuals

First, we must look at our residual error (eij). In a sample model, this value represents how much an individual score differs from the mean of the group the individual belongs to. The residual error, eij, is equal to Yij - Y(hat)ij, that is, the individual's actual observed value - the value predicted by the model. Remember, the "hat" tells us that the value is predicted.

The Y(hat)ij is equal to the estimated grand mean (Y bar) + the estimated treatment effect for Group j (tj). By removing the error, we are left with the predicted score for an individual, which is equal to the mean of the group they belong to.
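A brief sketch of those sample-form pieces with made-up data: the predicted score for anyone in group j is the estimated grand mean plus that group's estimated offset (which works out to the group mean), and the residual is the observed score minus that prediction.

```python
import numpy as np

# Hypothetical scores for two groups.
groups = {
    1: np.array([310.0, 322.0, 305.0]),
    2: np.array([262.0, 270.0, 258.0]),
}

Ybar = np.concatenate(list(groups.values())).mean()     # estimated grand mean
t = {j: y.mean() - Ybar for j, y in groups.items()}     # estimated offsets

j, i = 1, 2                      # the 2nd individual in group 1
Y_ij = groups[j][i - 1]
Y_hat = Ybar + t[j]              # prediction = that individual's group mean
e_ij = Y_ij - Y_hat              # residual = observed - predicted
print(round(Y_hat, 2), round(e_ij, 2))
```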

E - Estimating the Mean Squared Error

Calculating Variance in Sample Data

In sample models, we assume that the variance of each group within the population is the same. To calculate the variance of the sample data, we must use sums of squares for error, that is, we must find the sum of each individual's squared error.

Remember, the variance is also called the mean squared error. To find the mean squared error:

1 - Find the sums of squares for error

2 - Divide that number by the degrees of freedom for error

The mean squared error is going to be the best unbiased guess of the variance in the population, or how much the individuals in a group differ from their group average.
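A short sketch of the computation with made-up data: sum each individual's squared deviation from its own group mean to get the sums of squares for error, divide by the degrees of freedom for error (total number of scores minus the number of groups), and the result is the mean squared error.

```python
import numpy as np

# Hypothetical scores for three groups.
groups = {
    1: np.array([310.0, 322.0, 305.0, 318.0]),
    2: np.array([262.0, 270.0, 258.0, 266.0]),
    3: np.array([288.0, 295.0, 291.0, 284.0]),
}

# Sums of squares for error: squared deviations from each group's own mean.
ss_error = sum(((y - y.mean()) ** 2).sum() for y in groups.values())

# Degrees of freedom for error: total scores minus number of groups.
n_total = sum(len(y) for y in groups.values())
df_error = n_total - len(groups)

mse = ss_error / df_error        # best unbiased guess of the within-group variance
print(round(ss_error, 2), df_error, round(mse, 2))
```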

Once we calculate the variance, we can understand whether the differences are due to sampling error or whether there is reason to infer that the treatment offsets reflect true group differences.

F - Inference About Treatment Effect from One Factor Linear Model

Inferences

When we make inferences about the treatment effects in one factor linear models in a sample, we have two options.

1 - Ho (the null hypothesis): all of the population treatment effects Tj are exactly zero, or rather, the grouping variable is not useful in the population

2 - H1 (the alternative hypothesis): not all of the Tj are equal to zero, or rather, the treatment offsets reflect real differences between groups in the population