# Module 1:9 - Basic Hypothesis Testing

## A - T-test

### 1 - Z-test vs. T-test

The Z-test requires that you know a lot of information about the population as a whole:

• μ (mu): the true population mean
• σ² (sigma squared): the population variance

Often we do not know the true population mean and variance, so we need a test that can tell us something about our data without knowing those two things. This is where the T-test comes in!

• In the t-test we estimate the population variance (σ²) with the sample variance (s²).
• To calculate a t-statistic, we replace σ with s:

t = (M − μ) / (s / √n)   (compare the z-statistic: z = (M − μ) / (σ / √n))

• The t distribution looks a little different from the z distribution: it has heavier tails, especially when the sample size is small.
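The difference between the two distributions can be seen numerically. Here is a short sketch using Python's scipy (not part of the original notes): the heavier tails of the t distribution mean a larger critical value for small samples, converging to the normal as degrees of freedom grow.

```python
from scipy import stats

# Two-tailed critical values at alpha = 0.05
z_crit = stats.norm.ppf(0.975)             # standard normal
t_crit_df5 = stats.t.ppf(0.975, df=5)      # t with 5 degrees of freedom
t_crit_df500 = stats.t.ppf(0.975, df=500)  # t with many degrees of freedom

# The t distribution has heavier tails, so its cutoff is larger for small
# samples, but it converges to the normal as df grows.
print(z_crit, t_crit_df5, t_crit_df500)
```

With 5 degrees of freedom the t cutoff is noticeably larger than 1.96; with 500 it is essentially the same as the normal cutoff.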

### 2 - General form of T-test

The core t-test formula is:

t = (sample statistic − population parameter under H₀) / (estimated standard error)

There are three kinds of t-tests:

• One Sample
• Dependent Measures
• Independent Measures

## B - Types of T-test

### 1 - One sample

• One-sample t-tests measure whether your sample mean differs from a hypothesized population mean in a statistically significant way.
• For example, if a company claims that their lightbulbs last 6,000 hours, but your sample mean was 5,895 hours, you could run a one-sample t-test.
• Keep in mind that to run a one-sample t-test, you must know the population mean for comparison.

#### a - Example and formula

• Remember, the population mean we use is the one that would occur if the null were true:

t = (M − μ) / s_M, where s_M = s / √n is the estimated standard error of the mean

#### b - JMP demonstration

• Analyze>Distribution, Dependent Variable in the Y section
• You will get a histogram similar to the ones we are used to looking at.
• Under the red triangle, select "Test Mean" and enter the hypothesized mean (H₀).
• We do not know the standard deviation, so leave it blank; JMP will estimate it using the sample SD (you can also choose to run a nonparametric test here).
• This pulls up the t-test summary statistics: the estimated standard error, degrees of freedom, t-statistic, and p-values for the two-tailed and one-tailed tests, along with a graphical representation of these values.
• HOWEVER: remember that statistical significance does NOT imply practical significance. Sometimes things just don't matter.
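Outside of JMP, the same one-sample test can be sketched in Python with scipy. The bulb lifetimes below are made-up numbers for illustration:

```python
import numpy as np
from scipy import stats

# Hypothetical lifetimes (hours) for a sample of 10 bulbs -- made-up data
lifetimes = np.array([5890, 5920, 5780, 5960, 5850, 5915, 5870, 5975, 5810, 5930])

# H0: the true mean lifetime is 6000 hours (the company's claim)
t_stat, p_two_tailed = stats.ttest_1samp(lifetimes, popmean=6000)

print(t_stat, p_two_tailed)  # negative t: the sample mean is below 6000
```

A negative t-statistic with a small two-tailed p-value would let us reject the company's claimed mean, which is exactly the summary JMP's "Test Mean" output reports.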

### 2 - Dependent measures

#### a - Example and formula

In dependent measures, we can analyze data from experiments like placebo vs. drug trials.

• We will take difference scores between conditions (i.e., placebo − treatment).
• To use a t-test here, we start from the general formula above.
• However, we do not want to have to know the population mean. So how can we get around this?
• Think of this hypothetical experiment: We take an entire population and give them a placebo. Then we take the same population and give them the drug. We take their measurements and then take a difference score. For every individual you have a difference score. Our sample is akin to taking a random sample of those difference scores.
• If the null were true--that the two populations are not different--then the mean difference score should be 0.
• When we plug this into our t-statistic, it gets rid of the necessity of knowing the population mean.
• Therefore our t-statistic for a dependent-measures test looks like this:

t = (M_D − 0) / s_{M_D}

• The denominator of this equation, s_{M_D} = s_D / √n, is calculated the same way as any standard error: the sample standard deviation of the difference scores divided by the square root of n.
• One last thing to note is degrees of freedom. In this case df = n − 1, with n being the number of difference scores.
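The equivalence between a paired t-test and a one-sample t-test on the difference scores (against μ = 0) can be checked in Python with scipy; the readings below are made up:

```python
import numpy as np
from scipy import stats

# Hypothetical paired readings for the same 8 people -- made-up data
placebo = np.array([140, 132, 128, 150, 138, 145, 129, 136])
drug    = np.array([134, 130, 121, 141, 133, 137, 128, 130])

# Dependent-measures (paired) t-test
t_paired, p_paired = stats.ttest_rel(placebo, drug)

# Same test done by hand: one-sample t-test on the difference scores vs. 0
diff = placebo - drug
t_diff, p_diff = stats.ttest_1samp(diff, popmean=0)

# Both give identical results, with df = n - 1 = 7
print(t_paired, t_diff)
```

This mirrors the two JMP routes described in this module: building a difference column yourself versus letting Matched Pairs do it for you.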

#### b - JMP demonstration

• First create a difference score column
• You can do so using this column formula (make sure to keep track of which order you enter them in).
• Similar to the One Sample t-test
• Create a distribution for the differences scores
• Analyze>Distribution; put differences column in Y column

You will get a distribution of the difference scores. Again, go to the red triangle and select "Test Mean".

• Set the hypothesized mean as 0 (remember if the null hypothesis is true, there will be no difference and therefore a difference score of 0).
• The output will again show the t-statistic, degrees of freedom, and the two-tailed and one-tailed p-values.

There is also another way to do the dependent t-test in JMP

• Instead of making the difference column, go to Analyze>Matched Pairs and place your two treatments in the Y section

You will get a matched-pairs output.

• Essentially, JMP calculates the difference scores for us and outputs the dependent two-tailed t-test statistics.
• REMEMBER: we always use the same line of logic to decide whether we can reject the null: compare the p-value to our alpha level.
• In summary: a one-sample t-test compares a sample mean to a known population mean, and a dependent-measures t-test compares the mean of the paired difference scores to 0.

### 3 - The Independent Measures t-Test

• Appropriate for independent-samples or independent-measures designs: used when we measure separate groups of subjects and compare the means of those separate groups (also known as a between-subjects design).
• Reminder: no subject experiences both treatments!
• There are two different varieties:
• Equal Variance Assumed (Pooled t-Test)
• Here we assume the standard deviations of each of our groups represent the same population standard deviation.
• We use this when we don't think our treatments will affect the standard deviations of our populations.
• The degrees of freedom for a pooled t-test can be calculated as follows:

df = n₁ + n₂ − 2

• Equal Variance NOT Assumed (Behrens-Fisher Problem)
• Here we don’t think the standard deviations of our groups represent the same population standard deviation
• We use this if we think it’s possible that our treatments will affect the standard deviations of our populations

#### a - Formula for the Independent Measures t-test

The following is the formula for an independent-measures t-test:

t = (M₁ − M₂) / s_{(M₁ − M₂)}

This formula is more easily understood if we try to summarize what each part of the formula is in words:

Sample mean difference: The difference between the sample mean of the first group and the second group

Population mean difference: The difference between the population mean of the first group and the second group. In the real world, we don’t really know the population means. However, this test is assuming the null hypothesis is true, which means there is no difference between our two population means! Because of this, we can sub in “0” for the population mean difference. This is why the population mean difference is not noted as part of the independent measures t-test formula.

Estimated standard error of the mean difference: This is the estimated standard error of our numerator. Over repeated samples, it notes the differences between independent sample means that we are likely to get. It can also be described as the estimate of the standard deviation of the sampling distribution of the sample mean difference.
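Both varieties of the test can be sketched in Python with scipy, where `equal_var` switches between the pooled and the Behrens-Fisher (Welch) version; the numbers below are made up:

```python
import numpy as np
from scipy import stats

# Hypothetical blood pressure (mmHg) for two separate groups -- made-up data
placebo = np.array([128, 135, 141, 122, 150, 138, 131, 144])
drug    = np.array([119, 130, 127, 115, 138, 124, 121, 133])

# Pooled t-test (equal variances assumed): df = n1 + n2 - 2
t_pooled, p_pooled = stats.ttest_ind(placebo, drug, equal_var=True)

# Welch's t-test (equal variances NOT assumed)
t_welch, p_welch = stats.ttest_ind(placebo, drug, equal_var=False)

print(p_pooled, p_welch)
```

With equal group sizes the two t-ratios coincide; only the degrees of freedom, and therefore the p-value, differ. Not assuming equal variances costs a little independent information, so the Welch p-value is never smaller here.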

#### b - The Independent Samples T-Test in JMP

To illustrate how to calculate an independent samples t-test in JMP, we are going to be using the following example. This data can be found in the journal that accompanies module 1:9.

Example: We are trying to find out if our new blood pressure drug reduces blood pressure, so we design an experiment. There are two groups of people: one will take a blood pressure drug and the other will take a placebo. We assigned each group’s treatment by random assignment. The group that takes the placebo will provide the data we need for Sample 1, and the group that takes the actual drug will provide the data we need for Sample 2. At the end of the experiment we will have a sample mean and an estimated variance from each group.

Step 1. Under “Analyze”, go to “Fit Y by X”

Step 2. Our X will be “Group”. This identifies the individuals who took the placebo or the drug.

Step 3. Our Y will be “mmHG”. This identifies blood pressure.

Step 4. After clicking OK, we'll get a dot plot of blood pressure for each group.

The treatment group is lower than the placebo group, but there is a lot of overlap!

Step 5a. To actually perform the hypothesis test: Click the little red triangle, and then select “Means/ANOVA/Pooled t”. (Remember: this assumes equal variance, which means we do not believe our treatments have changed the standard deviation of our populations).

If we look at all of the information under t Test, we can see that we have the difference between the groups, the standard error of the difference (estimate of the standard deviation of the sampling distribution of the mean differences), the t statistic, the degrees of freedom, and our two-tailed p-value.

Step 6a. As we can see, we have a two-tailed p-value of 0.0115. Because our p-value is less than our standard of evidence, an assumed alpha of 0.05 (0.025 in each tail), we can reject the null. This means we have a statistically significant effect of our drug, as the drug group's blood pressure is lower than the placebo group's. Remember: we reject the null when the p-value is less than alpha, and fail to reject it when it is not.

If we wanted to repeat the same test, but NOT assume equal variance, we would repeat steps 1-4 and then do the following:

Step 5b. Select "t Test" from the little red triangle's drop-down menu. This runs the version of the test that does not assume equal variances.

Step 6b. Under the t Test section we can see that the mean difference is the same as in the test where we assumed equal variance. This makes sense, as the difference observed in our samples is the same no matter what we assume about the population variances. The standard error of the difference and the t-ratio are also the same.

As we can see, our degrees of freedom (which capture how much independent information we have in our samples) are slightly different. By not assuming equal variances we give up a little bit of our independence in information.

We also have our two-tailed p-value, which is 0.0115: small enough to reject the null hypothesis at an alpha level of 0.05.

## C - Advantages and concerns of repeated measures

### 1- Advantages of repeated measures

• There are fewer subjects required.

Each person provides two pieces of data (e.g., measurements before and after treatment).

• It is well-suited for longitudinal studies.

Difference scores show how individuals change or grow over time.

• It is good for situations where individuals differ a lot from each other (e.g. blood pressure or IQ).

Each person is their own baseline. The test is more statistically powerful because the difference scores are less variable than the original scores.

E.g., we can see from the histograms that the individual scores are widely spread for the placebo group (~100 to 150) and the treatment group (~95 to 145). However, the difference scores are much less variable (from a bit less than 0 to about 10). This gives us more statistical power to detect the difference.

*Histograms of placebo, treatment, and the difference scores, as seen in Dr. Parris' video.*
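The power advantage can be illustrated with a small simulation in Python (hypothetical numbers, not Dr. Parris' data): individuals vary a lot, the treatment effect is small but consistent, and only the paired analysis reliably detects it.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical simulation: big individual differences, small consistent effect
n = 12
baseline = rng.normal(130, 15, n)               # each person's own level
placebo = baseline + rng.normal(0, 2, n)        # measurement noise
treatment = baseline - 5 + rng.normal(0, 2, n)  # consistent ~5-point drop

# Paired analysis: each person is their own baseline
_, p_paired = stats.ttest_rel(placebo, treatment)

# Ignoring the pairing: individual differences swamp the treatment effect
_, p_unpaired = stats.ttest_ind(placebo, treatment)

print(p_paired, p_unpaired)
```

Under these assumptions the paired p-value comes out far smaller than the unpaired one, because subtracting each person's baseline removes most of the between-subject variability.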

### 2 - Problems with repeated measures

#### a - Potentials for carry-over and order effects

• Order effects: because one treatment always occurs after the other, there may be effects of practice, fatigue, time, etc., which confound our study.
• Carry-over effects: the effect of one treatment can leak into the other. For example, participants take the drug treatment and then the placebo two months later; the blood pressure we measure after two months might be due to the persistent effect of the drug treatment, not the placebo.

#### b - Solution

• Counterbalancing: half of the subjects get one treatment first and then the other (A then B), and the other half get the reverse order (B then A). This helps eliminate small order and carry-over effects.
• Independent measures: measure separate groups of individuals.
• Matched samples:
• the ideal solution for situations where order or carry-over effects are so strong that we cannot use repeated measures.
• used in twin studies: one twin is in one condition and the other twin is in the other condition. Because twins are very similar, we treat them as the same person going through different conditions.