**Statistics: The Basics of Comparing Means**

*Comparing Means*

In Module Seven, we focused on correlational research, in which the relationship between two measured, quantitative variables was examined. However, in many research situations, researchers are not interested in simply measuring variables to see how they relate; rather, they want to manipulate one variable to assess its impact on another. Of course, manipulating an independent variable to see its effect on a dependent variable is a fundamental way to assess causality, and it is central to experimental (as opposed to correlational) research.

When researchers manipulate an independent variable, they typically end up with several groups of participants. Imagine, for instance, that a researcher is interested in testing the effects of background noise on concentration. Perhaps the researcher would have two groups: participants randomly assigned to perform a concentration task with background noise, and participants randomly assigned to perform the concentration task without any background noise. The researcher could then compare performance on the concentration task as a way of seeing if the independent variable (background noise) had an effect on the dependent variable (concentration). Going further, perhaps the researcher could include a third group of participants who hear a different type of background noise. So, for instance, there could be a group that hears white noise in the background, a group that hears rock music in the background, and a group that hears no noise at all. The researcher could also add a fourth group, a fifth group, and so on.

The bottom line here is that when researchers manipulate an independent variable, they are left with several groups of participants. They then need to compare the scores of each group on the dependent variable to see if there is a difference. To compare the scores, researchers typically use the mean. So, they calculate the mean of each group (and the standard deviation), and they use this information to assess group differences. If the groups reliably differ, the researchers can feel confident that the independent variable has caused a change in the dependent variable.
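As a minimal sketch of this first step (the group names and scores below are invented for illustration), the mean and standard deviation of each group can be computed like so:

```python
from statistics import mean, stdev

# Hypothetical concentration scores (e.g., items answered correctly)
# for two invented groups; the data are illustrative only.
noise_group = [12, 9, 11, 8, 10, 7, 9, 10]
quiet_group = [14, 12, 15, 11, 13, 12, 14, 13]

# Researchers summarize each group with its mean and standard deviation,
# then compare the group means to look for an effect.
for name, scores in [("noise", noise_group), ("quiet", quiet_group)]:
    print(f"{name}: mean = {mean(scores):.2f}, sd = {stdev(scores):.2f}")
```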

*Types of Tests*

As you can see from the example above, there are many research situations that result in a researcher comparing group means. Depending on the experiment or the manipulation, there may be two groups, three groups, or more.

Here is a brief overview of several different types of statistical tests and the research situations to which they apply. Although they differ statistically, all of these tests are ways of comparing the scores of different groups. Keep in mind that although the nature of the independent variable(s) differs in these designs, the dependent variable is always a quantitative variable.

- **Independent samples t-test**: This test is used whenever you are comparing the scores of **two** independent groups. Independent means that the scores from one group have no relation to the scores of the other. So, the example where we would compare the results of a group of participants who completed the concentration task with background noise to the results of those who did not would call for an independent samples t-test.

- **Dependent samples t-test (also known as a paired t-test)**: Like the test above, this test is used when comparing the means of two groups of scores. The key difference, however, is that a dependent samples t-test is used when the two groups are not independent. A common example of this type of test occurs when each participant is measured twice and the two sets of scores are compared with each other. Say, for instance, you wanted to again test the effect of background noise on concentration performance. Rather than use two totally separate groups (which would call for an independent samples t-test), you could instead have only one group, but test each participant under two conditions. The participants could first take the test without background sound and then take it with the sound. Now, you would be comparing the two sets of scores to each other. So in this type of t-test, you are still comparing the means of two sets of scores, but the two sets of scores are not independent of each other.

- **One-way ANOVA**: The t-tests listed above can only be used when you have two groups of scores. What if you have three or four (or more)? The one-way ANOVA is used whenever you have one independent variable with more than two groups. So the example of a researcher comparing concentration performance between participants who listen to white noise, rock music, or no noise would call for a one-way ANOVA. A one-way ANOVA can have more than three groups; the key is simply that there is one independent variable with more than two conditions.

- **Factorial ANOVA**: Research designs can get much more complicated than simply having one independent variable. What if, for instance, you not only wanted to compare the effect of background noise on concentration performance, but you also wanted to test the effects of room brightness? So, you could have some participants complete the task with the lights in the room on and some complete the task with the lights off. Now you have a research situation with two independent variables (background noise and brightness). This might yield four groups (background noise/bright, background noise/dark, no background noise/bright, no background noise/dark), or it could yield even more (if you have more than two noise conditions or more than two brightness conditions).

The simple four-group example would be known as a 2 x 2 factorial ANOVA. It is a 2 x 2 because there are two separate independent variables, and each has two conditions (also known as levels). You could have a 2 x 3 factorial ANOVA (two independent variables, one with two conditions and one with three), a 3 x 3 factorial ANOVA (two independent variables, each with three conditions), or even a 2 x 3 x 4 factorial ANOVA (three independent variables, one with two conditions, one with three, and one with four). In fact, you can have any combination you can think of!
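Each of these tests ultimately boils down to a formula computed on the group scores. The following sketch (the function names and data are invented for illustration; in practice, researchers use statistical software rather than hand calculation) shows what the two t statistics and the one-way ANOVA F statistic actually compute:

```python
from math import sqrt
from statistics import mean, stdev

def independent_t(group_a, group_b):
    """Pooled-variance t statistic for two independent groups."""
    na, nb = len(group_a), len(group_b)
    # Pooled variance weights each group's variance by its degrees of freedom.
    pooled_var = ((na - 1) * stdev(group_a) ** 2 +
                  (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / sqrt(pooled_var * (1 / na + 1 / nb))

def paired_t(first, second):
    """t statistic for two dependent (paired) sets of scores:
    the mean of the within-participant differences, divided by its
    standard error."""
    diffs = [a - b for a, b in zip(first, second)]
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

def one_way_f(*groups):
    """F statistic for a one-way ANOVA with any number of groups:
    between-group variability divided by within-group variability."""
    all_scores = [s for g in groups for s in g]
    grand = mean(all_scores)
    k, n = len(groups), len(all_scores)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((s - mean(g)) ** 2 for g in groups for s in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical concentration scores for the noise vs. quiet example.
noise = [12, 9, 11, 8, 10, 7, 9, 10]
quiet = [14, 12, 15, 11, 13, 12, 14, 13]
print(f"independent t = {independent_t(noise, quiet):.2f}")
```

A useful sanity check on these formulas: with exactly two groups, the one-way ANOVA F statistic equals the square of the independent samples t statistic.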

*A Word on Statistical Significance*

In this overview, we have discussed research designs that compare group means. The idea is that if the means of the groups differ, then there is evidence of an effect of the independent variable on the dependent variable. If you think about the nature of real data, however, group means are almost always going to differ, at least a little! Take an incredibly simple example. If you take a group of 10 males and test their optimism level using a questionnaire, and then also take a group of 10 females and test their optimism level, you can almost be sure that the mean of each group will not be exactly the same. Let’s say scores on the optimism scale can range from 1 to 100. What are the odds that the two groups would have exactly the same mean, not even differing by a fraction of a point? Certainly not very good!

So in essence, researchers are not just interested in whether group means differ. Instead, they are interested in whether the group means differ in a statistically meaningful way. Psychologists use what is known as hypothesis testing to determine if experimental results are statistically significant. The details of this process, and the precise way that statistical significance is determined, will be covered in more detail in PSY 520. For now, we can simply say that results are statistically significant when they are unlikely to be due to chance. In the optimism example above, chance alone tells you that the male and female group means are likely to differ. If, however, they differ by an amount that we would not expect to find just by chance, then we can say that the results are statistically significant.

Researchers signify statistical significance using what is called a p value. In psychology research, if the p value of a statistical test is less than .05, the result is deemed to be statistically significant. A p value below .05 means that, if chance alone were operating, a difference as large as the one observed would be expected less than 5% of the time. Thus, the effect is likely to be statistically meaningful and representative of a "real" difference between the group means. Statistical significance is undoubtedly a confusing topic. Read Chapter 12 of Beginning Behavioral Research for more detail.
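The logic of "unlikely to be due to chance" can be made concrete with a simple permutation-style simulation (a sketch only, not the formal procedure covered in PSY 520; the optimism scores below are invented): shuffle the group labels many times and count how often a mean difference as large as the observed one arises when chance alone is at work.

```python
import random
from statistics import mean

random.seed(0)  # reproducible shuffles for this illustration

# Invented optimism scores (1-100 scale) for two groups of 10.
males   = [62, 55, 70, 48, 66, 59, 73, 51, 64, 58]
females = [68, 74, 61, 79, 66, 72, 58, 75, 70, 69]

observed = abs(mean(males) - mean(females))

# Repeatedly shuffle all 20 scores into two arbitrary "groups" of 10.
# Any difference between these shuffled groups is due to chance alone.
pooled = males + females
n_shuffles = 10_000
count = 0
for _ in range(n_shuffles):
    random.shuffle(pooled)
    if abs(mean(pooled[:10]) - mean(pooled[10:])) >= observed:
        count += 1

# The proportion of chance differences at least as large as the observed one
# plays the role of a p value here.
p_value = count / n_shuffles
print(f"observed difference = {observed:.2f}, p is approximately {p_value:.4f}")
```

If the observed difference is rarely matched by the shuffled (chance-only) differences, the resulting proportion is small, which is exactly the situation in which a result is declared statistically significant.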