# Analysis of variance/Follow-up tests

Subject classification: this is a mathematics resource.

## Presentation

There are two types of follow-up tests: **planned contrasts** (used when you have hypothesised specific group comparisons) and **post hoc tests** (used when you have not hypothesised specific differences; these test all pairs of groups).

To learn more about conducting follow-up tests for ANOVA, consult:

- Allen & Bennett (SPSS for the Health and Behavioural Sciences):
  - Chapter 7.3.3: Follow-up analyses (One-way ANOVA, Example 1)
  - Chapter 7.4.3: Follow-up analyses (One-way ANOVA, Example 2)
  - Chapter 8.4.2: Follow-up analyses (Factorial between-groups ANOVA, Example 2)
  - Chapter 9.4.3: Follow-up analyses (One-way repeated measures ANOVA)
- Howell (Fundamental Statistics):
  - Section 16.5: Multiple comparison procedures (pp. 375-383)
- Howell (Statistical Methods):
  - Chapter 12: Multiple comparisons among treatment means (pp. 343-389)
- Francis (Introduction to SPSS for Windows):
  - Section 3.3.6.1: Post hoc tests and planned contrasts (pp. 61-63)
  - Section 3.3.8.4: Planned contrasts for within-subjects ANOVA (p. 71)

## Planned contrasts

Technically, planned tests (or the use of planned contrasts) are not "follow-up" tests conducted following a 'significant' omnibus *F* value from an ANOVA. Planned *t*-tests can be conducted **instead of** an ANOVA (or even notwithstanding a 'non-significant' ANOVA *F* value) by virtue of their having been planned prior to collecting the data in that experiment or study. A full complement of planned contrasts consists of one less than the number of means in the study, and they should all be at least linearly independent of one another (if they are not mutually orthogonal, a stronger form of linear independence). Two other procedures appropriately used with planned contrasts (besides *t*-tests) are Dunnett's many-one method and Bonferroni's inequality.
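The linear-independence requirement above can be checked numerically. The following sketch uses hypothetical contrast weights for J = 4 means (invented for illustration, not taken from the sources listed) and verifies that a full complement of J − 1 contrasts is mutually orthogonal:

```python
import numpy as np

# Hypothetical set of J-1 = 3 planned contrasts for J = 4 group means.
# Each row holds one contrast's weights; each row's weights sum to zero.
contrasts = np.array([
    [1, -1,  0,  0],   # group 1 vs group 2
    [1,  1, -2,  0],   # groups 1+2 vs group 3
    [1,  1,  1, -3],   # groups 1-3 vs group 4
])

# Every contrast's weights must sum to zero.
print(contrasts.sum(axis=1))

# Pairwise dot products: all-zero off-diagonal entries mean the contrasts
# are mutually orthogonal (assuming equal group sizes), which implies
# linear independence.
print(contrasts @ contrasts.T)
```

With unequal group sizes, orthogonality would instead require the weighted sums Σ c₁ⱼc₂ⱼ/nⱼ to be zero; linear independence alone (no row being a combination of the others) is the weaker minimum requirement stated above.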

Regardless of the number of means (say, J) in the study, each of the J − 1 planned contrasts has a df1 (numerator degrees of freedom) value of 1, they collectively account for the J − 1 between-groups degrees of freedom, and they are the *only* contrasts that may be evaluated for those J means. The important advantage offered by *post hoc* procedures is that whenever there are more than two means in a study, a potentially infinite number of contrasts can be created and evaluated (so pairwise comparisons typically just scratch the surface of the contrasts that are possible). The method of planned *t*-tests uses a decision-based (per contrast) error rate (as does Rodger's *post hoc* method), whereas the Bonferroni and Dunnett procedures use an experiment-wise error rate.
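As a concrete illustration of planned contrasts each carrying one numerator degree of freedom, here is a minimal Python sketch with made-up data (the group scores and contrast weights are invented for illustration): it pools the one-way ANOVA error term and tests the J − 1 = 2 planned contrasts with *t*-tests on N − J error degrees of freedom.

```python
import numpy as np
from scipy import stats

# Hypothetical raw scores for J = 3 groups (so J-1 = 2 planned contrasts).
groups = [np.array([4.0, 5.0, 6.0, 5.5]),
          np.array([6.0, 7.0, 8.0, 7.5]),
          np.array([5.0, 5.5, 6.5, 6.0])]

n = np.array([len(g) for g in groups])
means = np.array([g.mean() for g in groups])
N, J = n.sum(), len(groups)

# Pooled error term (MSE) from the one-way ANOVA; df_error = N - J.
ss_error = sum(((g - g.mean()) ** 2).sum() for g in groups)
df_error = N - J
mse = ss_error / df_error

def planned_contrast(weights):
    """t-test for one planned contrast; each contrast uses 1 numerator df."""
    w = np.asarray(weights, dtype=float)
    est = (w * means).sum()                      # contrast estimate
    se = np.sqrt(mse * (w ** 2 / n).sum())       # its standard error
    t = est / se
    p = 2 * stats.t.sf(abs(t), df_error)         # two-tailed p value
    return t, p

# A full complement: J - 1 = 2 linearly independent contrasts.
for w in ([1, -1, 0], [1, 1, -2]):
    t, p = planned_contrast(w)
    print(f"weights={w}: t({df_error}) = {t:.2f}, p = {p:.4f}")
```

Because these contrasts were (hypothetically) planned in advance, each could be evaluated at the per-contrast error rate without first requiring a significant omnibus *F*, as the paragraph above notes.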

## Commonly used *post hoc* tests

For a factorial ANOVA, if you get a significant *F* for an IV that has more than two groups and you had made no specific hypotheses, then your main options are to follow up with *post hoc* tests, choosing among:

- Fisher's Least Significant Difference (LSD) (or protected *t*-test)
- Newman-Keuls multiple range test
- Tukey's test (or Tukey's Honestly Significant Difference (HSD)):
  - Particularly useful for comparing groups of unequal cell sizes
- Scheffé's method
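SPSS computes the named tests above directly. Purely as an illustration of the pairwise logic that *post hoc* procedures follow, here is a minimal Python sketch with invented data: it runs all pairwise *t*-tests and applies a Bonferroni adjustment (one of the simpler corrections, also offered in SPSS's *post hoc* dialog), flagging each pair as significant or not.

```python
from itertools import combinations

import numpy as np
from scipy import stats

# Hypothetical scores for three groups, with no specific hypotheses made.
data = {
    "control": np.array([4.0, 5.0, 6.0, 5.5, 4.5]),
    "drug_a":  np.array([6.0, 7.0, 8.0, 7.5, 6.5]),
    "drug_b":  np.array([5.0, 5.5, 6.5, 6.0, 5.5]),
}

pairs = list(combinations(data, 2))   # every pair of group labels
alpha = 0.05

for g1, g2 in pairs:
    t, p = stats.ttest_ind(data[g1], data[g2])
    p_adj = min(p * len(pairs), 1.0)  # Bonferroni adjustment
    verdict = "significant" if p_adj < alpha else "non-significant"
    print(f"{g1} vs {g2}: t = {t:.2f}, adjusted p = {p_adj:.4f} ({verdict})")
```

Note that Bonferroni controls an experiment-wise error rate by dividing alpha across the comparisons made, which is why it becomes conservative as the number of pairs grows; the named tests above trade off power and error control in different ways.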

In order to get these analyses in SPSS:

- Analyze
- General Linear Model
- Univariate
- Insert DV and Fixed Factors (IV)
- *Post hoc*
- Insert Factors for *post hoc* analysis
- Tick the boxes for the *post hoc* tests you want
- OK

You only need to report one set of *post hoc* analyses.

Once you get the results, interpretation is pretty straightforward, because you will have a series of comparison tests between each pair of means, showing either significant or non-significant differences.

## A statistically more powerful alternative

A *post hoc* procedure that is more powerful than all of the *post hoc* procedures mentioned above is Rodger's method. It cannot currently be run in SPSS, but the free, Windows-based program Simple, Powerful Statistics (SPS), available via an external link on the Wikiversity page on Rodger's method, makes it relatively easy to use this procedure with independent or correlated means, proportions, or ranks. In addition to its increased power, Rodger's method offers unlimited *post hoc* data analysis accompanied by a guarantee that the long-run expectation of type 1 errors can never exceed Eα (i.e., .05 or .01). Both the increased power of Rodger's method (as contrasted with other *post hoc* procedures) and the impossibility of type 1 error rate inflation result from its use of a decision-based error rate (as is also used with the method of planned *t*-tests).

## See also

- ANOVA follow-up tests (Wikipedia)