As knowledge and clinical theory have developed, clinical researchers have proposed more complex research questions, necessitating the use of elaborate multilevel and multifactor experimental designs. The analysis of variance (ANOVA) is a powerful analytic tool for analyzing such designs, in which three or more conditions or groups are compared. It is used to determine whether the observed differences among a set of means are greater than would be expected by chance alone. The ANOVA is based on the F statistic, which is similar to t in that it is a ratio of between-groups treatment effects to within-group variability. The test can be applied to independent groups or repeated measures designs.∗
The purpose of this chapter is to describe the application of the analysis of variance for a variety of experimental research designs. An introduction to the basic concepts underlying analysis of variance is most easily addressed in the context of a single-factor experiment (one independent variable) with independent groups. We then follow with discussions of more complex models, including factorial designs and repeated measures designs.
ANALYSIS OF VARIANCE FOR INDEPENDENT SAMPLES: ONE-WAY CLASSIFICATION
In a single-factor experiment, the one-way analysis of variance is applied when three or more independent group means are compared. The descriptor "one-way" indicates that the design involves one independent variable, or factor, with three or more levels.
Although the ANOVA can be applied to two-group comparisons, the t-test is generally considered more efficient for that purpose.†
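The F ratio described above can be computed by hand from the between-groups and within-group sums of squares. The following sketch illustrates that computation for three independent groups; the data values are hypothetical and chosen only for illustration.

```python
# Hypothetical scores for three independent groups (illustrative only).
groups = [
    [23.0, 25.0, 21.0, 24.0],   # group 1
    [30.0, 28.0, 31.0, 29.0],   # group 2
    [22.0, 20.0, 23.0, 21.0],   # group 3
]

k = len(groups)                       # number of groups (levels of the factor)
N = sum(len(g) for g in groups)       # total sample size
grand_mean = sum(sum(g) for g in groups) / N

# Between-groups sum of squares: squared deviations of each group mean
# from the grand mean, weighted by group size.
ss_between = sum(
    len(g) * ((sum(g) / len(g)) - grand_mean) ** 2 for g in groups
)

# Within-group sum of squares: squared deviations of each score from its
# own group mean, pooled across groups.
ss_within = sum(
    sum((x - (sum(g) / len(g))) ** 2 for x in g) for g in groups
)

ms_between = ss_between / (k - 1)     # between-groups mean square
ms_within = ss_within / (N - k)       # within-group (error) mean square

# F is the ratio of treatment variability to error variability.
F = ms_between / ms_within
print(f"F({k - 1}, {N - k}) = {F:.2f}")
```

A large F (relative to the critical value for k − 1 and N − k degrees of freedom) indicates that the group means differ by more than chance variation would predict.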
The null hypothesis for a one-way multilevel study states that there is no significant difference among the group means, represented by

H0: μ1 = μ2 = μ3 = ⋯ = μk
where k is the number of groups or levels of the independent variable. The alternative hypothesis (H1) states that at least two means will differ.
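In practice, this hypothesis test is usually run with statistical software rather than by hand. The sketch below assumes SciPy is available and uses its `f_oneway` routine on three hypothetical groups; the data and the choice of alpha = .05 are illustrative only.

```python
# Assumes SciPy is installed; f_oneway performs a one-way ANOVA on
# independent samples and returns the F statistic and p value.
from scipy.stats import f_oneway

# Hypothetical scores for three independent groups.
group1 = [23.0, 25.0, 21.0, 24.0]
group2 = [30.0, 28.0, 31.0, 29.0]
group3 = [22.0, 20.0, 23.0, 21.0]

result = f_oneway(group1, group2, group3)
print(f"F = {result.statistic:.2f}, p = {result.pvalue:.4f}")

# H0 (all group means equal) is rejected when p falls below alpha.
alpha = 0.05
if result.pvalue < alpha:
    print("Reject H0: at least two group means differ.")
else:
    print("Fail to reject H0.")
```

Note that a significant F indicates only that at least two means differ somewhere among the k groups; it does not identify which pairs differ.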