
INTRODUCTION

Many statistical procedures, such as the t-test, analysis of variance, and linear regression, are based on assumptions about homogeneity of variance and normality that should be met to ensure the validity of the test. Although most parametric statistical procedures are considered robust to moderate violations of these assumptions, striking departures usually require some modification of the analysis. When this occurs, the researcher can choose one of two approaches. The analytic procedure can be modified, by using nonparametric statistics or nonlinear regression, or the dependent variable, X, can be transformed to a new variable, X', which more closely satisfies the necessary assumptions. The new variable is created by changing the scale of measurement for X. In this appendix we introduce five approaches to data transformation.

The three most common reasons for using data transformation are to satisfy the assumption of homogeneity of variance, to conform data to a normal distribution, and to create a more linear distribution that will fit the linear regression model. Fortunately, the same transformation will often accomplish more than one of these goals.1

The most commonly used transformations are the square root transformation, the square transformation, the log transformation, the reciprocal transformation, and the arc sine transformation. The choice of which method to use will depend on characteristics of the data. Before we describe the guidelines for using each of these approaches, it may be helpful to illustrate the transformation process using the square root transformation.
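To make these options concrete, the short sketch below (not part of the original appendix) applies each of the five transformations to a small set of hypothetical scores using Python and NumPy. The data values, variable names, and the choice of the common logarithm are illustrative assumptions, and the arc sine transformation is shown in its usual form for proportions, X' = arcsin(√X).

```python
import numpy as np

# Hypothetical raw scores (illustrative only; not data from this appendix)
x = np.array([1.0, 4.0, 9.0, 16.0, 25.0])
p = np.array([0.10, 0.25, 0.50, 0.75, 0.90])  # proportions, for the arc sine case

sqrt_x       = np.sqrt(x)             # square root transformation: X' = sqrt(X)
square_x     = x ** 2                 # square transformation:      X' = X^2
log_x        = np.log10(x)            # log transformation (common log shown; natural log is also used)
reciprocal_x = 1.0 / x                # reciprocal transformation:  X' = 1/X
arcsine_p    = np.arcsin(np.sqrt(p))  # arc sine transformation, applied to proportions
```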

The square root transformation, X' = √X, replaces each score in a distribution with its square root. This method is most appropriate when variances are roughly proportional to group means, that is, when s²/X̄ is similar for all samples. The square root transformation will typically have the effect of equalizing variances.
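A minimal sketch of this check, assuming two hypothetical samples (not the Table D.1 data): compute s²/X̄ for each group and compare the two ratios.

```python
import numpy as np

def variance_to_mean_ratio(sample):
    """Return s²/X̄ for one sample (s² uses n - 1 in the denominator)."""
    return np.var(sample, ddof=1) / np.mean(sample)

# Hypothetical samples, invented for illustration
group_a = np.array([2.0, 4.0, 5.0, 7.0, 9.0])
group_b = np.array([6.0, 11.0, 15.0, 21.0, 28.0])

ratio_a = variance_to_mean_ratio(group_a)
ratio_b = variance_to_mean_ratio(group_b)
print(ratio_a, ratio_b)  # roughly similar ratios suggest the square root transformation is appropriate
```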

Suppose we were given the two sample distributions shown in the left panel of Table D.1. Their variances, s²A = 8.5 and s²B = 26.5, are obviously quite different from one another. We determine the applicability of the square root transformation by demonstrating that s²/X̄ is similar for both distributions.

TABLE D.1  EFFECT OF SQUARE ROOT TRANSFORMATION

Each score in both distributions is transformed to its square root on the right side of Table D.1. As we can see, the effect of this transformation is a reduction in the discrepancy between the two variances; ...
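As a numerical illustration of this equalizing effect, the sketch below reuses the hypothetical samples from the previous sketch and compares each group's variance before and after taking square roots; the data are invented for demonstration and are not the Table D.1 scores.

```python
import numpy as np

group_a = np.array([2.0, 4.0, 5.0, 7.0, 9.0])
group_b = np.array([6.0, 11.0, 15.0, 21.0, 28.0])

for label, sample in (("A", group_a), ("B", group_b)):
    raw_var = np.var(sample, ddof=1)                    # variance of the original scores
    transformed_var = np.var(np.sqrt(sample), ddof=1)   # variance after X' = sqrt(X)
    print(f"Group {label}: s² = {raw_var:.2f}, transformed s² = {transformed_var:.2f}")
```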
