
In the previous chapter, statistics were presented that can be used to summarize and describe data. Descriptive procedures are not sufficient, however, for testing theories about the effects of experimental treatments, for exploring relationships among variables, or for generalizing the behavior of samples to a population. For these purposes, researchers use a process of statistical inference.

Inferential statistics involve a decision-making process that allows us to estimate unknown population characteristics from sample data. The success of this process requires that we make certain assumptions about how well a sample represents the larger population. These assumptions are based on two important concepts of statistical reasoning: probability and sampling error. The purpose of this chapter is to demonstrate the application of these concepts for drawing valid conclusions from research data. These principles will be applied across several statistical procedures in future chapters.


Probability is a complex but essential concept for understanding inferential statistics. We all have some notion of what probability means, as evidenced by the use of terms such as “likely,” “probably,” or “a good chance.” We use probability as a means of prediction: “There is a 50% chance of rain tomorrow,” or “This operation has a 75% chance of success.”

Statistically, we can view probability as a system of rules for analyzing a complete set of possible outcomes. For instance, when flipping a coin, there are two possible outcomes. When tossing a die, there are six possible outcomes. An event is a single observable outcome, such as the appearance of tails or a 3 on the toss of a die.

Probability is the likelihood that any one event will occur, given all the possible outcomes.

We use a lowercase p to signify probability, expressed as a ratio or decimal. For example, the likelihood of getting tails on any single coin flip is 1 out of 2, or 1/2, or .5. Therefore, we say that the probability of getting tails is 50%, or p = .5. The probability that we will roll a 3 on one roll of a die is 1/6, or p = .167. Conversely, the probability that we will not roll a 3 is 5/6, or p = .833. These probabilities are based on the assumption that the coin or die is unbiased. Therefore, the outcomes are a matter of chance, representing random events.
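These chance probabilities can be illustrated with a short sketch in Python (the use of exact fractions and the 100,000-trial simulation are illustrative choices, not part of the text):

```python
import random
from fractions import Fraction

# Theoretical probabilities found by counting outcomes
p_tails = Fraction(1, 2)       # 1 favorable outcome out of 2 for a coin
p_three = Fraction(1, 6)       # 1 favorable outcome out of 6 for a die
p_not_three = 1 - p_three      # the complement: 5 out of 6

print(float(p_tails))                  # 0.5
print(round(float(p_three), 3))        # 0.167
print(round(float(p_not_three), 3))    # 0.833

# Empirical check: over many fair rolls, the observed proportion
# of 3s should approach the theoretical probability of 1/6
random.seed(0)
rolls = [random.randint(1, 6) for _ in range(100_000)]
print(rolls.count(3) / len(rolls))     # close to 1/6 (about .167)
```

The simulation reflects the chapter's assumption that the die is unbiased: each of the six outcomes is equally likely, so in the long run the relative frequency of any one face converges toward 1/6.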

For an event that is certain to occur, p = 1.00. For instance, if we toss a die, the probability of rolling a 3 or not rolling a 3 is 1.00 (p = .167 + .833). These two events are mutually exclusive and complementary events because they cannot occur together and they represent all possible outcomes. Therefore, the sum of their probabilities will always equal 1.00. We can also show that the probability of an impossible event is zero. For ...
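The complement rule and the certain and impossible cases can be sketched with a small helper function (the function name `prob` and the counting approach are illustrative assumptions, not notation from the text):

```python
from fractions import Fraction

def prob(event_outcomes, sample_space):
    """P(event) = favorable outcomes / all possible outcomes."""
    favorable = set(event_outcomes) & set(sample_space)
    return Fraction(len(favorable), len(sample_space))

die = range(1, 7)                           # the six faces of a die
p_three = prob([3], die)                    # 1/6
p_not_three = prob([1, 2, 4, 5, 6], die)    # 5/6

# Mutually exclusive, complementary events sum to 1 (certainty)
print(p_three + p_not_three)   # 1

# An impossible event: a six-sided die cannot show a 7
print(prob([7], die))          # 0
```

Because rolling a 3 and not rolling a 3 cannot occur together and together exhaust all six outcomes, their probabilities must sum to exactly 1.00, while an outcome that is not in the sample space at all has probability zero.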
