The conclusion of a hypothesis test can be right or wrong. Erroneous conclusions are classified as Type I or Type II errors.

A Type I error, or false positive, occurs when the null hypothesis is rejected even though it is actually true: there is no difference between the groups, contrary to the conclusion that a significant difference exists.

A Type II error, or false negative, occurs when the null hypothesis is accepted even though it is actually false: the conclusion that there is no difference is incorrect.

An oft-quoted example is the jury system, where the defendant is “innocent until proven guilty”
(H_{0} = “not guilty”, H_{A} = “guilty”). The jury’s decision that the defendant is not guilty (accept H_{0}) or guilty (reject H_{0})
may be either right or wrong. Convicting the guilty or acquitting the innocent are correct decisions. However, convicting an
innocent person is a Type I error, while acquitting a guilty person is a Type II error.

Though one type of error may sometimes be worse than the other, neither is desirable. Researchers and analysts limit the error rates by collecting more data or stronger evidence, and by establishing decision norms or standards.

A trade-off, however, is required: adjusting the norm to reduce Type I error increases Type II error, and vice versa. Expressed in terms of the probability of making an error, the standards are summarized in Exhibit 33.19:

| Decision | Truth: No Difference (H_{0} true) | Truth: Difference (H_{0} false) |
| --- | --- | --- |
| Accept H_{0} | 1 – α | β: Type II error |
| Reject H_{0} | α: Type I error | 1 – β: Power |
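The first row of the table can be verified by simulation. The sketch below (not from the text; sample sizes, trial count, and seed are arbitrary) repeatedly draws two groups from the *same* population, so H_{0} is true, and counts how often a two-sided 5%-level test nonetheless rejects — each such rejection is a Type I error:

```python
# Monte Carlo check: when H0 is true, a 5%-level two-sided test
# should reject about 5% of the time (each rejection = Type I error).
# A z-approximation is used; values shown are illustrative only.
import random
from statistics import NormalDist, mean, stdev

random.seed(42)
z_crit = NormalDist().inv_cdf(0.975)     # two-sided 5% rejection threshold
trials, n = 4000, 50
rejections = 0
for _ in range(trials):
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]   # same mean, so H0 is true
    se = (stdev(a) ** 2 / n + stdev(b) ** 2 / n) ** 0.5
    z = (mean(a) - mean(b)) / se
    if abs(z) > z_crit:
        rejections += 1                  # a false positive
rate = rejections / trials
print(rate)                              # close to alpha = 0.05
```

Lowering α (e.g., using `inv_cdf(0.995)` for a 1% level) would shrink this rejection rate, at the cost of more Type II errors when a real difference exists.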

*α*: The probability of making a Type I error, also referred to as the significance level, is
usually set at 0.05 or 5%, i.e., when H_{0} is true, it is wrongly rejected 5% of the time.

*β*: Probability of making a Type II error.

*1 – β*: Called *power*, this is the probability of correctly rejecting the null hypothesis, i.e.,
correctly concluding that a difference exists. Power usually relates to the objective of the study.

Power is dependent on three factors:

- *Type I error (α) or significance level*: Power decreases as the significance level decreases. The norm for quantitative studies is α = 5%.
- *Effect size (Δ)*: The magnitude of the “signal”, i.e., the amount of difference between the parameters of interest. This is specified in terms of standard deviations: Δ = 1 pertains to a difference of 1 standard deviation.
- *Sample size*: Power increases with sample size. While very small samples make statistical tests insensitive, very large samples make them overly sensitive. With excessively large samples, even very small effects can be statistically significant, which raises the issue of practical significance vs. statistical significance.

Power is usually set at 0.8 or 80%, which makes β (type II error) equal to 0.2.
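How power depends on the three factors can be sketched with the standard normal approximation for a two-sided, two-sample comparison of means (a rough sketch, not from the text; the function name and parameter values are illustrative, and the approximation slightly overstates the power of the corresponding t-test):

```python
# Normal-approximation power for a two-sided, two-sample test of means.
from statistics import NormalDist

def power_two_sample(delta, n_per_group, alpha=0.05):
    """Approximate power of detecting effect size `delta`
    (in standard deviations, i.e., Cohen's d) with `n_per_group`
    observations in each of two groups at significance level `alpha`."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)              # rejection threshold
    noncentrality = delta * (n_per_group / 2) ** 0.5
    return z.cdf(noncentrality - z_crit)           # P(reject | H0 false)

print(round(power_two_sample(0.5, 64), 2))             # ~0.81: near the 0.8 norm
print(round(power_two_sample(0.5, 64, alpha=0.01), 2)) # ~0.60: stricter alpha cuts power
```

The two printed values illustrate the trade-off discussed above: tightening α from 5% to 1% lowers power (raises β) unless the sample size or effect size grows.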

Since both α and power (or β) are typically set according to norms, the size of a sample is essentially
a function of the effect size, or the detectable difference. This is discussed further in Section
*Sample Size — Comparative Studies*, in Chapter *Sampling*.
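Under the usual norms (α = 5%, power = 80%), the required sample size can be sketched as a function of the detectable effect size. This is a hedged illustration using the textbook normal-approximation formula n = 2((z_{α/2} + z_{β})/Δ)² per group, not the method prescribed in the *Sampling* chapter; the function name is hypothetical:

```python
# Per-group sample size for a two-sided, two-sample comparison of means,
# via the normal approximation n = 2 * ((z_alpha/2 + z_beta) / delta)^2.
import math
from statistics import NormalDist

def n_per_group(delta, alpha=0.05, power=0.80):
    """Observations needed in each group to detect an effect of
    `delta` standard deviations at the given alpha and power."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = z.inv_cdf(power)            # power requirement
    return math.ceil(2 * ((z_alpha + z_beta) / delta) ** 2)

print(n_per_group(0.5))    # 63 per group for a half-SD difference
print(n_per_group(0.25))   # 252 per group: halving delta quadruples n
```

Note the inverse-square relationship: halving the detectable difference roughly quadruples the required sample, which is why small effects demand large studies.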
