The conclusion of the hypothesis test can be right or wrong. Erroneous conclusions are classified as Type I or Type II.
A Type I error, or false positive, occurs when the null hypothesis is rejected even though it is actually true: the conclusion that a significant difference exists is wrong, because there is no difference between the groups.
A Type II error, or false negative, occurs when the null hypothesis is accepted even though it is actually false: the conclusion that there is no difference is incorrect.
An oft-quoted example is the jury system, where the defendant is "innocent until proven guilty" (H0 = "not guilty", HA = "guilty"). The jury's decision that the defendant is not guilty (accept H0) or guilty (reject H0) may be either right or wrong. Convicting the guilty or acquitting the innocent are correct decisions. However, convicting an innocent person is a Type I error, while acquitting a guilty person is a Type II error.
Though one type of error may sometimes be worse than the other, neither is desirable. Researchers and analysts contain the error rates by collecting more data or greater evidence, and by establishing decision norms or standards.
A trade-off, however, is required because adjusting the norm to reduce Type I error increases Type II error, and vice versa. Expressed in terms of the probability of making an error, the standards are summarized in Exhibit 33.20:
| |No Difference (H0 true)|Difference (H0 false)|
|---|---|---|
|Accept H0|1 − α: Correct|β: Type II error|
|Reject H0|α: Type I error|1 − β: Power|
α: Probability of making a Type I error, also referred to as the significance level, is usually set at 0.05 or 5%, i.e., when the null hypothesis is true, it is wrongly rejected 5% of the time.
β: Probability of making a Type II error.
1 − β: Called power, is the probability of correctly rejecting the null hypothesis, i.e., of correctly concluding that a difference exists. This usually relates to the objective of the study.
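These error rates can be checked empirically. The following sketch (hypothetical group sizes and effect, using `scipy`) simulates many two-sample t-tests: first with no real difference between groups, where the rejection rate should be close to α, and then with a genuine difference, where the rejection rate estimates power.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
alpha = 0.05       # significance level (Type I error norm)
n, trials = 50, 2000

# Case 1: H0 is true -- both groups come from the same distribution.
# The fraction of rejections estimates the Type I error rate.
false_pos = sum(
    ttest_ind(rng.normal(0, 1, n), rng.normal(0, 1, n)).pvalue < alpha
    for _ in range(trials)
)

# Case 2: H0 is false -- the groups differ by half a standard deviation.
# The fraction of rejections estimates power (1 - beta).
true_pos = sum(
    ttest_ind(rng.normal(0, 1, n), rng.normal(0.5, 1, n)).pvalue < alpha
    for _ in range(trials)
)

print(f"Type I error rate: {false_pos / trials:.3f}")  # close to alpha
print(f"Power (1 - beta):  {true_pos / trials:.3f}")
```

With these illustrative settings, the simulated Type I error rate hovers near 0.05, while power falls short of the 0.8 norm, showing that 50 observations per group are not enough to reliably detect a difference of this size.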
Power is dependent on three factors: the significance level (α), the sample size, and the effect size, i.e., the magnitude of the difference to be detected.
Power is usually set at 0.8 or 80%, which makes β (Type II error) equal to 0.2.
Since both α and power (or β) are typically set according to norms, the size of a sample is essentially a function of the effect size, or the detectable difference. This is discussed further in Section Sample Size — Comparative Studies, in Chapter Sampling.
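The dependence of sample size on effect size can be sketched with the standard normal approximation for a two-sided, two-sample comparison, n ≈ 2((z_{α/2} + z_β)/d)², where d is the standardized effect size. This is a rough planning formula, not the exact t-test calculation; the function name below is illustrative.

```python
import math
from scipy.stats import norm

def sample_size_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate n per group for a two-sided two-sample test,
    using the normal approximation: n = 2 * ((z_a/2 + z_b) / d)^2."""
    z_alpha = norm.ppf(1 - alpha / 2)   # about 1.96 for alpha = 0.05
    z_beta = norm.ppf(power)            # about 0.84 for power = 0.80
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# With the conventional norms (alpha = 0.05, power = 0.80),
# a medium effect of d = 0.5 needs roughly 63 per group:
print(sample_size_per_group(0.5))   # → 63
```

Note how the required sample size shrinks as the detectable difference grows: a large effect (d = 0.8) needs only about 25 per group, while a small effect (d = 0.2) needs several hundred.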