SAI Member-in-Training (MIT) National Practice Exam


Question 1 of 20

What is a Type I vs Type II error?

Type I is false positive; Type II is false negative.

In hypothesis testing, you decide whether to reject the null hypothesis, and there are two distinct errors you can make. A Type I error happens when you reject a true null hypothesis — a false alarm. A Type II error occurs when you fail to reject a false null hypothesis — you miss a real effect. In short: a Type I error is a false positive, while a Type II error is a false negative.

This distinction matters in practice. The chance of making a Type I error is controlled by the significance level you choose (alpha). The chance of a Type II error depends on the true effect size, the sample size, and the study design; power (1 minus beta) reflects how likely you are to detect a real effect. If you tighten alpha to reduce false positives, you can increase the risk of missing real effects unless you increase the sample size or improve the study design. These ideas come from frequentist hypothesis testing, and while related decision concepts exist in Bayesian contexts, the explicit Type I/II labeling is a hallmark of the traditional framework.

For example, testing a drug with no real effect but concluding it works is a Type I error; testing a drug that truly works but concluding it doesn’t is a Type II error.
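These rates can be checked empirically. The sketch below (a minimal Monte Carlo simulation, not part of the exam material) repeatedly runs a two-sided z-test with known variance: under a true null, the fraction of rejections approximates the Type I rate (alpha); under a real effect, the fraction of non-rejections approximates the Type II rate (beta). The effect size 0.3 and sample size 30 are illustrative choices.

```python
import random
import statistics

random.seed(0)

def z_test_rejects(sample, mu0=0.0, sigma=1.0):
    """Two-sided z-test with known sigma; reject at alpha = 0.05."""
    n = len(sample)
    z = (statistics.mean(sample) - mu0) / (sigma / n ** 0.5)
    return abs(z) > 1.959964  # critical value for alpha = 0.05

trials, n = 10_000, 30

# Null is TRUE (mean 0): any rejection here is a Type I error.
type1 = sum(z_test_rejects([random.gauss(0.0, 1.0) for _ in range(n)])
            for _ in range(trials)) / trials

# Null is FALSE (true mean 0.3): failing to reject is a Type II error.
type2 = sum(not z_test_rejects([random.gauss(0.3, 1.0) for _ in range(n)])
            for _ in range(trials)) / trials

print(f"Type I rate  ~ {type1:.3f} (should hover near alpha = 0.05)")
print(f"Type II rate ~ {type2:.3f}; estimated power ~ {1 - type2:.3f}")
```

Raising n or the true effect size in the second simulation drives the Type II rate down (power up), while the Type I rate stays pinned near alpha — exactly the trade-off described above.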

Type I is false negative; Type II is false positive.

They are the same error.

They occur only in Bayesian tests.
