One distinction that is easy to confuse is that between Type I and Type II errors.
A Type I error is when we reject the null hypothesis when in fact there was no effect (the null hypothesis always states that there is no relationship or no effect). That is, we conclude we found something when we actually did not.
A Type II error is when we fail to reject the null hypothesis when an effect actually exists. That is, we should reject the null hypothesis, but we don’t, and we miss an effect that is really there.
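In symbols, writing H0 for the null hypothesis:

P(Type I error) = P(reject H0 | H0 is true) = alpha
P(Type II error) = P(fail to reject H0 | H0 is false) = beta
Power = P(reject H0 | H0 is false) = 1 - beta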
The table below is helpful for seeing the two errors:

                          H0 is true               H0 is false
  Reject H0               Type I error (alpha)     Correct decision
  Fail to reject H0       Correct decision         Type II error (beta)
We can directly affect the Type I error rate by changing the alpha level. That is, we can reduce our chances of making a Type I error by reducing our alpha level (thus, we will be more conservative with our decisions to reject the null hypothesis).
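To see this concretely, here is a minimal simulation sketch (assuming NumPy and SciPy are available, and using a two-sample t-test purely as an illustration): when the null hypothesis is true, the fraction of experiments that falsely reject should track whatever alpha we choose.

```python
# A minimal sketch: simulate many experiments where H0 is TRUE (both groups
# drawn from the same distribution) and count how often a t-test rejects
# anyway. The observed false-rejection rate should sit near alpha.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 10_000
n_per_group = 30

# Both groups come from the same normal distribution, so H0 is true.
p_values = np.array([
    stats.ttest_ind(rng.normal(0, 1, n_per_group),
                    rng.normal(0, 1, n_per_group)).pvalue
    for _ in range(n_experiments)
])

for alpha in (0.10, 0.05, 0.01):
    type_i_rate = np.mean(p_values < alpha)  # fraction of false rejections
    print(f"alpha = {alpha:.2f}: observed Type I error rate ~ {type_i_rate:.3f}")
```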
We can also indirectly affect the Type II error rate by changing alpha: raising alpha reduces the chance of a Type II error (at the cost of more Type I errors). Type II errors are also affected by sample size (larger samples reduce Type II error) and by effect size (the larger the true effect, the easier it is to detect).
To summarize, Type II error is affected by (see the sketch after this list):
1) Alpha level – raising alpha lowers the chance of making a Type II error but raises the chance of making a Type I error
2) Sample size – a larger sample gives us more of a chance (or more power) to detect an effect, thus reducing our chances of a Type II error
3) Effect size – a larger effect size also lowers the chances of making a Type II error. That is, the more different things really are, the easier it is for us to find that difference.
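Here is a companion sketch (same assumed NumPy/SciPy setup and illustrative two-sample t-test as above) that makes the null hypothesis false by a known amount and counts how often we miss it, varying effect size and sample size at a fixed alpha.

```python
# A minimal sketch: here H0 is FALSE (the second group's true mean is shifted
# by `effect`), and a "miss" is any experiment where we fail to reject at
# alpha = 0.05. The miss rate is an estimate of beta, the Type II error rate.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha = 0.05
n_experiments = 5_000

for effect in (0.2, 0.5, 0.8):          # true mean difference, in SD units
    for n in (20, 50, 100):             # per-group sample size
        misses = 0
        for _ in range(n_experiments):
            a = rng.normal(0, 1, n)
            b = rng.normal(effect, 1, n)
            if stats.ttest_ind(a, b).pvalue >= alpha:
                misses += 1             # failed to reject a false null
        beta = misses / n_experiments   # estimated Type II error rate
        print(f"effect = {effect}, n = {n}: beta ~ {beta:.3f}, power ~ {1 - beta:.3f}")
```

You should see beta shrink (and power grow) as either the sample size or the effect size increases, which is exactly the pattern in the list above.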
This site has a great visual display of how Type II error is affected by effect size.