# Close Enough?

## Understanding confidence intervals and how to use them more effectively

Statistics Spotlight: Confidence Intervals (2020)

by Matthew Barsalou

I once explained to a colleague the concept of confidence intervals around the mean of a sample. To simplify the concept, and for training purposes only, I told my colleague that I live 30 km from work. I don't believe I live exactly 30.000 km from work. I strongly believe the true answer is about 30 km, but I can't be 100% certain.

Even this simple model provided the opportunity to explain the concept of an operational definition because without one, it would be impossible to establish the distance between home and work. There are two possible routes—and “as the crow flies” would be less than either of the driving distances. In this case, I defined the true distance as, “The street address to street address distance for the route I drove today as given by a predesignated internet map.”

I explained to my colleague that I am 10% confident that I live 29-31 km away from work, 50% confident that I live 25-35 km away, 95% confident that I live 20-40 km away, and 99.9% confident that I live 20-50 km away. This demonstrated that more confidence results in a wider interval. I may be 99.9% confident that I live 20-50 km away from work, but despite the high confidence, that estimate is not precise enough to be of much use. The drive could take anywhere from about 20 minutes up to 50 minutes.

I drew the sketch depicted in Figure 1 to show this concept. Confidence intervals have little to do with my drive to work, but the simplified model made the concept more intuitive and easier to explain before moving to a more realistic example.

A more technical explanation of a confidence interval is: “The region containing the limits or band of a parameter with an associated confidence level that the bounds are large enough to contain the true parameter value.”^{1} A confidence interval tells you the uncertainty in a statistic.^{2} Reporting a confidence interval together with a sample statistic provides more information, such as when stating, “The sample mean is 17.584 with a 95% confidence interval from 17.114 to 18.054.” The mean sounds much more precise than the confidence interval indicates it is.

### Confidence intervals for means

Having explained the basic concept with an overly simplified model, let’s move to a more realistic example by explaining how to reduce the range of the confidence interval while maintaining the high level of confidence. A confidence interval for a one-sample Z test with a known standard deviation and sample size greater than 30 from a normally distributed population is calculated using the formula:

*x̄* ± *Z*_{α/2} * (σ ÷ √n).

Suppose you had a sample size of 30 with a mean of 50 and a standard deviation of 0.5. The confidence interval would be calculated as:

50 - 1.96 * (0.5 ÷ √30) = 49.82 and 50 + 1.96 * (0.5 ÷ √30) = 50.18,

resulting in a 95% confidence interval of 49.82 to 50.18. Increasing the sample size to 300 while holding the mean and standard deviation constant would result in a reduced range for the confidence interval:

50 - 1.96 * (0.5 ÷ √300) = 49.94 and 50 + 1.96 * (0.5 ÷ √300) = 50.06.

The new 95% confidence interval with an increased sample size is 49.94 to 50.06. As this example demonstrated, you can reduce the width of the confidence interval by increasing the sample size for a given alpha when the mean and standard deviation are held constant (see Figure 2).
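The two intervals above can be reproduced with a short script (a sketch using only the Python standard library; 1.96 is the two-sided Z critical value for 95% confidence):

```python
from math import sqrt

def z_confidence_interval(mean, sigma, n, z=1.96):
    """CI for the mean when sigma is known (z = 1.96 for 95% confidence)."""
    margin = z * (sigma / sqrt(n))
    return mean - margin, mean + margin

# Sample of 30: matches the 49.82 to 50.18 interval above
lo, hi = z_confidence_interval(50, 0.5, 30)
print(round(lo, 2), round(hi, 2))   # 49.82 50.18

# Sample of 300: the interval narrows to 49.94 to 50.06
lo, hi = z_confidence_interval(50, 0.5, 300)
print(round(lo, 2), round(hi, 2))   # 49.94 50.06
```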

If the sample size is 30 or less, the t-distribution is used, and the formula is:

*x̄* ± *t*_{α/2} * (*s* ÷ √*n*) with degrees of freedom equal to *n* - 1.
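As a sketch of the small-sample case, the following uses a made-up sample of 10 measurements and the critical value t_{α/2} = 2.262 for 9 degrees of freedom, taken from a t table (the data values are hypothetical):

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical sample of n = 10 measurements (illustrative values only)
data = [49.8, 50.1, 50.3, 49.9, 50.0, 50.2, 49.7, 50.1, 50.0, 49.9]
n = len(data)
x_bar = mean(data)
s = stdev(data)    # sample standard deviation
t_crit = 2.262     # t_{alpha/2} for alpha = 0.05, df = n - 1 = 9 (from a t table)
margin = t_crit * s / sqrt(n)
print(f"95% CI: {x_bar - margin:.3f} to {x_bar + margin:.3f}")
# 95% CI: 49.869 to 50.131
```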

### Confidence intervals for a proportion

The confidence interval for a proportion can be determined using the normal approximation when there are at least five occurrences and five nonoccurrences. The formula is:

*p̂* ± *Z*_{α/2} * √(*p̂*(1-*p̂*) ÷ *n*).
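A minimal sketch of this formula in Python, using hypothetical counts (12 nonconforming parts out of a sample of 150):

```python
from math import sqrt

def proportion_ci(successes, n, z=1.96):
    """Normal-approximation 95% CI for a proportion (valid when there are
    at least five occurrences and five nonoccurrences)."""
    p_hat = successes / n
    margin = z * sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - margin, p_hat + margin

# Hypothetical example: 12 nonconforming parts out of 150
lo, hi = proportion_ci(12, 150)
print(f"p-hat = {12/150:.3f}, 95% CI: {lo:.3f} to {hi:.3f}")
# p-hat = 0.080, 95% CI: 0.037 to 0.123
```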

### Confidence intervals for standard deviation

The confidence interval for a standard deviation is calculated with two formulas to ensure the lower confidence interval does not contain a negative number. The data must come from a population that is normally distributed. The formula for the lower confidence interval is:

√((*n* – 1)*s*^{2} ÷ χ^{2}(α/2))

and the upper confidence interval is:

√((*n* – 1)*s*^{2} ÷ χ^{2}(1 – α/2))
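Continuing the earlier example (n = 30, s = 0.5), these two formulas can be sketched in Python; the chi-square critical values for 29 degrees of freedom are taken from a table rather than computed:

```python
from math import sqrt

# Continuing the earlier example: n = 30, s = 0.5.
# Chi-square critical values for 29 degrees of freedom, from a table:
n, s = 30, 0.5
chi2_upper = 45.722   # chi-square_{alpha/2} for alpha = 0.05 (upper tail)
chi2_lower = 16.047   # chi-square_{1 - alpha/2}

lower = sqrt((n - 1) * s**2 / chi2_upper)
upper = sqrt((n - 1) * s**2 / chi2_lower)
print(f"95% CI for sigma: {lower:.3f} to {upper:.3f}")
# 95% CI for sigma: 0.398 to 0.672
```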

### Confidence intervals for capability studies

Confidence intervals can and should be calculated for capability indexes. The Automotive Industry Action Group recommends a minimum of 125 samples from subgroups of size five.^{3} In industry, there often are fewer samples available than required, but a C_{p} and C_{pk} still can be calculated and, even worse, reported to management or the customer.

Figure 3 shows the values used and the results of three capability studies with sample sizes of 30, 125 and 1,000, respectively. The study using 30 values has a C_{pk} of 1.19, and based on this value, the process could use some improvement. But the true story could be much worse.

Notice the confidence interval ranges from a nice 1.54 down to an awful 0.83. Simply reporting the C_{pk} value without a confidence interval may not provide an accurate picture of the situation. Even the study using 125 values has a confidence interval with a range from 1.00 up to 1.34. The sample using 1,000 values has a confidence interval of 1.2 to 1.33. There is still a range here, but it is significantly smaller than the range for the other confidence intervals.

After seeing a high C_{pk} value, you may mistakenly conclude the process is not in need of improvement without realizing the actual C_{pk} is much lower than you calculated due to a small sample size. Ideally, all capability studies will use sample sizes of 1,000 or greater, but that is unrealistic. Often, it can be a struggle just to get 125 values. Reporting capability indexes with a confidence interval shows how confident you can (or can’t) be in the value you’re reporting.

Confidence intervals for capability indexes can be calculated using software or by hand. The lower bound for a confidence interval for C_{p} is:

Ĉ_{p} * √(χ^{2}(1 – α/2, *v*) ÷ *v*)

and the upper bound for C_{p} is:

Ĉ_{p} * √(χ^{2}(α/2, *v*) ÷ *v*),

in which the degrees of freedom *v* for the pooled standard deviation are equal to:

∑ (*n*_{i} – 1)

and approximately:

0.9*k*(*n* – 1)

when using the Rbar method.
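As a rough sketch, the following computes a C_{p} confidence interval under assumed specification limits and an assumed pooled standard deviation (all numbers here are hypothetical; the chi-square critical values for v = 29 are taken from a table):

```python
from math import sqrt

# Hypothetical example: spec limits and pooled standard deviation are assumed,
# with v = 29 degrees of freedom (chi-square values taken from a table).
usl, lsl = 52.0, 48.0
s_pooled = 0.5
v = 29
chi2_lo = 16.047   # chi-square_{1 - alpha/2, v} for alpha = 0.05
chi2_hi = 45.722   # chi-square_{alpha/2, v}

cp_hat = (usl - lsl) / (6 * s_pooled)
cp_lower = cp_hat * sqrt(chi2_lo / v)
cp_upper = cp_hat * sqrt(chi2_hi / v)
print(f"Cp = {cp_hat:.2f}, 95% CI: {cp_lower:.2f} to {cp_upper:.2f}")
# Cp = 1.33, 95% CI: 0.99 to 1.67
```

The wide interval illustrates the article's point: a point estimate of 1.33 from a small sample is consistent with a true C_{p} anywhere from roughly 1 to 1.7.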

### How close are you?

A statistic often is used to reach a conclusion regarding a population based on the study of a sample. Using a confidence interval can help you judge how close the sample statistic comes to the actual population parameter with a degree of statistical confidence.

As the old saying goes, “It’s not what you don’t know that gets you into trouble, it’s what you know that just ain’t so that gets you into trouble.” Using confidence intervals helps you understand how well you know what you think you know.

### References

- Forrest W. Breyfogle, *Implementing Six Sigma: Smarter Solutions Using Statistical Methods*, second edition, John Wiley and Sons, 2003.
- Roger W. Hoerl and Ronald D. Snee, *Statistical Thinking: Improving Business Performance*, second edition, John Wiley and Sons, 2012.
- DaimlerChrysler Corp., Ford Motor Co. and General Motors Corp., *Statistical Process Control (SPC) Reference Manual*, second edition, 2005.

**Matthew Barsalou** is an extramural researcher at Poznan University of Technology in Poznan, Poland. He has a master’s degree in business administration and engineering from Wilhelm Büchner Hochschule in Darmstadt, Germany, and a master’s degree in liberal studies from Fort Hays State University in Hays, KS. Barsalou is an ASQ senior member and holds several certifications.
