Statistical Methods in Quality Improvement


Quality Glossary Definition: Statistics

Statistics is defined as a field that involves tabulating, depicting, and describing data sets.

Statistical methods in quality improvement are defined as the use of collected data and quality standards to find new ways to improve products and services. They are a formalized body of techniques characteristically involving attempts to infer the properties of a large collection of data from inspection of a sample drawn from it.

The use of statistical methods in quality improvement takes many forms, including:

Hypothesis Testing

Two hypotheses are evaluated: a null hypothesis (H0) and an alternative hypothesis (H1). The null hypothesis is a “straw man” used in a statistical test. The conclusion is to either reject or fail to reject the null hypothesis.
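
As a minimal sketch of this logic in Python (SciPy is assumed to be available; the fill-weight data and the target mean of 50 g are invented for illustration), a one-sample t-test either rejects or fails to reject the null hypothesis that the process mean equals its target:

    import scipy.stats as stats

    # Hypothetical fill weights (grams) from a filling process; the target mean is 50 g.
    sample = [49.8, 50.1, 49.7, 50.3, 49.9, 50.0, 49.6, 50.2]

    # H0: the process mean equals 50 g; H1: it does not (two-sided test).
    t_stat, p_value = stats.ttest_1samp(sample, popmean=50.0)

    alpha = 0.05  # chosen significance level
    if p_value < alpha:
        print(f"Reject H0 (t = {t_stat:.2f}, p = {p_value:.3f})")
    else:
        print(f"Fail to reject H0 (t = {t_stat:.2f}, p = {p_value:.3f})")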

Regression Analysis

Determines a mathematical expression describing the functional relationship between one response and one or more independent variables.
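
For illustration only (the temperature and yield values below are fabricated, and SciPy is assumed), a simple linear regression estimates such an expression for one independent variable:

    from scipy import stats

    # Hypothetical process data: oven temperature (deg C) and measured yield (%).
    temperature = [160, 165, 170, 175, 180, 185, 190]
    yield_pct = [72.1, 74.0, 75.8, 77.5, 79.9, 81.2, 83.0]

    # Fit yield = intercept + slope * temperature by least squares.
    fit = stats.linregress(temperature, yield_pct)
    print(f"yield ~ {fit.intercept:.2f} + {fit.slope:.3f} * temperature")
    print(f"R^2 = {fit.rvalue**2:.3f}, p-value for the slope = {fit.pvalue:.4f}")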

Statistical Process Control (SPC)

Monitors, controls and improves processes through statistical techniques. SPC identifies when processes are out of control due to special cause variation (variation caused by special circumstances, not inherent to the process). Practitioners may then seek ways to remove that variation from the process.
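
A minimal sketch of the idea behind one common SPC tool, an individuals control chart (the measurements are made up and NumPy is assumed): points falling outside limits derived from the process's own short-term variation signal possible special cause variation.

    import numpy as np

    # Hypothetical individual measurements taken in time order.
    x = np.array([10.2, 10.1, 10.3, 10.0, 10.2, 10.4, 10.1, 12.0, 10.2, 10.3])

    # Estimate short-term variation from the average moving range of successive points.
    moving_range = np.abs(np.diff(x))
    center = x.mean()
    sigma_est = moving_range.mean() / 1.128   # d2 = 1.128 for subgroups of size 2
    ucl = center + 3 * sigma_est              # upper control limit
    lcl = center - 3 * sigma_est              # lower control limit

    # Flag points outside the control limits as possible special cause variation.
    for i, value in enumerate(x):
        if value > ucl or value < lcl:
            print(f"Point {i} ({value}) is outside the control limits; investigate")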

Design and Analysis of Experiments

Planning, conducting, analyzing and interpreting controlled tests to evaluate the factors that may influence a response variable.
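
As a minimal sketch (the factors, levels, and responses are hypothetical, and NumPy is assumed), a two-level factorial experiment estimates each factor's main effect by contrasting the average response at its high and low settings:

    import itertools
    import numpy as np

    # Hypothetical 2^2 factorial design: temperature and pressure, each at low (-1) and high (+1).
    runs = list(itertools.product([-1, 1], repeat=2))   # (-1,-1), (-1,1), (1,-1), (1,1)
    response = np.array([61.0, 64.0, 70.0, 76.0])       # invented yields, one per run

    design = np.array(runs)
    for name, column in zip(["temperature", "pressure"], design.T):
        # Main effect = mean response at the high level minus mean at the low level.
        effect = response[column == 1].mean() - response[column == -1].mean()
        print(f"Main effect of {name}: {effect:+.1f}")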

The practice of employing a small, representative sample to make an inference of a wider population originated in the early part of the 20th century. William S. Gosset, more commonly known by his pseudonym “Student,” was required to take small samples from a brewing process to understand particular quality characteristics. The statistical approach he derived (now called a one-sample t-test) was subsequently built upon by R. A. Fisher and others.

Jerzy Neyman and E. S. Pearson developed a more complete mathematical framework for hypothesis testing in the 1920s. This framework introduced concepts now familiar to statisticians, such as the following (a brief numerical sketch appears after the list):

  • Type I error—incorrectly rejecting the null hypothesis.
  • Type II error—incorrectly failing to reject the null hypothesis.
  • Statistical power—the probability of correctly rejecting the null hypothesis.
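
A brief numerical sketch of these ideas (the significance level, effect size, and sample size are arbitrary, and SciPy is assumed): for a one-sided z-test, the power follows directly from the normal distribution.

    from scipy.stats import norm

    alpha = 0.05    # Type I error rate: probability of rejecting a true null hypothesis
    effect = 0.5    # assumed true shift of the mean, in standard-deviation units
    n = 25          # sample size

    # Critical value of the test statistic for a one-sided test at level alpha.
    z_crit = norm.ppf(1 - alpha)

    # Power: probability the statistic exceeds the critical value when H1 is true.
    power = 1 - norm.cdf(z_crit - effect * n**0.5)
    beta = 1 - power    # Type II error rate: failing to reject a false null hypothesis

    print(f"power = {power:.3f}, Type II error rate = {beta:.3f}")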

Fisher’s analysis of variance (ANOVA) procedure provides the statistical engine behind many analyses, such as Gage Repeatability and Reproducibility studies and other designed experiments. ANOVA has proven to be a very helpful tool for determining how much of the observed variation can be attributed to the factors under consideration.
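
For illustration only (the three operator samples are fabricated and SciPy is assumed), a one-way ANOVA asks whether the variation between groups is large relative to the variation within them:

    from scipy.stats import f_oneway

    # Hypothetical measurements of the same part made by three different operators.
    operator_a = [5.1, 5.0, 5.2, 5.1, 5.0]
    operator_b = [5.3, 5.4, 5.2, 5.5, 5.3]
    operator_c = [5.0, 5.1, 5.0, 4.9, 5.1]

    # H0: all three operators produce the same mean measurement.
    f_stat, p_value = f_oneway(operator_a, operator_b, operator_c)
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")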

W. Edwards Deming and others have criticized the indiscriminate use of statistical inference procedures, noting that erroneous conclusions may be drawn unless one is sampling from a stable system. Whether a study is enumerative (describing a fixed population) or analytic (predicting the behavior of an ongoing process) should therefore be a key concern when reviewing data.

Statistics Articles

Ford Team Project Builds Relationships, Improves Quality (PDF) 
When pre-launch reviews of the Ford Fiesta pointed to concerns about the quality of the vehicle's carpet, a team of Six Sigma Black Belts turned to design of experiments (DOE) to improve the carpet manufacturing process. The team redesigned the process in just two weeks, improving carpet quality and strengthening Ford's relationship with the supplier. 

Improving an Unstable Process (PDF)
By employing repeated small changes, evolutionary operations (EVOP) compels a process to produce information about itself, not just the product and monitoring data.

When to Use Fisher’s Exact Test (PDF)
Traditional methods used to assess differences between items such as operators or machines may produce misleading results if the number of observations obtained for analysis is small. R. A. Fisher’s exact test may provide a more appropriate analysis (a small numerical sketch appears after the article summaries below).

Teaching the Role of SPC in Industrial Statistics (PDF)
Conducting an analytical study on a process lacking statistical control is risky because the cause system becomes predictable only after it has been reduced to common causes.
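
A small numerical sketch of the exact-test idea mentioned above (the 2x2 counts are made up and SciPy is assumed): with only a handful of defectives per machine, Fisher's exact test avoids the large-sample approximation behind the usual chi-squared test.

    from scipy.stats import fisher_exact

    # Hypothetical counts of defective vs. conforming parts from two machines.
    table = [[3, 17],   # machine A: 3 defective, 17 conforming
             [9, 11]]   # machine B: 9 defective, 11 conforming

    odds_ratio, p_value = fisher_exact(table)
    print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")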
