TRY THIS TODAY
A better approach to effectiveness monitoring
by Gilbert A. Cortes
Effectiveness monitoring (EM) is a critical step in the failure investigation (FI) process, and occurs after implementing specific actions to address all confirmed root causes of an issue or deviation.1
The purpose of EM is to provide objective evidence that the actions implemented have addressed the reported issue or deviation. FI practitioners, however, often struggle with the strategy for an EM plan.
A common approach for conducting an EM plan is to select a timeframe (for example, six months) to monitor the process after preventive actions have been implemented. This assumes the duration of the selected timeframe is sufficient to indicate whether the process changes will perform as intended. If no issues are detected during this timeframe, the practitioner can close the FI.
This method, however, has several inherent weaknesses: it may fail to reveal ineffective preventive actions, or it may unnecessarily extend the EM period, both of which can negatively affect the customer and the organization. Furthermore, this approach cannot conclude with any stated level of confidence that the EM meets the predetermined criteria.
Sample size method
The better approach for conducting an EM plan is to determine the appropriate criteria for the preventive actions associated with the root causes of the issue or deviation, and establish a sample size based on the potential customer impact.
To calculate a statistically valid sample size, the practitioner needs certain information related to the process. For EM sample size requirements, see Online Table 1.
With this information, the practitioner can use a statistical analysis tool, such as Minitab,2 to calculate a statistically valid sample size representative of the process and determine the appropriate confidence statement consistent with the potential customer impact.
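The exact sample size depends on the plan type and on the quality and risk levels in Online Table 1, which are not reproduced here. As a simpler, hedged illustration of the underlying idea, a zero-failure attributes (pass/fail) plan can be sized directly from the rejectable quality limit (RQL) and the desired confidence. The parameter values below are assumptions for illustration, not the article's inputs:

```python
import math

def zero_failure_sample_size(rql: float, confidence: float) -> int:
    """Smallest n such that observing zero nonconforming units in n
    samples demonstrates, at the given confidence level, that the
    process percentage nonconforming is below the RQL.
    Derived from requiring (1 - rql)**n <= 1 - confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - rql))

# Assumed example values (not from Online Table 1):
# demonstrate <0.4% nonconforming with 95% confidence.
n = zero_failure_sample_size(rql=0.004, confidence=0.95)
print(n)  # 748
```

Note that this attributes plan requires far more samples than the article's variables plan (n = 233) because pass/fail data carry less information per unit than measured values, which is one reason a variables plan computed in a tool such as Minitab is often preferred.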
From the data in Online Table 1, a sample size of 233 is derived with a critical distance (k-value) of 2.90399. See Online Figure 1.
After 233 samples are collected and evaluated, a statistical analysis tool is used to perform an acceptance test using the k-value to make an accept-or-reject decision. Assuming the results pass, the practitioner would conclude, "Based on the results, we are 95% confident (1-beta) that the process percentage nonconforming is less than 0.4% (the rejectable quality limit)."
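For a variables plan with a single upper specification limit, the accept-or-reject decision reduces to checking whether the sample mean sits at least k sample standard deviations inside the limit. The sketch below uses the article's k-value, but the specification limit and measurement data are invented for illustration:

```python
import statistics

def variables_accept(measurements: list[float], usl: float, k: float) -> bool:
    """Variables acceptance test against an upper spec limit (USL):
    accept if the standardized distance from the sample mean to the
    USL is at least the critical distance k."""
    xbar = statistics.mean(measurements)
    s = statistics.stdev(measurements)
    return (usl - xbar) / s >= k

# k from the article's Online Figure 1; data are hypothetical.
k = 2.90399
data = [9.1, 9.3, 8.9, 9.0, 9.2, 9.1, 8.8, 9.0]
print(variables_accept(data, usl=10.0, k=k))  # True -> accept
```

A lower specification limit would use the mirrored check, (xbar - lsl) / s >= k, and two-sided limits require both conditions to hold.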
This approach also can be used to determine the level of confidence, or statistical significance, if a certain sample size is used. Using the same data in Online Table 1, for example, a sample size of 100 will only provide an 85.7% level of confidence, which is below the required level of statistical significance.
Advantages of the sample size method versus selecting an arbitrary timeframe include:
- Sample size is statistically valid.
- Sample is representative across all known sources of process variation.
- Confidence statement can be stated for EM performance.
- Potential customer impact is considered.
The sample size method minimizes the risk of not having enough data to properly conclude results in cases in which the process being monitored is producing limited units over time. Conversely, this approach also minimizes costs when large numbers of units are produced and resources are wasted monitoring and conducting testing beyond what is required. Ultimately, this approach reduces the risk to the customer and the cost of quality to the organization.
Reference and Note
1. International Organization for Standardization, ISO 9001:2008—Quality management systems—Requirements.
2. For more information about Minitab software, visit www.minitab.com.
Gilbert A. Cortes is a quality engineer director at Johnson & Johnson in Miami. He earned a master of science degree from the University of Miami and an MBA from Florida International University in Miami. Cortes is an ASQ member and an ASQ-certified quality auditor and quality engineer.