How to reduce QC costs

At many clients, we see a large potential for QC cost reductions. Some of the main saving areas are shown below, including how to harvest them. If anything needs more clarification, do not hesitate to contact us.


Area 1: Measuring only the needed attributes.

Clients running the old validation concept, where validation is passed by making 3 batches that were all good (predicting the past), have realized that this is no guarantee that all future batches will be good (predicting the future). To compensate for this, a costly QC inspection has often been implemented in batch release.

By changing to a more modern validation concept, where it is proven with prediction intervals that all future batches will be good, the QC situation is completely different. If we have proved that all future batches will be good and we can maintain the validated state, there is no longer a need for a high level of QC testing.
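To make this concrete, here is a minimal sketch, assuming normally distributed data and using hypothetical validation measurements and specification limits, of checking whether a prediction interval for a future measurement lies within specifications:

```python
# Minimal sketch (hypothetical data and limits, assuming normality):
# a 95% prediction interval for one future measurement, checked against specs.
import numpy as np
from scipy import stats

validation_data = np.array([10.2, 9.8, 10.1, 10.0, 9.9, 10.3, 10.1, 9.7])  # hypothetical
lsl, usl = 9.0, 11.0  # hypothetical lower/upper specification limits

n = len(validation_data)
mean = validation_data.mean()
s = validation_data.std(ddof=1)

# mean +/- t(0.975, n-1) * s * sqrt(1 + 1/n)
t_crit = stats.t.ppf(0.975, df=n - 1)
half_width = t_crit * s * np.sqrt(1 + 1 / n)
pi_low, pi_high = mean - half_width, mean + half_width

print(f"95% prediction interval: [{pi_low:.2f}, {pi_high:.2f}]")
print("Within specifications:", lsl <= pi_low and pi_high <= usl)
```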

In addition, we often see clients measuring many correlated quality attributes in batch release. By using this correlation pattern, the number of attributes that need to be measured can be heavily reduced: many of them can be predicted from the others.
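As an illustration, the sketch below uses simulated data (the attributes and numbers are hypothetical) to show how a strongly correlated attribute could be predicted from one that is measured, with the residual spread indicating whether the prediction is good enough:

```python
# Sketch with simulated data: if attribute B tracks attribute A closely,
# B could be predicted from A instead of being measured at every release.
import numpy as np

rng = np.random.default_rng(0)
attr_a = rng.normal(50.0, 2.0, size=200)                 # attribute that is measured
attr_b = 1.8 * attr_a + rng.normal(0.0, 0.5, size=200)   # correlated attribute

slope, intercept = np.polyfit(attr_a, attr_b, deg=1)
residuals = attr_b - (slope * attr_a + intercept)
r = np.corrcoef(attr_a, attr_b)[0, 1]

print(f"correlation r = {r:.3f}")
print(f"residual SD   = {residuals.std(ddof=2):.2f}")
# If the residual SD is small relative to the tolerance on B,
# predicting B from A may be a defensible alternative to measuring it.
```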

Area 2: Ensure quality attributes have good performance.

Instead of using ISO standards like ISO 3951 for batch release, we strongly recommend proving with confidence that quality is acceptable, e.g. by showing that a prediction interval lies within specifications. Then the needed sample size n is inversely proportional to the square of the gap between true and acceptable performance: if you make this gap twice as big by improving performance, the sample size is reduced by a factor of 4. The sample size benefit of having good performance is much smaller under the ISO standards.
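A back-of-the-envelope sketch of that scaling, under the simplifying assumption that the sample size is driven by fitting a one-sided normal confidence margin inside the gap (values are illustrative only):

```python
# Back-of-the-envelope sketch: the confidence margin z * sigma / sqrt(n)
# must fit inside the gap between true and acceptable performance,
# so n is roughly (z * sigma / gap) ** 2.
from math import ceil

z = 1.645      # one-sided 95% confidence
sigma = 1.0    # assumed process standard deviation

def needed_n(gap: float) -> int:
    return ceil((z * sigma / gap) ** 2)

print(needed_n(0.5))  # 11
print(needed_n(1.0))  # 3  -> doubling the gap cuts n by roughly a factor of 4
```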

Area 3: Ensure tolerances are as wide as possible.

Often, we see clients releasing batches even though some measurements are outside specifications. This is a clear sign that tolerances are set too narrow. Many tolerances are set based on what a client was able to do at a given time, NOT what is needed. By doing a tolerance stack-up analysis, tolerances can be set based on need, which often widens them. This makes the gap between true and acceptable performance larger, leading to lower sample sizes as under Area 2. There is often a clear mismatch between the short time spent setting tolerances and the large effort spent later trying to live up to them.
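As an illustration of a tolerance stack-up, the sketch below compares a worst-case stack with a root-sum-square (RSS) stack for a set of hypothetical part tolerances; comparing the stack against the actual assembly requirement shows how much room there is to widen individual tolerances:

```python
# Sketch with hypothetical part tolerances: worst-case stack versus
# root-sum-square (RSS) stack for a gap built from several parts.
import math

part_tolerances = [0.10, 0.05, 0.08, 0.12]  # +/- tolerances of the individual parts

worst_case = sum(part_tolerances)
rss = math.sqrt(sum(t ** 2 for t in part_tolerances))

print(f"worst-case stack: +/-{worst_case:.3f}")
print(f"RSS stack:        +/-{rss:.3f}")
# Comparing the stack against the actual functional requirement on the assembly
# shows how much the individual part tolerances can be widened based on need.
```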

Area 4: Ensure measurement precision matches the tolerances.

If measurement precision is poor, the measured process width can be much wider than the true process width. The needed sample size is proportional to the width squared, so if the measurement system doubles the width, it makes the needed sample size 4 times larger. We often see validated measurement systems that double the width. On top of leading to larger sample sizes, this can also lead to Out of Specification results on good batches, causing unnecessary scrap and/or Non-Conformity handling.
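A small sketch of that arithmetic, with assumed values for the true process and measurement standard deviations:

```python
# Sketch with assumed values: the observed variance is the sum of the true
# process variance and the measurement variance, and the needed sample size
# grows with the square of the observed width.
import math

sigma_process = 1.0      # assumed true process standard deviation
sigma_measurement = 1.7  # assumed measurement standard deviation (a poor gauge)

sigma_observed = math.sqrt(sigma_process ** 2 + sigma_measurement ** 2)
width_ratio = sigma_observed / sigma_process

print(f"observed width / true width: {width_ratio:.2f}")        # ~2x
print(f"sample size inflation:       {width_ratio ** 2:.1f}x")  # ~4x
```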

Area 5: Data analysis of historical QC data and evaluation of sampling plans.

By analyzing historical QC data, the gap between measured and acceptable performance can be found, and thereby the sample size necessary to either:

1. prove with confidence that quality for each batch is acceptable

2. see that the measured performance is not contradictory to the model built on validation data

Option 1 is typically chosen when validation has been performed the old way (legacy products). Here it can be advantageous to pool within-batch variance across batches to minimize the sample size. This can easily be done using prediction intervals, as sketched below. If supplemented with sequential sampling (measure until you have enough), you get the most cost-efficient sampling scheme. The sample size rationale is built into the method when releasing with confidence.
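A sketch of that idea with hypothetical data: the within-batch variance is pooled across historical batches and then reused, with its larger degrees of freedom, in the prediction interval for a new batch released from a small sample.

```python
# Sketch with hypothetical data: pool within-batch variance across historical
# batches, then reuse the pooled estimate (and its larger degrees of freedom)
# in the prediction interval for a new batch released from a small sample.
import numpy as np
from scipy import stats

historical_batches = [
    np.array([10.1, 9.9, 10.0, 10.2, 9.8]),
    np.array([10.3, 10.1, 10.2, 10.0, 10.4]),
    np.array([9.9, 9.7, 10.0, 9.8, 10.1]),
]  # hypothetical within-batch measurements

dfs = [len(b) - 1 for b in historical_batches]
pooled_var = sum(df * b.var(ddof=1) for df, b in zip(dfs, historical_batches)) / sum(dfs)
pooled_s = np.sqrt(pooled_var)
pooled_df = sum(dfs)

new_batch = np.array([10.0, 10.2, 9.9])  # small release sample, hypothetical
m = len(new_batch)
t_crit = stats.t.ppf(0.975, df=pooled_df)
half_width = t_crit * pooled_s * np.sqrt(1 + 1 / m)
mean = new_batch.mean()

lsl, usl = 9.0, 11.0  # hypothetical specification limits
pi_low, pi_high = mean - half_width, mean + half_width
print(f"95% prediction interval: [{pi_low:.2f}, {pi_high:.2f}]")
print("Release with confidence:", lsl <= pi_low and pi_high <= usl)
```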

Option 2 is typically chosen in combination with the new validation approach, where all future batches are proven to be good. The rationale for the sample size can be based on having a good sampling-error precision ratio of e.g. 0.1.
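If the sampling-error precision ratio is read as the standard error of the batch mean relative to the process standard deviation (our assumption here, not a definition from the text), the required n follows directly:

```python
# Sketch: reading the sampling-error precision ratio as the standard error of
# the batch mean relative to the process standard deviation (an assumption),
#   SE / sigma = 1 / sqrt(n)  =>  n = 1 / ratio ** 2
from math import ceil

ratio = 0.1
print(ceil(1 / ratio ** 2))  # 100
```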

Area 6: Resource planning.

Finally, the QC workflow should be optimized. This can be done with an Operational Workflow Diagram (OFD), which is a proven method to ensure full alignment of all stakeholders. Understanding the laboratory workflow allows us to challenge critical processes and base our assessments and recommendations for optimization on factual data, capabilities and capacities.

Thanks for sharing. This is very useful.

JIGNESH PADIA

Strategic Transformational Projects & Programs | Data Governance | ERM | Lean - Six Sigma | Certified - DCAM, Prosci, CRM, CMQ/OE, Six Sigma Black Belt

6y

You need to learn about CTQs - what the client needs is what the specifications are. The job of a process specialist is to design a process to meet the client’s need.

Christina Christensen Møller

High skilled Quality Assurance Manager at Banedanmark

6y

You’re so right! If it doesn’t make sense, change it or delete it! So the plan must be: what do we want and how do we get there in the smoothest way?

Hans Læssøe

Take take - i.e., take chances intelligently

6y

Now we can take this further. The exact same approach can be applied to decision making, strategy development etc. There it is normally named strategic risk management rather than quality control … but "potatoes potatoes".

Carsten Lund

Business Owner at EpsilonPlus

6y

So very true! Too many concerns about sometimes non-existent risks, reluctance to change procedures, no time to review "historical" data. There is so much time to save here.
