Quality Metrics and their Reporting


Organizations need quality metrics; however, the question is: what data quality metrics should an organization monitor, and how should these metrics be tracked and reported?


This article provides a methodology to answer these questions, “smartly.”


Dictionary.com defines quality as an essential or distinctive characteristic, property, or attribute. This article provides the details for selecting and monitoring data quality metrics in an organization so that the enterprise as a whole benefits from this tracking and reporting.


Characteristics of a Good Metric and its Reporting

Data quality metrics should have the features of a good metric.


We have all heard the clichés:


  • Garbage in, garbage out.
  • You get what you measure.
  • You cannot improve what you can’t measure.
  • If you don’t measure it, it’s just a hobby.
  • What you measure is what you get.
  • If you don’t measure it, you cannot manage it.
  • Let me know how I will be measured, and I’ll tell you how I will perform.



These clichés are true! Measurements should serve as the eyes of a process, stimulating the most appropriate behavior. Measures need to provide an unbiased assessment of process performance. There is little hope for improvement when process output performance is not seen accurately and reported relative to the desired result. Generic measures for any process are quality, cost, and delivery. Most operations need a balanced measurement set to prevent optimizing one metric at the expense of overall process health. Metrics can also drive the wrong behavior when set without regard to overall enterprise needs. (Wells Fargo's creation of fake accounts some time ago is an example of metric goal setting leading to bad behaviors.) Adding a people measure, when appropriate, ensures a balance between task and people management.


As an illustration, consider the most recent customer satisfaction survey form you received. Do you think a summary of responses from this survey accurately assesses what you experienced in your purchase process? I suspect your answer is no. Surveys are often worded so that the responses will look satisfactory but provide little insight into what actually happens in a process.


Writing an effective survey and then evaluating the responses is not easy. What we would like to receive from a survey is an honest picture of what is currently happening in the process, along with direction for improvement. A comment section in a hotel guest survey, for example, might give insight into a specific actionable issue or improvement possibility.


Good metrics provide decision-making insight that leads to the most appropriate conclusion and action or non-action. The objective is to create a measurable, auditable, sustainable, and consistent entity. Effective and reliable metrics should provide (Breyfogle 2008):


  • Business alignment
  • Honest assessment
  • Consistency
  • Repeatability and reproducibility
  • Actionability
  • Time-series tracking
  • Predictability
  • Peer comparability



Metric utilization requires commitment and resource allotments; hence, it is essential to do it right. Organizations must avoid measurement design and usage errors when striving to become more metric-driven. Common mistakes include:


  • Creating metrics for the sake of metrics. Lloyd S. Nelson, director of Statistical Methods for the Nashua Corporation, stated: “The most important figures needed for management of any organization are unknown or unknowable” (Deming 1986).
  • Formulating too many metrics, which results in no action
  • Lacking metric follow-up
  • Describing metrics that do not result in the intended action
  • Creating metrics that are subject to emotional manipulation



If not exercised effectively, metrics can become a dark force that absorbs good energy: a black hole where good resources are lost.


Selection of Data Quality Metrics to Report


The reporting of data quality metrics should be more than just a number reported periodically (e.g., monthly); instead, it should be a series of over-time measurements that are important to both the customer and the organization. Reporting these metrics needs to be transparent and honest, where there is no fear of negative consequences if someone provides “bad news,” and there is an owner for each metric. Organizations can achieve this objective if:


  • The organization automatically updates the numbers (e.g., daily)
  • The performance reporting is from a process-output point of view instead of focusing on how well individuals or departments met numerical goals each month



Predictive performance metric reporting is what a free 30,000-foot-level reporting app (Breyfogle 2014, 2021) provides for quality metrics and other business measurements. With a 30,000-foot-level form of reporting, one might notice that organizational reactions to not meeting a monthly target have led to firefighting common-cause variability as though it were a special cause. With a traditional management-by-objectives (MBO) approach of monthly metric goal setting, one might notice for a specific 30,000-foot-level measurement that nothing has changed for the last 16 months. Over this time, there has been an approximately 12% non-compliance rate in achieving a monthly targeted goal. One could expect this rate to continue unless there is an improvement in the process or processes impacting this metric.
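
A minimal Python sketch of this idea follows. It is illustrative only: the monthly counts are hypothetical, chosen to mirror the 16-month, roughly 12% example above, and ordinary p-chart limits stand in for the 30,000-foot-level app's actual charting methodology. When no special-cause signal is present, the long-run rate is reported as the prediction rather than reacting to individual months that missed a goal.

```python
# Minimal sketch with hypothetical data: screen monthly non-compliance counts
# for special-cause signals using basic p-chart limits and, if none are found,
# report the long-run rate as the prediction instead of reacting month by month.
from math import sqrt

# (shipments, non_compliant) per month -- illustrative numbers only
monthly = [(400, 52), (420, 47), (390, 45), (410, 50), (405, 49),
           (415, 51), (398, 46), (402, 50), (408, 48), (395, 47),
           (412, 53), (400, 49), (407, 50), (399, 46), (404, 48), (410, 51)]

total_n = sum(n for n, _ in monthly)
p_bar = sum(x for _, x in monthly) / total_n           # long-run non-compliance rate

signals = []
for month, (n, x) in enumerate(monthly, start=1):
    p = x / n
    sigma = sqrt(p_bar * (1 - p_bar) / n)              # p-chart standard error
    if abs(p - p_bar) > 3 * sigma:                     # beyond 3-sigma limits
        signals.append(month)

if signals:
    print(f"Special-cause signal(s) in month(s) {signals}: investigate those months.")
else:
    # Only common-cause variation: firefighting individual months adds no value.
    print(f"Process is stable; expect roughly {p_bar:.1%} non-compliance "
          f"each month unless the process itself is changed.")
```

The essential design choice is the same as in 30,000-foot-level reporting: react to individual data points only when they signal something beyond common-cause variation.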


One way to determine what quality metrics to report in an organization with a 30,000-foot-level report-out format is to ask the following question in a team environment: what is essential to you as a customer of a product or service you purchased? The resulting list should provide insight not only into what high-level metrics to track but also into where to focus process quality-measurement improvement efforts.


A list of items to consider for the initiation of this discussion is:


  • The return rate of products from the customer
  • On-time customer delivery
  • Product shipment errors
  • Product lead-time to customer
  • Internal rework rate before shipment
  • Customer satisfaction
  • Warranty claims
  • Product dimensions/performance relative to specifications
  • Quality non-conformance costs



After creating this list, one should determine how to report (e.g., monthly over many years) each metric from a 30,000-foot-level perspective. For example:


  • The return rate of products from the customer: Report the number of returned products divided by the number of products shipped. A Pareto chart of the reasons for returns would be helpful to gain insight into what to do to improve the process so that this return rate decreases in the future. (The first sketch after this list illustrates the calculation.)
  • On-time customer delivery: It is preferable to track this metric at a 30,000-foot level using a continuous response when assessing process stability and then noting the performance from the latest stability region as the proportion of deliveries that were not on time. One would expect the future performance of this late-delivery rate to be similar unless the process is executed differently.
  • Internal rework rate before shipment: This “hidden factory” metric could be reported weekly as the frequency of occurrence.
  • Customer satisfaction: In addition to a Likert scale of 1-5 relative to customer satisfaction, one could ask customers whether they would recommend the product they purchased to someone else. An organization could use a scale of 1-10 for this assessment, where 10 is “definitely would recommend” the product/service. An organization could track the frequency of unsatisfactory/satisfactory responses to these questions monthly. Evaluating survey complaints can provide insight into improvement opportunities. (The second sketch after this list illustrates this tracking.)
  • Quality costs: A traditional “Cost of Quality” quantifies costs in prevention, appraisal, and internal and external failure; however, this conventional assessment can be very labor-intensive. Because of this, an organization may conduct the assessment only once. As an alternative, one could consider these areas but do a monthly sampling to estimate and document in a spreadsheet the cost implication of all metrics chosen relative to quality, including metrics that might not have a traditional quality-cost consideration, such as on-time delivery. This total reported quality-cost metric could combine the various quality costs into one 30,000-foot-level report-out metric. Understanding how to improve a process is often achieved by drilling down to more specific area costs. (The third sketch after this list illustrates such a combination.)
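
Referring to the return-rate bullet above, the sketch below shows one hypothetical way to compute the monthly return-rate metric and a Pareto ordering of return reasons; the shipment count, return records, and reason categories are made up for illustration.

```python
# Hypothetical monthly return records: units shipped plus a list of return reasons.
from collections import Counter

shipped_this_month = 1250
returns = ["damaged in transit", "wrong item", "damaged in transit",
           "defective", "wrong item", "damaged in transit", "defective",
           "no longer needed", "damaged in transit"]

return_rate = len(returns) / shipped_this_month
print(f"Return rate: {return_rate:.2%} ({len(returns)} of {shipped_this_month} shipped)")

# Pareto ordering of reasons: largest contributors first, with cumulative share,
# pointing improvement efforts at the few categories driving most returns.
counts = Counter(returns)
cumulative = 0
for reason, count in counts.most_common():
    cumulative += count
    print(f"{reason:20s} {count:3d}  cumulative {cumulative / len(returns):.0%}")
```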

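
For the customer-satisfaction bullet, the sketch below computes the monthly proportion of satisfied responses and of “would recommend” responses that would feed the month-over-month tracking. The cut points used (4-5 counted as satisfied on the 1-5 scale, 9-10 as “would recommend” on the 1-10 scale) are illustrative assumptions, not thresholds prescribed above.

```python
# Hypothetical one-month batch of survey responses. The cut points below are
# illustrative assumptions, not thresholds prescribed by the article.
satisfaction_1_to_5 = [5, 4, 3, 5, 4, 2, 5, 4, 4, 5, 3, 4]
recommend_1_to_10 = [9, 10, 7, 9, 8, 4, 10, 9, 8, 9, 6, 9]

satisfied = sum(1 for s in satisfaction_1_to_5 if s >= 4) / len(satisfaction_1_to_5)
would_recommend = sum(1 for r in recommend_1_to_10 if r >= 9) / len(recommend_1_to_10)

# These two proportions would be appended to the month-over-month series that
# feeds the high-level satisfaction report-out.
print(f"Satisfied (4-5 on 1-5 scale):         {satisfied:.0%}")
print(f"Would recommend (9-10 on 1-10 scale): {would_recommend:.0%}")
```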

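
For the quality-cost bullet, the sketch below rolls monthly sampled cost estimates, including a non-traditional item such as the estimated impact of late deliveries, into one combined figure with a drill-down by category. Category names and dollar amounts are hypothetical.

```python
# Hypothetical monthly sampling of quality-related costs. The point is rolling
# several sampled cost estimates into one combined monthly figure for the
# high-level report-out, while keeping a per-category drill-down.
monthly_quality_costs = {
    "prevention": 4_200,           # training, process design reviews
    "appraisal": 6_800,            # inspection, audits, test equipment
    "internal_failure": 9_500,     # scrap and internal rework
    "external_failure": 12_300,    # warranty claims, returns handling
    "late_delivery_impact": 3_100  # estimated cost of missed delivery dates
}

total = sum(monthly_quality_costs.values())
print(f"Combined quality cost this month: ${total:,}")

# Drill-down by category points to where improvement effort is likely to pay off.
for category, cost in sorted(monthly_quality_costs.items(), key=lambda kv: -kv[1]):
    print(f"  {category:22s} ${cost:>8,}  ({cost / total:.0%} of total)")
```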

Quality Metric with Demonstrated Improvement


A 30,000-foot-level quality report-out is not simply a traditional control chart. This high-level performance scorecard reporting will take different formats depending on the data type. Still, in every case, there will be a statement at the bottom of the chart about whether the process output is predictable. A prediction statement is included at the bottom of the 30,000-foot-level chart for predictable processes. As the figure below illustrates, the process is staged when a demonstrated improvement occurs. For this charted attribute response, the three latest plotted points indicate an improvement in process performance, i.e., lower response magnitude.



This report-out was created using a free 30,000-foot-level reporting app.
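
As a simplified stand-in for the staging shown in the figure (not a reproduction of the app's logic), the sketch below assumes the month at which a process change took effect is known and bases the performance statement on the latest stage only.

```python
# Simplified staging illustration with hypothetical data: once a process change
# is known to have taken effect (assumed here to begin at month 14), the reported
# performance comes from the latest stage only, not from the full history.
monthly_defect_rate = [0.118, 0.125, 0.121, 0.119, 0.124, 0.120, 0.122,
                       0.117, 0.123, 0.119, 0.121, 0.120, 0.118,
                       0.074, 0.071, 0.076]   # improvement in the last 3 months
change_index = 13                             # index of the first month in the new stage

baseline = monthly_defect_rate[:change_index]
new_stage = monthly_defect_rate[change_index:]

print(f"Baseline stage: mean defect rate {sum(baseline) / len(baseline):.1%}")
print(f"Current stage:  mean defect rate {sum(new_stage) / len(new_stage):.1%}")
print("The prediction statement is based on the current (latest) stage only.")
```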


Summary


Highlights from the above-described methodology for quality metric reporting are:


  • Select measurements that are important to your customer and the business.
  • Report quality metrics from a 30,000-foot-level process point of view to undertake process improvement efforts when an undesirable common-cause response exists.
  • Provide a reporting system so there is transparency and up-to-date, accurate information available through a “click of the mouse” by all authorized personnel throughout the business. A patent published in 2023 describes how to address this quality-metric reporting need. (Contact us at [email protected] to discuss how you could use and benefit from the patent-described methods.)
  • Ensure the reporting system encourages open data entry and discourages “fudging the numbers” to make things look better than they are.
  • Link quality metric reporting to the functional processes that created the responses.
  • Assign ownership to each 30,000-foot-level reported quality metric; i.e., the quality department does not own all quality metrics.



References





For additional information about quality metrics, see the ASQ Quality Progress August 2023 article “Meaningful Metrics: Deciding which quality metrics to monitor and how to report them” by Forrest Breyfogle, which describes the creation of metrics reporting from a high-level point of view that leads to long-lasting process improvements.






Next Step

To see the application and benefits of 30,000-foot-level quality metrics reporting to your data, contact Forrest Breyfogle at [email protected]. You can schedule a video meeting with Forrest through the link https://smartersolutions.com/schedule-zoom-session/ to discuss the application of the described methodology or ask any questions.
