How do users measure the value of your data? Article No. 2

Traditional contracted operations approaches generate reams of data that are presumed to be valuable to the user based on logic and common sense. But is your data used often, or well, by users? Do you know who your data users are and what their perspective of your data is? How much of what you produce is heavily used vs. ignored?

In short, how much of your team’s effort in data creation is waste? And if the waste is significant, how could you best adjust that investment to drive real operations transformation?

In our first article looking at how our contract operations data should be designed for maximum impact, we learned we can’t assume that our data is valuable. That is, the data we produce in the contract operations world: data produced by VMOs, Supply Chain, Procurement, or Shared Services organizations. Our newfound skepticism comes from a UN report on the production practices of National Statistics Offices (NSOs) regarding data collected globally for COVID response, and the NSOs’ subsequent questioning of their own data’s value. This NSO data had previously been presumed, by its very nature, to be forever valuable to the public. That value was called into question when key characteristics of the data, and how the NSO organizations behaved, didn’t align with changing views of that data during the COVID crisis.

These changing users and user perspectives changed the value of what NSOs produced. The shift became an added threat to COVID planning and response, affecting policy creation and adoption, and ultimately the effectiveness of public health efforts during the pandemic.

Implications of COVID data for Contract Operations Data

Firms don’t face crises on a COVID scale every day, but crises do happen. Many firms tend toward decline, or even extinction, on a sometimes yearly basis. A new product or service emerges from a competitor. Access to employee talent becomes difficult to attain or maintain. Changes inside or outside the firm evolve, and the health or even the life of the firm is threatened. Or a pandemic occurs. As the nature of how contracted work is done and delivered changes, and as markets, suppliers, and client firms themselves continue to evolve rapidly, how have the users and their perceptions of our data evolved? Have important new users or groups with different attitudes and assumptions been added to our environment? How have their perspectives impacted our value proposition?

And most importantly, how must our data about contracted operations performance evolve to stay relevant, useful, and insightful for options development, risk insights, performance planning efforts, and more? Is the changing environment changing the perceived relevance of the very data WE produce? If we don’t think so, why not?

UN Task Force Focused on Measuring Data’s Value

Let's recap our conclusions.

Last week we discovered that the perceived value of the COVID statistical data changed during the pandemic. And the NSOs didn’t change fast enough as public sentiments about the data evolved. Because these organizations were not designed for change, risk was injected into the pandemic response that could have been mitigated, had these organizations been more aware of user perspectives about the data they were producing.

So what was their original (and flawed) mindset?

That data’s value is driven by the nature of the data itself. This is typically called intrinsic value. The value is apparent from the data’s source or nature, given the context of current stakeholders, relationship dynamics, decision-making requirements, known risks, stated objectives, and so on.

If intrinsic value were the only meaningful measure of value, this mindset of focusing on data quality would win the day for maximizing the value of data. If you produce data about infection rates and other known artifacts, then producing good-quality data is all you need to worry about. People will consume it; if they can rely on the data’s quality, you have achieved your mission.

In the world of statistics, this is a core discipline, so the measures of quality are known and part of every NSO’s creed. The Task Force report restated the most commonly accepted quality dimensions of data, which have driven the missions of public health reporting organizations for decades.

Known Data Quality Measures

Relevance: the extent to which the statistics satisfy the needs of the users.

Accuracy: the closeness of estimates to the exact or true values that the statistics were intended to measure.

Reliability: the closeness of the initially estimated value(s) to the subsequent estimated value(s) if preliminary figures are disseminated.

Timeliness: the length of time between the end of a reference period (or date) and the dissemination of the statistics.

Punctuality: the time lag between the release date and the target date by which the data or statistics should have been delivered.

Accessibility: the ease and conditions with which statistical information can be obtained.

Clarity: the availability of appropriate documentation relating to the statistics and the additional assistance that producers make available to users.

Coherence: the ability to reliably combine statistics and data sets in different ways and for various uses. Consistency is often used as a synonym for coherence.

Comparability: the extent to which differences in statistics from different geographical areas, non-geographical domains, or over time, can be attributed to differences between the true values of the statistics.

This is a great list and, on its face, a very useful one.

But the problem was that as soon as the World Health Organization declared a pandemic, the way public health data was being used changed. The general public quickly became a core user of the data, along with the reporters who tried to prepare compelling summaries of the data for consumption. And various universities competed for more timely and concise packaging of the data, using their brands to create a leading position in what became a marketplace for better information.

Speed was one of the biggest changes in how data was produced. In addition, the impact on state and federal budgets became material whenever data inaccuracies were reported.

And the relative importance of quality measures such as timeliness, punctuality, accuracy, and accessibility shifted as the users of the data changed.

Examples include reporters who needed to navigate from charts and trends to reports, some for the first time. Consumers widened to include different groups of policy influencers and opinion makers (outside of public health), some without any statistical background, as well as citizens, many of whom had not interacted with data on this scale since school, or ever.

Decisions about whether to shut down local schools and businesses, and about how governments should respond, were taking place in days or weeks. Not years. The time available for all of these new audiences to understand the data needed to be near zero.

In response, some health organizations were already developing flash reporting products and pre-digested statistics. Dashboards, with interesting, clear, and simple visualizations, became the norm over reports. Storytelling structures were expected and built into the way the data was laid out.

NSOs leaned on their established frameworks for transparency, independence, and other trust factors to weather the changing landscape.

But the level of observed misuse of reported COVID data became an unexpectedly dominant trend, one that threatened the value position of the published data and its central place in the COVID response. Further, as government response policies became harder to accept, this mistrust shifted to the data itself. Independence became not merely a feature, but a highly scrutinized key design element for every data artifact produced.

The current model proved that, even with recent data improvements, NSOs had failed to address the shifts in the user environment.

What went wrong?

Our Lopsided Value Lens

Value is not solely derived from how a thing is produced. Value is also determined by how a thing is perceived by the consumer or users.

The schools of thought on value are diverse, and to address our emerging value problem, we should understand how value is framed by those who get to decide the value question.

The below model is my own reframing of the report’s model (to make it more readable), as we are an audience of practitioners and not academics. Feel free to refer to the original model to see how I did in my translation.

The Value Framework

Value Framework - All Schools of Thought

Without going into the weeds, the quick takeaway is that there are two broad schools of thought on what constitutes value. We are pretty good at the observable, easily countable production perspectives. We can create standards and boundaries all day. Then we tend to stop.

But we don’t do as well with the consumer side. It’s more time-consuming and expensive to attain, so we deprioritize it for speed and the perception of progress over insight, which can get messy and harder to deal with.

But the model leaves us with one compelling conclusion. Quality is not a direct and singular correlate of value. We need a wider lens, my friends, to understand whether our stuff is, in fact, valuable.

If we look at the framework that clarifies the work of the UN Task Force to capture all of the schools of thought that relate to value, we can clearly place most of our traditional measures into boxes on the right. But not so much on the left.

The left side of the framework is embedded in the minds of the consumer, where subjectivity reigns supreme. It calls for a marketing response, where words like persona and stakeholder positioning are just as relevant as on time and on budget.

The conclusion drawn by the UN Task Force on data quality ended up with this sentiment: “Quality, in essence, is the ‘degree of excellence’, while value is the subjective assessment of that quality that makes something desirable.”

So, if quality is not a synonym for value (though the two may well share some of the same real estate), we have work to do.

We cannot rely on our service or product’s objective excellence as a reliable proxy that can effectively and continuously (through change) answer the all-important query: is anyone going to buy your stuff? Read your stuff? Use your stuff?

This simple aspect of the subjective nature of the value question is why we still have Marketing and Brand departments thriving alongside quality control systems and engineering teams. The quality and measurable goodness of a product or service lives alongside the inner experience of the individual using the product or service.

The challenge is that there are limits to our ability to peer into the minds of others. And marketing resources are not available to every function or team that produces a product or a service. Therefore, the UN Task Force acknowledged certain limitations that framed their approach and expectations related to their efforts to satisfy subjectively held beliefs related to their statistics:

1. Demand for any data set fluctuates and, therefore, may not be consistently relevant or highly valued.

2. Producers cannot pre-emptively know where users will perceive value.

3. All produced data is in competition with data from other sources.

4. Not every need for data can, or should, be filled.

Given these caveats, what did the UN Task Force do to point data publishing efforts in the right direction? They adopted a framework to get closer to the consumers of their data and integrated these new practices into their proposed operating model. They used the D.A.R.E. model, which stands for Dependability, Applicability, Respect/Trust, and Ease of Use.

I’ve expanded on this framework by developing 10 questions under each dimension to elicit insight into the inner perspectives of the user group or stakeholder you are working with. If the model shows gaps between their perspective and your own, you can target that specific area of the relationship for growth by adjusting your style, acquiring new skills, or deepening experience insights to close the gap.

I’ve put together a scoring model that indicates areas of improvement for a fictional VMO (bringing this back to contracted operations for a moment) that needs to work on a stakeholder’s perspective of its organization.

I can send you the model to experiment with if you like.

Just put “MODEL” in the notes and I’ll send you the Excel file.

I recommend you score your own function at the same time as your customers or data users score your function/team using the same model.

A sample scoring result is below.

D - For Dependability
A - For Applicability
R - For Respect and Trust
E - For Ease of Use


The model will not immediately give you the answer for how far apart you are from your users and stakeholders. However, asking the stakeholder to share the why behind any delta between your perspective and their own is a good start. Also, weighting one or two of the questions from each section will further target your focus for time investment. Collecting these gaps and ranking them into user/stakeholder-prioritized themes will create a roadmap and path forward.
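As a rough sketch of how this gap-and-ranking exercise might be automated, the snippet below compares a team’s self-assessment against a stakeholder’s assessment, applies per-question weights, and ranks the dimensions by the largest negative gap. The dimension names come from the DARE model; the specific question counts, weights, and 1-5 scores are hypothetical illustrations, not values from the article or the scoring workbook.

```python
# Hedged sketch: scoring perception gaps under the DARE dimensions.
# All scores and weights below are hypothetical examples.

def weighted_gap(self_scores, stakeholder_scores, weights):
    """Return the weighted average delta (stakeholder minus self) per dimension.

    Each argument maps dimension -> list of per-question values.
    A negative delta means the stakeholder rates you lower than you
    rate yourself, flagging that dimension for a follow-up conversation.
    """
    gaps = {}
    for dim in self_scores:
        deltas = [
            (s_stake - s_self) * w
            for s_self, s_stake, w in zip(
                self_scores[dim], stakeholder_scores[dim], weights[dim]
            )
        ]
        gaps[dim] = sum(deltas) / sum(weights[dim])
    return gaps

# Hypothetical 1-5 scores for three questions in two DARE dimensions.
self_view = {"Dependability": [4, 5, 4], "Ease of Use": [4, 4, 5]}
stakeholder_view = {"Dependability": [4, 4, 3], "Ease of Use": [2, 3, 3]}
weights = {"Dependability": [1, 1, 2], "Ease of Use": [2, 1, 1]}

gaps = weighted_gap(self_view, stakeholder_view, weights)
# Rank dimensions by most negative gap first: that ordering is the
# "roadmap" of where to invest relationship and data-design effort.
roadmap = sorted(gaps, key=gaps.get)
```

The weighting step mirrors the suggestion above to emphasize one or two questions per section; the sorted result is the prioritized theme list that feeds the roadmap conversation.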

If this model had been in use by our own CDC during the COVID pandemic, one wonders whether some of the challenges the organization faced could have been avoided, or at least mitigated, by feeding this direct feedback into its decision-making and its data design and publishing processes for COVID data. Some CDC efforts along these lines are posted on its website. The agency acknowledges that becoming more agile and engaged with its new data users will require it to overhaul much of its internal operations.

Having stakeholders complete such an assessment (to compare with your own), or even getting stakeholders to give their perspective on a few of these questions informally where you think you might have work to do, can shed light on where to apply measures on the “consumer side” of the value framework.

Also, some questions from the DARE model could serve as leading KPIs in their own right, indicating where data value likely stands and where it is headed. Especially in the face of change and transformation, they reveal how people see your data (and by extension, your team/org) on the broader value question, even as conditions change.

All that is required is to add this value dynamic to your check-in conversations on an ongoing basis. Waiting for change to impact your strategy before informing your response is too late.

Next week, we’ll explore what can be done to leverage this model to elicit measures of functions or relationships on both sides of the value model, and position our firms and key relationships to better navigate the changes no one can see coming.

Please share this Newsletter where you feel it may add value with your organization or network.

Regards,

Kirk Mitchell, Esq.
