Why Valuing Project Data Is Misguided
Martin Paver

I’ve seen a number of organisations recently offering services to value data and add that value to the company balance sheet. Although this may work for some types of data, such as sales data, will it work within the project delivery domain?

Let’s start with some fundamentals. When we run projects we know from our own experience that specific problems tend to emerge at different points of the project lifecycle. We develop our own heuristics. We sometimes codify these heuristics into lessons learned, or we summarise our methods within a body of knowledge. But these are blunt tools when we live in such a data-rich world.

It is becoming increasingly possible to create models that provide a representation of reality, but these models need a critical mass of data. Who owns that data? Does it belong to the client, the main contractor or the supplier who created it? What happens if someone, such as a consultant, processes the data and turns it into something different; does it become theirs? In law (so I am advised) data is not an asset and therefore cannot be owned, which adds further complications.

Let me set out three options:

  1. Each company owns and protects its own data. It values the data on its balance sheet and seeks to acquire as much as possible. It mandates that all suppliers provide their data as part of the contract so that more value can be added to its own balance sheet; as a consequence, the data on each supplier’s balance sheet is devalued. The ultimate client starts to think the same way and begins to put agreements in place around how the data accrued from its investment can be used. We finish up with a battle in which the biggest client with the most data will ultimately win, but at the expense of everyone else in the supply chain.

Their motivation is then driven by how they can monetise this data, either by selling it to others or by creating their own solutions. Data generators aspire to become software vendors. But what is the addressable market for such a product? If main contractor X develops a tool, will the other main contractors buy it, or will they seek to develop their own? We create an ecosystem that is incredibly inefficient.

We also set up a framework for litigation, as everyone in the supply chain seeks to squeeze value out of the data that is either created by them or passes through their hands.

  2. A vendor is given the data for a specific capability. Organisations that sell P3M platform solutions gain access to data as a by-product of being a tool vendor. Are you aware of what rights they have to leverage ‘your’ data? Are those rights limited to how the tool is used, or do they extend to the data that passes through the tool? If we begin to value data, it is in their interest to accrue as much of your data as possible.

New vendors are emerging with some great ideas but without the data to train their models. They have to invest a lot of time in getting hold of data via bilateral agreements and non-disclosure agreements. The more data they have to train their models, the more valuable the models (and their businesses) become.

We create a closed system in which those who provide the data only want to work with the bigger companies, because the cost of liaising with hundreds of vendors becomes overwhelming. We severely constrain innovation.

  3. We open up the data. If we open source the data, organisations will be exposed to commercial and legal risks, so it is unlikely that it can ever be fully open sourced. But we can put the data under independent stewardship, governed by a set of rules defined by those who provide it, under the umbrella of a data trust. Essentially, we are democratising the data, and as such it becomes valueless on the balance sheet (we give it away).


But is this commercial suicide? Why would organisations elect to do this? It depends on whether you view the issue through a narrow company lens or a macroeconomic lens. Let me offer some arguments:

  • By pooling data we create the critical mass needed to feed AI much more quickly. It is helpful to refer to the theory of data network effects, where machine learning is used to analyse large data sets to learn, predict and improve. The more learning there is, the more value is generated, producing ever more data and learning and creating a virtuous circle. The benefit of driving up delivery productivity or improving delivery confidence may far outweigh the commercial benefit of a specific software tool or the valuation of a package of data.
  • By collaborating we accelerate our understanding of the correlation between our problem statements and our data. We begin to develop a shared understanding of how our data pipelines need to evolve. This helps to drive a data culture within the organisation, which is a fundamental prerequisite for data-driven project delivery.
  • We can benchmark ourselves against others, which helps to create the business case for change. We move more quickly together.
  • As innovators engage with our data we get rapid feedback on how to improve it, whether in quality, scope, volume or similar. This may also extend to creating the systems that automatically capture the data at source.
  • The tide lifts all the boats, driving data quality and availability across the entire supply chain.
  • There are data compounding effects. The more data we have, the better the ecosystem becomes.
  • Clients, particularly public sector organisations, begin to contract for insights derived from project data; they leverage their experience as codified in that data. The more data they can access, the better the insights and the better their projects will be.
  • We create a positive feedback loop that helps to demonstrate the benefits of collaboration. Client projects become more investable, so more projects emerge, which benefits everyone.
  • An individual organisation may be able to outperform in the short term, but it will never be able to compete with such a community-driven approach in the long term. Clients are likely soon to score collaborative, collegiate organisations more highly, penalising those who elect to go it alone in pursuit of narrow self-interest.
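The data network effects argument can be made concrete with a toy simulation (entirely hypothetical, not from the article): five firms each try to estimate the same underlying relationship from their own small, noisy samples, and we compare their individual estimates with one fitted on the pooled data.

```python
import random

random.seed(0)
TRUE_SLOPE = 2.0  # the underlying relationship every firm is trying to learn

def fit_slope(points):
    # ordinary least-squares slope through the origin
    sxy = sum(x * y for x, y in points)
    sxx = sum(x * x for x, _ in points)
    return sxy / sxx

def noisy_sample(n):
    # n observations of y = TRUE_SLOPE * x, corrupted by measurement noise
    sample = []
    for _ in range(n):
        x = random.uniform(1, 10)
        sample.append((x, TRUE_SLOPE * x + random.gauss(0, 3)))
    return sample

solo_errors, pooled_errors = [], []
for _ in range(200):  # repeat many times to average out the noise
    firms = [noisy_sample(10) for _ in range(5)]  # 5 firms, 10 observations each
    solo_errors += [abs(fit_slope(f) - TRUE_SLOPE) for f in firms]
    pooled = [p for f in firms for p in f]  # all data contributed to a shared pool
    pooled_errors.append(abs(fit_slope(pooled) - TRUE_SLOPE))

mean_solo = sum(solo_errors) / len(solo_errors)
mean_pooled = sum(pooled_errors) / len(pooled_errors)
print(f"mean estimation error, firm-only models: {mean_solo:.3f}")
print(f"mean estimation error, pooled model:     {mean_pooled:.3f}")
```

On this toy setup the pooled estimate is, on average, markedly more accurate than any single firm’s, which is the mechanism behind the virtuous circle described above: more shared data, better models, more value for every contributor.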

For me, option 3 is the only way forward for us as an industry and a community. It is the only option that creates a positive feedback loop for all. It also enables all of us to make a huge impact on project delivery performance, one that will dwarf the perceived commercial value of the data. But it relies on organisations freely contributing data for the benefit of the collective. Valuing data on the balance sheet is at odds with such an approach.



Martin Paver is the founder of Projecting Success, London Project Data Analytics Meetup and Project:Hack. His quest is to work with organisations to help bring new thinking to how they leverage value from investments in delivered projects, avoid the same costly mistakes of the past and exploit good practice. He uses the latest data analytics thinking and combines this with practical experience in bids, project delivery and knowledge management.

Chris Mackenzie-Grieve

A connoisseur of all things business with a mastery of Project Management and Procurement. Part time Procync. Experienced NED. Management and getting-stuff-done Consultant. Passionate about improvement and productivity.

3y

I intuitively agree with your option here. However, it is one thing to have data and another to know how to use that data to good effect. I am a big believer that in projects clients should “own” the data produced for their project, but they must have the ability to turn that data into useful knowledge that will benefit their current and future projects. My model for VCP (www.visiblecp.com) is based around clients owning data and leveraging it (VCP helps to do this) for the benefit of future projects. I would want to see a “shared” world of data-driven improvement knowledge available to those who need or want it. Harnessing, curating and disseminating it requires effort and resource that will need joint endeavours from clients.

David Porter

Managing Director @ Octant AI | Artificial intelligence that Spots Problem Projects Sooner

3y

Good piece Martin. My view is that the data belongs to the party that pays for its creation. In the context of public works, we the taxpayers, through our public sector, pay contractors to build a bridge, and the community owns the bridge. But we also pay for the creation of the data, so it is a community resource. There are also good social reasons for this: we want to encourage small businesses to remain competitive. As time goes on, the power will belong to those with the best algorithms and the best data. If we allow major corporations to continue to “squat” on the data, we give them an unfair advantage and become more beholden to big business.

To learn more about data and how it can help you in your job role, check out our Project Data Academy! https://projectingsuccess.co.uk/capability
