Project Data: An Unsustainable Approach

Machine learning is transforming a number of sectors and professions. Yet it is struggling to get traction within project delivery. ~$100 trillion will be invested globally in infrastructure projects in the next 10 years. The opportunity to make a difference is huge.

Yet many organisations struggle with how to get started. Their approach is often to apply Power BI and improve dashboards; that is an important step forward, but it fails to address the fundamentals.

In a recent article, Gartner highlighted that by 2030, 80% of project tasks will be performed by machines. It's not just about automation; it's also about finding the most efficient path through a project, improving delivery certainty and driving delivery productivity. McKinsey is also advocating the benefits of advanced data analytics, although I believe their analysis underplays the opportunity. I personally think that early adopters could achieve this by 2025, or earlier if there is a commitment to make it happen. I am seeing organisations that are making the intellectual transition and fundamentally rethinking their business proposition. Is data merely an enabler for delivering projects, or does data codify the hard-won experience of delivery, enabling organisations to rapidly improve and focus innovation?

I recently reviewed the data on 5 large projects. It amounted to 500GB, with 32,000 documents and 25,000 spreadsheets. I am aware of another project that was creating 3,000 documents a month. This ignores the vast array of decisions that are captured within emails, Slack, Yammer or similar. Such an approach serves the here and now, but the data becomes an unusable exhaust plume: a by-product of project delivery.

Organisations need to make a simple decision: do they continue along the current lines, or do they believe that data can enable them to deliver projects more effectively in the future? The fundamental premise of machine learning is to learn from experience.

Some organisations will be sold tools and platforms to solve this problem. Although these may address specific use cases and deliver quick wins, the power of the data resides in its connectivity: cause and effect, characterising lead indicators, and predicting future risks and outcomes. Connecting 32,000 documents, 25,000 spreadsheets and millions of emails is a daunting task for even the most accomplished data scientist. It's a trajectory that will never achieve the end goal.

When we drilled into this data, we found issues with quality, consistency and timeliness. Even if we were able to correlate it all, the insights we extracted would have holes in them, or be potentially misleading, drawing inferences that aren't statistically robust.

Organisations need to rethink their approach to data. What are the key challenges they want to address, and what data do they need to solve them? If you could envisage an AI future, how would your data architecture help to get you there? This isn't a tweak; it's a fundamental review.

I recently read a post advocating an evolution to project data analytics that deeply concerned me. It's a manifesto for bolt-on consultancy services and tools, rather than laying the foundations for an AI-enabled future.

This doesn't need to be a big bang. The first step is to ignite the professional imagination and illustrate what the future can look like; create the desire for change. The next step is to develop a data strategy: one that makes it as easy as possible to harvest and collate data. This means rethinking the current form-, spreadsheet- and document-centric approach. Capabilities such as Power Apps and Flow can certainly help, and they are already included in your O365 subscription. It's not as difficult as we may imagine.

Some organisations ask me where to start when their data is so bad. The first step is to audit what is already there and assess its potential utility through a data science lens. The second is to develop a strategy to improve it, which may involve transforming the fundamentals.

An example: one organisation I spoke to suggested that the answer to improving quality is to give people a choice of hundreds of categories of data via drop-down lists, making it easier to segment and correlate. The theory is good, but human behaviour is such that we don't take the time to classify everything correctly, particularly when any one of five categories could fit the bill. Management infers that because all the data is now classified, data quality is improving. The reality is that we have made the situation worse: we now have data in the wrong buckets, and we don't know what percentage is wrongly classified.

An alternative is to use data science. Take procurement as an example. We know what we order from each supplier, and we know that bathroom suppliers provide bathroom supplies. When a delivery arrives, we should be able to classify it by inference rather than via drop-down lists, or at least dramatically reduce the number of options. The key is in the correlation of data, rather than relying on a human to classify it.
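To make the idea concrete, here is a minimal sketch of classification by inference. The supplier names and categories are entirely hypothetical; the point is simply that a delivery's category can be ranked from that supplier's order history instead of asking a human to pick from hundreds of options.

```python
from collections import Counter

# Hypothetical order history: (supplier, category) pairs drawn from past purchase orders.
order_history = [
    ("AquaBath Ltd", "bathroom"),
    ("AquaBath Ltd", "bathroom"),
    ("AquaBath Ltd", "plumbing"),
    ("SteelCo", "structural steel"),
]

def candidate_categories(supplier, history, top_n=3):
    """Rank the most likely categories for a delivery from this supplier,
    based on how often each category appeared in past orders."""
    counts = Counter(cat for s, cat in history if s == supplier)
    return [cat for cat, _ in counts.most_common(top_n)]

print(candidate_categories("AquaBath Ltd", order_history))
# → ['bathroom', 'plumbing']
```

In practice you would replace the frequency count with a trained classifier over richer features (item descriptions, order values, project phase), but even this crude shortlist turns a drop-down of hundreds into a choice of two or three.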

We also know that at a certain stage, a project is predisposed to certain types of risk. We know this by correlating project types, context, risks and schedules. If we can connect this data together, we can begin to identify early warnings or lead indicators and get much better at sensing and pre-empting issues.
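The same correlation idea can be sketched for risk. Assuming a hypothetical cross-project risk log recording which risks materialised at which stage, a simple frequency profile per stage already acts as a crude lead indicator:

```python
# Hypothetical cross-project risk log: (project stage, risk that materialised).
risk_log = [
    ("design", "scope creep"),
    ("design", "scope creep"),
    ("procurement", "supplier delay"),
    ("construction", "supplier delay"),
    ("construction", "weather"),
]

def risk_profile(stage, log):
    """Relative frequency of each risk type at a given project stage,
    across historical projects -- a crude lead indicator."""
    stage_risks = [risk for s, risk in log if s == stage]
    total = len(stage_risks)
    return {risk: stage_risks.count(risk) / total for risk in set(stage_risks)}

print(risk_profile("design", risk_log))
# → {'scope creep': 1.0}
```

A real implementation would condition on project type and context as well as stage, but the principle is the same: once risk data is connected across projects, the question "what usually goes wrong at this point?" becomes answerable by query rather than by memory.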

The opportunities are vast.

We are entering a whole new era. One that will enable a transformation in how projects are delivered.

Are you ready for it?

Martin Paver is the CEO and Founder of Projecting Success, a consultancy that specialises in leveraging project data to transform project delivery. He has led a $1bn megaproject and a multi-billion-dollar portfolio office. He is the founder of the Project Data Analytics community, comprising over 2,400 members who share a passion for leveraging the exhaust plume of project data. He regularly blogs and presents at international conferences, helping to ignite the professional imagination and inspire change.

Thomas Walenta

Researching the value of PgMP and PgM *** Mentoring. Building Wisdom. Striving for Humility. *** 1st project 1974 *** PMI volunteer since 1998 *** PMI Fellow, PgMP, PMP, and 31 years working for IBM customers. ***

5y

Good article, Martin. Looking at BIM, construction project management might be quite far into digitalization of project and asset management (and the UK is leading). Wonder if it can be adapted to other industries and elevated to a general topic.


Hi Martin – some of these points crossed my mind during the last meetup in London, specifically the accuracy/hygiene of data used by AI/data scientists and the effort it takes to on-board and clean it up during that on-boarding process. As you highlight, it's a daunting task, to say the least. To better leverage ML during the process, there needs to be a rich set of examples of what "right" looks like. Do we need to define ML/AI-friendly standards and seek out organisations who would be willing to embrace the machine-friendly standard in parallel to existing processes, to build such a baseline and more effective tooling for the data scientist tasked with on-boarding the project data? Machine-friendly meaning self-describing, consistent, linked and extensible and, to your point, starting small without a drop-down list of hundreds of options?

Richard (Rick) Dunk

Systems Engineer (not Technician)

5y

This appears like an allegory to requirements linkage with a more diverse set.
