Boost your digitalization: instrumentation
Even though many of us conceptually agree with the notion of data-driven decision-making, it’s my experience that many still fall back into old-fashioned opinion-based decision-making. These opinions are formed by experience, and for most of history, making decisions based on experience was a great principle because the pace of change was very slow.
The challenge with digitalization is that it represents a radical rather than a gradual change. As a consequence, things that were true in the old world are no longer true after a digital transformation. This is critical because of its implications for experience: instead of being valuable, experience likely becomes a hindrance and something that holds organizations back. The capability that companies relied on for years is suddenly problematic.
The point is not that all experience has become useless, but rather that we don’t know which aspects of our earlier experience are still valid after a digital transformation. We therefore need to operate with a healthy dose of skepticism and follow W. Edwards Deming’s adage: in God we trust; all others must bring data.
The data is where, in my experience, the challenge starts in most companies. It often isn’t available, either because it isn’t collected, it’s hard to access or there are data quality issues. It may even be the case that you have the data, but people refuse to believe or use it because it conflicts with their assumptions about the world.
The starting point is to have the data available and this requires instrumentation of systems in the field. Instrumentation isn’t a new topic, as most companies have some form of data generation in their products, but that data is typically concerned with quality assurance and, to some extent, performance. Few of the companies I work with have any insight into the actual usage of their systems; most can’t even answer basic questions about feature usage and how their systems deliver value to their customers.
So, we need to rethink the way we instrument our systems. The existing quality and defect detection instrumentation should still be there, but we also need to generate data that we can actually use for diagnostics and machine learning. In general, this requires that the data has more contextual information associated with it.
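As a minimal sketch of what “more contextual information” can mean in practice (all names and fields below are illustrative assumptions, not taken from any specific product), consider wrapping each raw measurement in an envelope that records the circumstances under which it was taken:

```python
import json
import time
from typing import Any, Dict

def emit_event(event_type: str, payload: Dict[str, Any], context: Dict[str, Any]) -> str:
    """Hypothetical helper: wrap a raw measurement with the context needed
    to interpret it later, rather than emitting the bare value."""
    event = {
        "timestamp": time.time(),   # when the measurement was taken
        "type": event_type,
        "payload": payload,         # the raw measurement itself
        "context": context,         # the circumstances under which it was taken
    }
    return json.dumps(event)

# A usage event carries the contextual fields that make it useful for
# diagnostics and machine learning, not only for defect detection.
print(emit_event(
    "feature_used",
    {"feature": "adaptive_cruise_control", "duration_s": 312},
    {"sw_version": "4.2.1", "operating_mode": "highway", "region": "EU"},
))
```

The point of the envelope is that the same event can later answer questions we didn’t anticipate when we shipped the software, because the context travels with the measurement.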
For instance, if you’re looking to improve the fuel consumption of an engine and you only collect real-time fuel consumption data, you’re not going to learn under what circumstances this consumption is high or low. At a minimum, you need to collect the requested power output and, likely, the gear the vehicle is in as well as the RPM to start identifying the situations where the fuel consumption is significantly higher than the requested power output would suggest. With the combined information, we can start to develop hypotheses for testing in the field to improve the intended outcome.
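To make this concrete, here is a small sketch in Python; the data layout, the expected-consumption ratio and the margin are purely illustrative assumptions, not real engine figures:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class EngineSample:
    fuel_lph: float       # instantaneous fuel consumption, litres per hour
    requested_kw: float   # requested power output
    gear: int
    rpm: int

def is_suspect(sample: EngineSample, expected_lph_per_kw: float = 0.25) -> bool:
    """Flag samples where consumption is well above what the requested power
    alone would suggest; the ratio and margin are illustrative, not calibrated."""
    expected = sample.requested_kw * expected_lph_per_kw
    return sample.fuel_lph > 1.5 * expected

samples: List[EngineSample] = [
    EngineSample(fuel_lph=12.0, requested_kw=60.0, gear=5, rpm=1800),
    EngineSample(fuel_lph=25.0, requested_kw=40.0, gear=2, rpm=3500),
]
for s in samples:
    if is_suspect(s):
        # The gear and RPM context turns "consumption is high" into a
        # testable hypothesis about *when* it is high.
        print(f"High consumption in gear {s.gear} at {s.rpm} rpm: "
              f"{s.fuel_lph} l/h for {s.requested_kw} kW requested")
```

Only the second sample is flagged, and the attached gear and RPM values are what allow us to formulate a hypothesis worth testing in the field.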
The system architecture has to support instrumentation of functionality with as little effort as possible. This typically calls for architectural patterns where hooks are available to attach instrumentation functionality. In addition, as the total amount of raw data generated by systems easily becomes very large, forms of data processing to reduce volume and identify the most relevant parts are often required as well. This processing is preferably done close to the point where the data is collected or at points where multiple data streams come together. The architecture should, preferably, also allow for data processing functionality to be dynamically added to the system.
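One way to realize such hooks, sketched here with a hypothetical publish/attach interface (the class and method names are my own, not from any particular framework), is to let components publish raw events while data processors can be attached at runtime to filter or aggregate close to the point of collection:

```python
from typing import Callable, Dict, List, Optional

class InstrumentationHub:
    """Hypothetical hook point: components publish raw events and data
    processors can be attached at runtime to filter or aggregate them
    close to where they are collected."""

    def __init__(self) -> None:
        self._processors: List[Callable[[Dict], Optional[Dict]]] = []

    def attach(self, processor: Callable[[Dict], Optional[Dict]]) -> None:
        # Dynamically added processing functionality hooks in here.
        self._processors.append(processor)

    def publish(self, event: Dict) -> None:
        for processor in self._processors:
            result = processor(event)
            if result is None:     # the processor filtered the event out
                return
            event = result
        self._ship(event)

    def _ship(self, event: Dict) -> None:
        print("shipping:", event)  # stand-in for forwarding to the backend

# Reduce volume at the edge: only events above a threshold leave the system.
hub = InstrumentationHub()
hub.attach(lambda e: e if e.get("fuel_lph", 0) > 20 else None)
hub.publish({"fuel_lph": 12.0})   # dropped locally
hub.publish({"fuel_lph": 25.0})   # shipped
```

The design choice here is that the hub, not the individual components, decides what leaves the system, which keeps the volume reduction close to where the data is generated.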
Especially in embedded systems, we tend to squeeze resources as much as possible to save costs. In this case, however, we need headroom in the system to allow for the additional computation associated with data collection and processing. Also, we either need to include all data collection and processing functionality in the system from the start, switched off until specific data is required, or we need to support continuous deployment so that data-related functionality can be added through subsequent software releases.
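The first option amounts to shipping the collection code “dark” and toggling it on when a specific question arises. A rough sketch, again with invented names and a deliberately simplified remote-configuration mechanism, could look like this:

```python
from typing import Callable, Dict

# Collection code ships with the release but stays switched off to preserve
# headroom; it is enabled remotely when a specific question requires the data.
COLLECTION_FLAGS: Dict[str, bool] = {
    "fuel_context": False,
    "feature_usage": True,
}

def maybe_collect(flag: str, collect: Callable[[], None]) -> None:
    """Run the (potentially expensive) collection only when its flag is on."""
    if COLLECTION_FLAGS.get(flag, False):
        collect()

def apply_remote_config(config: Dict[str, bool]) -> None:
    """Apply a remotely delivered configuration, e.g. when a new hypothesis
    needs data that isn't being gathered yet."""
    COLLECTION_FLAGS.update(config)

maybe_collect("fuel_context", lambda: print("collecting fuel context"))  # no-op: off
apply_remote_config({"fuel_context": True})
maybe_collect("fuel_context", lambda: print("collecting fuel context"))  # now runs
```

The alternative, continuous deployment, trades this upfront inclusion for the ability to add new collection code in later releases, at the cost of a more capable update pipeline.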
Digitalization requires data-driven decision-making as we need to be less dependent on opinions. This requires us to instrument our systems in the field so that we can collect the data required for decision-making. The system architecture should allow for easy incorporation of data collection and processing functionality so that we can answer relevant questions about system performance and customer behavior as well as experiment with alternative ways of realizing that functionality. Architecture is the physical incarnation of your real business strategy, so make sure you design it accordingly!
Like what you read? Sign up for my newsletter at [email protected] or follow me on janbosch.com/blog, LinkedIn (linkedin.com/in/janbosch), Medium or Twitter (@JanBosch).