The Data Paradox

Welcome to another edition of Attain. In this article, you'll read about:

How organizations are drowning in data but lack real insights, with 90% of resources going to data collection rather than understanding. The solution is a shift toward lightweight experimentation to make better business decisions, with an actionable how-to at the end.

WARNING - this is a massive 20 min read.

When Information Abundance meets Insight Scarcity

In 2023, companies worldwide spent over $250 billion on data analytics and business intelligence tools. By 2024, IDC predicts that enterprises will create and manage 143 zettabytes of data – roughly 150 trillion gigabytes. The modern business landscape has become obsessed with data collection, with organizations investing heavily in data lakes, warehouses, and sophisticated analytics platforms.

Data has indeed proven to be the new oil, especially with platforms like Reddit now selling "their" data to AI companies to train new models. What's being licensed isn't just user content. It's also behavioral and transactional data. In other words, how people behave online. That part of those deals is rarely - if ever - talked about.

You'd think this explosion in data collection and analysis capabilities would translate into proportionally better decision-making. However, a recent McKinsey study found that while 92% of companies have increased their investment in data initiatives over the past three years, only 24% report tangible improvements in decision-making quality. So, you have nice dashboards and reports, but the decisions are still slow and haven't improved.

We're drowning in data but starving for insight.

The Gathering vs. Understanding Gap

A survey of Fortune 500 companies reveals that:

  • 65% of data-related budgets go to data collection and storage
  • 25% goes to basic analysis and reporting
  • Only 10% is dedicated to understanding causation and running experiments

Read that again.

90% of all the effort goes into gathering, storing, structuring, and reporting data. Yet, only 10% goes into understanding.

This is a fundamental misallocation of resources. Companies invest heavily in answering "what" questions – what happened, what changed, what's different – while underinvesting in understanding the "why" behind the what. Why did this happen, why did it change, why are things different now?

The why is crucial.

It's what helps you make the actual decision.

This may sound a bit abstract, so let's be very specific. Look at the allocation of time and resources on data teams inside most companies.

  • 70% of data engineering time is spent on data pipeline maintenance. That's just making sure that data is being collected and stored.
  • 60% of data analyst time is spent cleaning and preparing data. That's making sure that data is structured and can be used in the first place.
  • 10% of a data team's time is spent on causal analysis.
  • 2% of the time is spent on experimentation to test the hypotheses from that analysis.

The exact numbers vary from organization to organization, but the proportions do not.

If you look at Anaconda's "State of Data Science 2023" report, you'll see how data professionals spend their time:

Data Preparation, Data Cleaning, Data Visualization, Model Training, Model Selection, Reporting and Presentations, and Deploying Models.

The majority of effort goes into handling data rather than understanding it. 82.8% of data practitioners report delivering measurable value to their organizations. I am not disparaging their efforts. However, the value they provide lies in managing the data, not in deeper insights.

Organizations have built sophisticated systems for capturing and measuring everything that moves. But this wealth of information hasn't been turned into actionable insights into why things are happening. Without the right insights, the business decisions aren't effective.

If you look at the past 5 years, the gap is widening. Data collection has become cheaper and more automated, and companies continue accumulating more data. Their capacity to get meaningful insights remains relatively static.

In other words, people are scaling their ability to capture information faster than the ability to understand it.

That's a problem.

The "Hyperactive What, Lazy Why" Syndrome

This bias towards collection over understanding is what Rory Sutherland calls the "hyperactive what and the lazy why" syndrome. I'll go into this briefly, but the problem isn't just about the 90/10 effort split. It's about how companies approach business problems. I imagine this is true for other organizations like government agencies and NGOs, but I don't have enough insights or experience in those fields to comment.

I recently did a product review for a customer in Asia. It was a large enterprise product effort. Here is what I noted:

  • Data Collection: 1000+ hours per quarter spent on collecting, cleaning, and organizing data
  • Data Analysis: 400+ hours creating dashboards and reports
  • Causal Investigation: Less than 100 hours exploring why patterns exist
  • Experimentation: Fewer than 30 hours testing hypotheses

To put this in perspective, they had entire teams working on building the product and collecting data for a quarter. But they only ran 2 experiments in 3 months.

For every hour spent understanding causation, organizations spend 20 hours gathering and processing data. What's the output of all this focus on data? Well, here are my observations across 5+ customers in one industry:

  • Weekly executive dashboards: 15-20 per department
  • Monthly trend reports: 25-30 per business unit
  • Causal analysis studies: 2-3 per quarter
  • Controlled experiments: 1-2 per quarter

The Analytical Assembly Line

Data analytics in the organizations I observed has turned into an assembly line. That may be a symptom of the Taylorist model many organizations still follow. Most data is simply presented upwards instead of being used for decision-making in teams.

I often overheard terms like "massaging the data" and "tweaking the report" to make sure the executives get what they want to get and hear what they want to hear. It doesn't take a genius to recognize that this is dysfunctional.

If you look into the massive investments in data initiatives, you'll see that most are struggling with basic data preparation and cleaning. The 2023 NewVantage Partners survey shows that only 23.9% of Fortune 1000 companies have created truly data-driven organizations.

I wonder: what percentage of that 23.9% prioritizes understanding over speed of delivery? Outside of surveys, out in the field, I saw teams rushing to simplify their analysis to meet reporting deadlines. The fact that such deadlines exist is already a sign that you don't have a "data-driven" organization. In most places, the emphasis on data preparation and cleaning leaves little time for deeper investigation.

But the bigger problem is this: data analysis often serves to confirm existing beliefs rather than challenge them (MIT Sloan Management Review, 2023). Executives rarely investigate patterns that don't fit their existing mental models. You choose the data you want.

The appearance of data-driven decisions reinforces existing biases and assumptions.

So Are We Getting Better at Decisions?

Despite exponential growth in data investments:

  • According to Gartner, 87% of organizations are classified as having low business intelligence and analytics maturity
  • McKinsey reports that less than 20% of companies tracking KPIs can directly link them to value creation
  • A 2023 Harvard Business Review study found that 76% of executives were less confident in their decisions than five years ago, despite having more data

More data and more sophisticated tools, but less confidence and slower, poorer decisions. It looks like we made the problem worse. Why? Because data collection is not understanding.

Consider Dr. David Metz's research on sustainable transport policies. For decades, billions were invested in transport based on one assumption: people want to save time. Many governments made infrastructure investments focused on speed. Sustainability and quality-of-life considerations did not factor into them. So, we're talking about airports, large train stations, and expressways. The models were built around time-saving metrics. Faster, faster, faster!

So what was the outcome? When given faster transport, people don't save time. They travel more. They travel further. The entire premise underlying billions in investment was wrong. We now have cities clogged with Ubers, Tiers, more buses, more trains, more everything. Are we moving any faster? According to Metz, not really. No one questioned the initial "why."

You'll find this exact pattern across industries and organizations.

Major retailers invested millions into detailed sales tracking. I worked with fashion and sports retailers that had sophisticated decline-analysis reports and implemented price-based solutions. They had data teams and ML teams, and they trained their own models.

But they failed to address the root causes. In 2023, despite more data than ever, retail bankruptcy rates increased by 57% - ouch.

Look at customer satisfaction scores across industries. Most are in decline. Yet companies spend billions on customer data platforms, create detailed customer journey maps, and track everything. But do they understand their customers?

A question I ask every product manager in every cohort I've trained in the past few years:

When was the last time you picked up the phone and personally talked to your customer?

1 out of 20 responds with "last week". For 90%, the answer is "never". Their insights come from data reports, second-hand studies, other teams, some transactional data, or just the assumptions handed to them by their stakeholders.

This is a major reason why 70-80% of new products fail within their first year. Companies have data about their customers, but the people in new product development don't actually understand causation. Billions are wasted on products that never find market fit but are also never killed. They just fizzle on, providing everyone with crap experiences.

The Cost of Shallow Analysis

Nobody (that I've met in product development) quantifies the cost of missing insights. For any entrepreneur or solo builder, it's clear that these costs compound over time:

  1. Initial misdiagnosis leads to wrong solution ideas
  2. The wrong solutions generate more data about why they're not working
  3. More data leads to more superficial analysis, teams start to tweak the wrong things
  4. The cycle continues, each iteration more expensive than the last

Shallow analysis creates invisible costs. Real opportunities are never identified because the causes weren't understood. Actual innovations are never attempted because the data didn't "support" them. Strategic advantages are lost to competitors who dug deeper. Trust erodes in the organization as decisions based on "data" repeatedly fail.

Like, remember when Andy Jassy said that "the data" supported more innovation when people work in the office? Yeah, well, Amazon employees called BS on that one.

The World Economic Forum estimates that by 2025, the global datasphere will reach 175 zettabytes. I don't even know how many zeroes that is. Sounds big. It looks like our ability to make good decisions is moving in the opposite direction, though. This isn't just an academic concern – it's a crisis in business decision-making that's costing trillions in misallocated resources and missed opportunities.

The solution isn't more data or better tools. It's in rethinking how we approach getting the right insights.

Focusing on Insights: Looking into Multiple Causes

Consider a retail sale – a seemingly simple business event. Traditional analysis attributes increased sales during promotions purely to price reduction. This misses what's going on.

Beyond price elasticity, sales create a powerful scarcity message. The limited-time nature of promotions triggers urgency and suggests uniqueness. This "when it's gone, it's gone" psychology drives behavior in ways that pure price reduction does not explain.

Social proof adds another layer of complexity. Overcrowded stores, crashing servers, and buzz on social media normalize purchasing behavior. Visible shopping bags create FOMO. Long queues reinforce value perception. The presence of other shoppers serves as a powerful validation.

Or, like my father always says: "If a restaurant is catching flies, don't go in."

But where is the focus in retail analysis? Price elasticity. If you oversimplify the problem, you get suboptimal decisions.

As Dan Ariely says, people are "predictably irrational". Individual behaviors cascade into collective phenomena. Social influence spreads through networks, creating emergent properties that can't be reduced to individual actions. At the institutional level, cultural factors shape how people in an organization interpret and respond to information.

Understanding this isn't academic. It affects how organizations approach problems. Instead of accepting default explanations, we must systematically question them, asking not just "why?" but "what else might be a reason?" and "what are the root causes?". We shouldn't just accept the first answer but dig deeper to understand how different causes influence and amplify each other.

Time adds another dimension to this. Some causes operate immediately, while others create ripple effects over months or years. Understanding these temporal relationships helps us move beyond quick fixes and quick wins to create sustainable solutions.

I know, this can feel overwhelming. But it's not as complex as it seems once you start breaking it down. It's just more work. More rigor. But embracing that leads to better decisions. When we accept that simple explanations and solutions are no good, we begin to see system-level patterns that would otherwise remain hidden.

The Power of Alternative Explanations

Real value emerges when organizations open themselves to multiple causes. Think about the transport policy again. For decades, planners assumed people wanted faster transport to save time. But analysts finally considered that humans might optimize for opportunity rather than time savings. This single insight opened up entirely new solution spaces – from remote work policies to urban planning innovations in places like Copenhagen, which is optimized for walking and bicycles. You get ripple effects from these solutions - fewer traffic accidents, less stress, less pollution, and higher quality of life. Copenhagen ranked second in the prestigious Global Livability Index 2024, placing first on mobility infrastructure. And yet, it isn't optimized for "faster." Curious, eh?

Similar transformations occur across industries when organizations embrace multiple causation thinking. They discover new intervention points, identify overlooked opportunities, and design more robust solutions. They create sustainable competitive advantages that can't be easily copied because they're built on deep understanding.

A question I always ask product leaders in my trainings is: "How do you know?"

They always give a single cause, a single source, a single insight (if that). But we have to look at multiple potential causes to generate richer hypotheses and design better tests. We need deep insights, and we have to test those insights to see if they are correct.

Which leads to the question: how do we build an experimentation system that matches the sophistication of our causal thinking? In other words, how do we go from 90/10 to 50/50?

Building a Lightweight Experimentation System

Understanding multiple causes is only valuable if we can act on that understanding. Yet most organizations approach experimentation with the same process-heavy mindset that plagues their data collection efforts. It's time for a radical simplification.

The Problem with "Enterprise Experimentation"

The typical enterprise approach to testing new ideas reads like a parody of bureaucracy: dozens of governance forums to approve an experiment, lengthy pilot programs, proof-of-concept phases, steering committees, and endless stakeholder reviews. Most damaging is the misuse of the term "MVP" (Minimum Viable Product), which has been corrupted from its original meaning into "our first unfinished release that we're calling done."

This gets experimentation exactly backward. An MVP isn't a half-finished product – it's the smallest thing you can build to test a specific hypothesis. When Amazon first tested the market for Prime, they didn't build a logistics network. They offered two-day shipping to a small customer segment and manually upgraded their shipping selection at checkout. That's an MVP – a fast, simple test of a core assumption.

The Power of Small Experiments

Real experimentation should be constant, lightweight, and focused on learning. Instead of running three big pilots a year, teams should run dozens of small experiments a month.

When I talk about this in PM training, most participants stare in disbelief: dozens a month? They haven't even run half a dozen the whole year. How are they supposed to do that?

This shift requires three key components:

First, maintain an experimentation backlog. Just as development teams keep a product backlog, create a prioritized list of hypotheses to test. Each entry should follow a simple format (a rough sketch of such an entry follows the template below):

"We believe [action] will result in [outcome] because [assumption]."

"We'll know we're right when we see [metric]."

Second, focus on velocity over scale. A small experiment that teaches you something new in two days is worth more than a perfect experiment that takes six months. Amazon doesn't test new features with all their customers – they often start with a single zip code.

Third, measure your experimentation system itself. Actually track metrics like the ones below (see the tracking sketch after the list):

  • Experiment velocity (how many experiments per month?)
  • Learning velocity (how quickly do you get results?)
  • Implementation rate (how many learnings drive changes?)
  • Failed experiment rate (if it's too low, you're not taking enough risks)
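As a rough illustration of how little tooling this needs, here is a sketch that derives all four numbers from a plain list of experiment records. The field names and sample entries are assumptions chosen for the example, not a required schema.

```python
# Sketch: computing the four experimentation metrics from a simple log.
# Field names and the sample records below are illustrative assumptions.
from datetime import date

experiments = [
    {"started": date(2024, 5, 2),  "result_on": date(2024, 5, 6),
     "succeeded": False, "drove_change": True},
    {"started": date(2024, 5, 13), "result_on": date(2024, 5, 15),
     "succeeded": True,  "drove_change": True},
    {"started": date(2024, 5, 21), "result_on": date(2024, 5, 24),
     "succeeded": False, "drove_change": False},
]

n = len(experiments)
avg_days_to_result = sum((e["result_on"] - e["started"]).days for e in experiments) / n
implementation_rate = sum(e["drove_change"] for e in experiments) / n
failure_rate = sum(not e["succeeded"] for e in experiments) / n

print(f"Experiment velocity: {n} experiments this month")
print(f"Learning velocity: {avg_days_to_result:.1f} days to a result on average")
print(f"Implementation rate: {implementation_rate:.0%} of learnings drove a change")
print(f"Failed experiment rate: {failure_rate:.0%}")
```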

I always tell people to raise their hand if they track any of these. In the past 20+ trainings I have done with over 200 PMs so far, nobody has ever raised their hand.

Making Experiments Truly Minimal

The key to high-velocity experimentation is ruthless minimalism. For every experiment, ask:

"What's the smallest possible test that could prove us wrong?" To know that, we need to know what our critical assumption is. What is the thing we think we know to be true, but might not be? What are causes for this we aren't considering and what could we do to find them out? Mind you, these are the smallest things we could do. So think about a social media post, a call to a customer, an observation. Low cost, high impact activities.

When a retail team wanted to test if store greeters would increase sales, they didn't hire staff, design the greeting experience, etc. They just put one person at one store for two hours. Do we see a lift? Expand! Not getting expected results? Quickly move on.

This is how entrepreneurs outside of large corporations work. This approach requires changing how we think about statistical significance. Not every experiment needs 95% confidence. Not everything has to have a sample size of 100+ or a bloated experiment design. If a change has low cost and low risk, testing with a small sample size and accepting higher uncertainty is the right trade-off. But you actually need to f***ing test.
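To show what accepting higher uncertainty can look like in numbers, here is a hedged sketch using a quick Fisher's exact test on a tiny sample. The conversion counts and the 20% decision threshold are assumptions chosen for illustration; set the threshold to match the real cost and risk of your change.

```python
# Sketch: a deliberately loose check for a low-cost, low-risk change.
# The counts and the 0.2 threshold are illustrative assumptions, not a standard.
from scipy.stats import fisher_exact

# Hypothetical tiny test: 7/20 conversions with the change vs. 3/20 without.
table = [[7, 13],   # variant: converted, not converted
         [3, 17]]   # control: converted, not converted

_, p_value = fisher_exact(table, alternative="greater")

# With a cheap, reversible change, acting on p < 0.2 instead of p < 0.05 is a
# reasonable trade-off: expand the test if it looks promising, move on if not.
if p_value < 0.2:
    print(f"p = {p_value:.2f} -> promising, expand the test")
else:
    print(f"p = {p_value:.2f} -> no signal, move on")
```

The point isn't the statistical machinery; it's deciding in advance how much evidence is enough for a cheap, reversible change.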

Building Your Experimentation Muscle

Don't go crazy trying to transform the whole company. People are already tired of change and skeptical of the next consultant-driven hog being chased through the village. Don't make big announcements or initiate programs. At least not publicly.

Start small. Pick one team and one week. Set a goal of running three tiny experiments. Three experiments in one week is already more than what most teams are used to. Keep them as simple as possible. They might be things like:

  • Testing two different email subject lines
  • Changing the order of items on a form
  • Trying a new meeting format

The goal isn't to transform the team in a week. It's to build the muscle of continuous experimentation. If you've read Continuous Discovery Habits by Teresa Torres (if you haven't, go and get it), this is the counterpart approach to it. As teams get comfortable with small experiments, they naturally expand their scope and sophistication.

Over time, increase the frequency, going from 3 a week to 3 a day. Increase the variety: experiments on tech, marketing, pricing, etc. You can run experiments on everything. But don't fall into the trap of "throwing spaghetti at the wall" to see what sticks. That's not the idea here. The idea is to have an informed view of your hypotheses, then go and test them.

From Experiments to Systems

As your experimentation velocity increases, patterns will start to emerge. Your individual experiments will begin to connect to broader insights about customer behavior, organizational dynamics, and market opportunities.

At this point, the objection I get from PMs in training is:

"But Ahmet, we can't run experiments like that. We are a reputable company. We are regulated by [insert regulatory office here], and our brand can take a hit if..."

You get the gist. Here's my response: You're lucky if anyone actually notices that you've run an experiment. Nobody notices. Nobody cares. I'm not talking about things that can sink the ship. I'm talking about things that could be chalked up to human error. In fact, many interesting experiments have been just that: a mistake. Think penicillin. Or Teflon.

The other response to it is that experiments do not just test a single hypothesis but help to look for multiple causes. A failed experiment often teaches us more than a successful one because it forces us to question our assumptions about causation.

By the way, the hardest part isn't the experiments themselves – it's maintaining momentum. It's building an experimentation system. I found three practices helpful:

1) Create visible learning artifacts. After each experiment, capture what you learned in a simple, shareable format. I'm not talking glossy exec pptx. It should be quick n dirty. Make these lessons visible and easily accessible to everyone in the organization.

2) Celebrate fast failures. The team that runs ten experiments and finds nine things that don't work is doing more for the organization than the team that runs one "successful" pilot. As a leader, ask your team: "What have you tried and failed at today? What did you learn?"

( as a side note, when was the last time anyone asked you that? )

3) Link experiments to outcomes. Track how learnings from experiments influence business decisions and drive business results. This creates a virtuous cycle where experimentation comes to be seen as a critical business tool.

The goal isn't perfection. My own system constantly changes. I've run over a hundred experiments this year. The point is to make progress. Every experiment, no matter how small, moves us closer to understanding the true causes of business outcomes and making better decisions as a result.

Again, data gathering and analysis alone does not help in that.

No amount of statistical modeling and ML wizardry will give you the insights. You have to take what you assume is correct, then actually run experiments to see if you're right.

Most businesses "suck" at this. So simply being 10% better on this than your competitor will give you a massive edge. And it isn't hard to do, either.

Take Action: A Team-Level Guide

(Or How to Save $500,000 in Consulting Fees)

Let's be honest: if you took this problem to McKinsey or another top-tier consultancy, they'd charge you around $500,000 for a 12-week engagement. You'd get a beautiful 200-page PowerPoint deck, a transformation roadmap, and a recommendation to hire them for phase two. What you wouldn't get is actual change on the ground.

I know because I used to work with some of those big names. So, instead, let's talk about what your team can do next Monday morning.

Start With One Meeting. Pick your regular team meeting next week. Instead of the usual status updates, block 90 minutes for a different kind of discussion. Here's your agenda in only two questions:

1) What do we think we know about our [customers / product / process]?

2) How could we be wrong?

That's it. No frameworks, no consultants, no digital transformation required. Just honest questions about causation. A product team I worked with recently did this simple exercise and realized they'd built an entire feature roadmap based on a single stakeholder request from six months ago. Nobody knew why the request had come in, what it was really trying to solve, or why it was being executed.

Implementing it would have taken 3 months and cost $80K. After a 15-minute debate, they decided to kill it instead. There, I saved them $80K in 15 minutes. You're welcome.

Create your first learning loop. By Friday, run your first experiment. This isn't about perfect methodology. I'm not going to write about Wizard of Oz prototypes, A/B tests, and all kinds of other method porn. An experiment is a simple combination of "I think this is true" and "What can I do to quickly and cheaply test this?" That's it. No degree required.

One team I worked with tested a crucial pricing assumption by simply calling five customers and asking if they'd be interested in a different payment model. Then they asked if they could send a DocuSign to move to that model right away. Total cost: zero. Time invested: two hours. Learning: priceless. It can be that simple.

So, here is a simple structure for your next week:

  • Monday: Question your assumptions
  • Tuesday-Wednesday: Design a tiny experiment
  • Thursday: Run it
  • Friday: Learn and plan the next one

The Two-Week Milestone

By week two, you'll have run at least one experiment and learned something new. Now expand slightly. Create your experimentation backlog in whatever tool you already use - Trello, Asana, a Google Doc, it doesn't matter. Write down every "we think" statement you hear in meetings. Each one is a potential experiment.

A marketing team I know started this practice and discovered they had 47 assumptions about their customers' buying process (!). When they started testing, they found that out of their five experiments, three assumptions were completely wrong. A consultancy would have charged them $25,000 just to document these assumptions. The cost of those experiments? A total of $50 and 10 hours of work across the team. Their insights saved them $30K (plus any consulting fees they would have paid).

The One-Month Milestone

After a month of small experiments, patterns will emerge. You'll start seeing connections between different learnings. This is when you begin your learning repository - again, using whatever simple tool you prefer. Simply capture the things you've learned from your experiments.

One product team I worked with created a simple Word doc in their Teams. They called it "Things We Thought We Knew But Didn't." It became their most valuable strategic asset, saving them from repeatedly making the same assumption-based mistakes. One of the things they uncovered was pretty straightforward:

They had built an app that was available on Android and iOS. They thought they had to deliver both because... well, that's just what you do, right? But running three simple experiments gave them a key insight about their customers. Most (>95%) did not have an iPhone. They killed the iOS version, saving $40K over the year.

Three Months In

By month three - the time it would take a consulting team just to finish their initial assessment - you'll have:

  • Run 12+ experiments
  • Built a clear picture of what drives your outcomes
  • Developed an experimentation muscle
  • Created a learning repository
  • Started making genuinely data-informed decisions

Total cost? Maybe a few thousand dollars in team time, a few hundred dollars in tooling. Total value? Well, one team I worked with discovered they were about to invest $2 million in solving the wrong problem. They found this out through a two-day experiment that cost nothing but a few hours and $100 for an online tool.

Common Obstacles (And How to Overcome Them)

Look, I'm not exaggerating when I say that experiments are hugely underrated and extremely underutilized. You can spot some of the assumptions very quickly. I recently ran a product review for a customer in financial services and spotted a potentially critical flaw in their assumptions within the first 5 minutes. It was a $60K mistake. It took half an hour and two calls to check whether I was right. I was; they changed it and saved $60K in under an hour.

With any objections you face, think about the potential upside you are gaining. The most common ones I keep hearing are these:

"We need approval for experiments."

No, you really really don't. I don't care how regulated you are. I worked with energy providers, aerospace, pharma, banks. These are some of the most regulated industries out there. Yet, you can (and should) run experiments there. You do need approval for changes to production systems. But experiments are about learning. If someone questions this, ask them if you need approval to talk to customers or analyze existing data.

"We don't have time for this."

You don't have time not to do this. One hour per week spent testing assumptions will save you months of building the wrong things or solving the wrong problems. Individual contributors have a lot of anxiety around this. They say that they can't push back against stakeholders or an executive who told them to do something.

I see this all the time. I recently got served ketchup, salt and pepper at a restaurant where I ordered an ice cream. Maybe they eat those differently there, but when I asked about it, the waiter simply said: "Oh, management insists we put those there for every customer." That's what you get in most organizations. Simple and thoughtless execution of orders, never mind the actual customer problem or need.

"We need proper research methodology."

No, you don't, unless you are in academia. You are trying to get a rough understanding of the right thing. That is worth more than a perfect understanding of the wrong thing. Start small, learn fast, refine later. You'll learn methods and frameworks over time. But don't start with them.

That's what seriously p... I mean, irritates me... about all the product management influencers and gurus out there. Learn this framework, learn that method. Have you ever worked in a real company? Methods and tools are not the problem people have. Their problem is not being able to do the thing because it's not a practice. My advice is to get into the habit first.

"Our organization isn't ready for this."

Nobody is ever ready for anything. You aren't a unique snowflake. But you are right: Your organization isn't ready for a massive transformation program. The last 3 programs failed anyway. It is ready for a single team doing something new about their customers or process.

You don't need: new tools, budget approval, a transformation program, perfect methods, frameworks, senior executive sponsorship, complete data, or external consultants (like me).

You just need curiosity and willingness to learn.

There, I just saved you $500K in McKinsey fees. You're welcome. If you still need a 200-slide glossy deck, send me $500 via PayPal and I'll shoot something over. ;-)

What to do now

Right now, open your calendar. Block 90 minutes next week for that first assumption-questioning session. Then send this article to three teammates and ask them ONE question: "What do we think we know about our [area of responsibility], and should we test that assumption?"

That's it. No consulting fees required. No transformation program needed. Just honest questions and small experiments that lead to real learning.

By the way, if you've read this far, you clearly care about making better decisions. Drop me a line here on LinkedIn and let me know what you discover. Your learnings might save another team from making the same assumption-based mistakes.

And if you must absolutely spend some money on external people like me... I have a big announcement coming up this week. It'll save you $50K within the first week.

Follow me here to learn about it.
