AI, Sense Making, and the Leadership Imperative
Frank Smits, MSc, MA
International Change & Transformation Expert | IT-enabled Global Transformation | Program Management Specialist | Multilingual Communicator
Is AI an ‘incuriosity engine’?
In his recent piece on Medium (€), Harris Sockel explores the risks of artificial intelligence becoming an incuriosity engine, warning that AI’s provision of instant, ready-made answers could erode our capacity for deep inquiry. He highlights the necessity of curiosity in knowledge work, arguing that intelligence is not simply about accumulating knowledge but about asking the right questions.
This challenge is particularly relevant for business leaders. As AI becomes increasingly embedded in organisational decision-making, leaders must ensure it enhances, rather than diminishes, human sense making. If AI is used merely as an answer-generator, we risk developing organisations that function mechanically rather than dynamically—operating on assumptions rather than actively engaging with complexity.
To navigate this terrain, I propose framing AI adoption through the lens of Complex Responsive Processes of Relating, a perspective developed by the late Ralph Stacey and his colleagues. This approach, which challenges the conventional view of organisations as predictable systems, offers a powerful way to understand how AI should be integrated into human decision-making. Instead of treating AI as a replacement for human insight, leaders must position it as a tool that enhances relational sense making.
The Challenge of AI and Leadership
AI is often framed as a means to optimise efficiency, reduce complexity, and provide quick solutions. Yet, complex organisational problems are not mechanical puzzles to be solved with a single correct answer. They are emergent and fluid, requiring continuous interaction, interpretation, and adaptation.
AI excels at pattern recognition, summarisation, and processing large volumes of data. However, it lacks the fundamental capacity for *sense making*—the human ability to navigate uncertainty through shared meaning-making, intuition, and relational engagement. AI does not question the data it processes, nor does it engage in dialogue about the implications of its outputs. It is only through human interpretation that AI’s results gain meaning.
For business leaders, this distinction is crucial. If AI is seen as a definitive oracle, organisations risk stagnation, outsourcing intellectual curiosity to an algorithm. Leaders must instead cultivate an organisational culture where AI’s outputs are subject to critical discussion, inquiry, and contextualisation.
Complex Responsive Processes of Relating: A Sense Making Lens
The *Complex Responsive Processes of Relating* (CRPR) perspective rejects the idea of organisations as static, hierarchical systems that can be controlled through centralised decision-making. Instead, it views them as emergent networks of conversations, relationships, and evolving patterns of meaning.
Stacey argued that organisations are continually recreated through everyday interactions. Meaning is not fixed but is shaped dynamically through dialogue, power relations, and the ongoing negotiation of expectations. If we apply this lens to AI, it becomes clear that AI’s role should not be to replace human judgment but to enrich the interactive process of meaning-making. A good narrative from Ralph himself, albeit with poor sound quality, can be seen here; it is worth a watch if you have the time.
Leaders must therefore consider AI not as a decision-making machine but as a conversational partner—one that provides input to be examined, contested, and interpreted within the wider relational context of the organisation.
Avoiding the Pitfalls of AI-Driven Incuriosity
To ensure AI enhances rather than erodes sense making, leaders should focus on three key areas.
1. Encouraging Active Engagement with AI Outputs
Instead of passively accepting AI-generated insights, organisations must develop practices that encourage employees to interrogate and contextualise these outputs: questioning the assumptions behind a recommendation, examining the data it rests on, and discussing its implications before acting.
Example
A European supply chain company implemented AI to optimise inventory levels. Initially, the AI recommended reducing stock in certain regions, leading to unexpected shortages. When employees investigated, they realised the AI had relied on pre-pandemic purchasing patterns, failing to account for shifts in consumer behaviour. By actively engaging with AI outputs rather than treating them as definitive, the company adjusted its strategy and improved decision-making.
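In practice, this kind of scrutiny can start with something very simple: checking whether the conditions a model was trained under still resemble current reality. A minimal sketch of such a check, using hypothetical demand figures and a hypothetical tolerance threshold (not the company's actual system):

```python
# Minimal sketch: flag when recent demand no longer resembles the historical
# data an inventory model was trained on. All numbers are illustrative.

def drift_ratio(training_demand, recent_demand):
    """Ratio of recent average demand to training-period average demand.

    A ratio far from 1.0 suggests the model's training data may be stale.
    """
    recent_avg = sum(recent_demand) / len(recent_demand)
    training_avg = sum(training_demand) / len(training_demand)
    return recent_avg / training_avg

# Pre-pandemic weekly unit demand vs. recent weeks (hypothetical).
training = [100, 95, 105, 98, 102]
recent = [140, 150, 135, 160, 155]

ratio = drift_ratio(training, recent)
if abs(ratio - 1.0) > 0.2:  # hypothetical 20% tolerance
    print(f"Recent demand is {ratio:.0%} of the training baseline; "
          "review the model's assumptions before acting on its output.")
```

The point is not the arithmetic but the habit: a routine, human-initiated comparison like this is what surfaces the kind of stale-assumption problem the supply chain company ran into.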
2. Fostering Organisational Curiosity
A culture of curiosity is the antidote to AI-driven complacency. Leaders must encourage teams to probe AI’s groupings and assumptions, reward the questioning of anomalies, and treat model outputs as starting points for inquiry rather than conclusions.
Example
An Australian financial services firm found that its AI-driven customer segmentation model was underperforming. Rather than blindly trusting the AI’s groupings, the company’s data analysts examined anomalies. They discovered that cultural differences in spending habits were being overlooked, which prompted refinements that made the AI more effective. This process of human curiosity correcting AI-driven assumptions reinforced the importance of active engagement.
3. Balancing AI Efficiency with Emergent Decision-Making
One of the most seductive aspects of AI is its promise of efficiency. Yet, in complex systems, efficiency should not be pursued at the expense of adaptability. AI can generate insights quickly, but it cannot account for the nuances of organisational life that emerge in real time.
Leaders must resist the temptation to automate decisions that require human discretion. Instead, they should keep AI in a supporting role, ensuring that people retain final judgment where context and nuance matter.
Example
A South American healthcare provider used AI to assist in patient diagnoses. While AI improved speed and accuracy for common conditions, frontline clinicians noticed that rare cases were often misclassified. Instead of fully automating diagnosis, the organisation integrated AI as a support tool, ensuring human doctors remained at the centre of complex decision-making. This hybrid approach prevented AI from diminishing professional expertise.
Conclusion: Use AI as a Tool to Aid Sense Making, Not a Replacement for It
The risks of AI replacing human curiosity and relational engagement are real. But these risks are not inherent to AI itself—they depend on how we use it. If leaders approach AI as an absolute answer-provider, they will cultivate an incurious, mechanistic organisation. If, however, they treat AI as a catalyst for richer dialogue and deeper inquiry, they can harness its power without compromising the human qualities that make organisations thrive.
Business leaders must therefore adopt AI with a critical, relational mindset—ensuring that technology serves human sense making rather than subverting it. The challenge is not merely technical but cultural: organisations must cultivate curiosity, encourage dialogue, and resist the allure of easy answers. Only then can AI truly enhance, rather than diminish, the complexity and richness of organisational life.
“AI is great at answering questions, but terrible at knowing which ones matter. That’s still our job—at least until it starts complaining about meetings too.” (Quote by the author of the article)