Will AI sharpen or dull our minds?
I invited one of the leading experts working at the intersection of organisational design and AI-augmented collective intelligence, Exponential View member Gianni Giacomelli, to answer this question: Will AI dull or sharpen our minds?
Gianni’s career spans over 25 years in innovation leadership positions, including C-level at large tech and professional services firms — as well as in academic research and development with renowned labs including the MIT Center for Collective Intelligence.
You’re in for a treat — we opened Part 1 to all readers of Exponential View, so feel free to forward this post to your colleagues and friends.
Will AI sharpen or dull our minds?
Part 1: The challenge
By Gianni Giacomelli
AI is percolating into our economy and society, and it surrounds humans in a way it never did before. It has become ambient. Is that good for our intelligence?
Some highlight the risks. The FT’s Tim Harford recently asked “will we be ready” to assist the AI when it needs our judgement to make a decision? Or will we get stuck in the “paradox of automation,” where humans lose the ability to intervene when AI systems need us to? Some scenarios are benign, but many others are existential: like pilots over-relying on automated flight systems only to crash the plane when the computer goes dark (see, for example, the tragic case of Air France 447).
In this first part of my commentary, I will break the question down into two parts: the individual and the collective.
Individual risks and rewards
It stands to reason that some of the downside risk is real. But is it inevitable? And what is the upside?
Historically, the net effect of introducing new technology has been economically (and typically socially) positive in the long run. For sure, there can be huge volatility, and indeed dislocation, that sometimes lasts a long time. In exponential scenarios, with potential systemic instability, the past may not automatically be a good predictor of the future.
The research doesn’t seem to be fully settled, but there is some, and we can frame the problem based on a few examples:
What about not developing some foundational skills?
Is AI depriving us of learning by doing?
How do we create stepping stones in some professions when machines do a lot of the entry-level work?
Consider modern finance, legal, and consulting professionals who haven't developed, respectively, the algebra, writing, or handwritten storytelling skills of their predecessors. Does that make them less intelligent, or did it instead push them to develop skills that build on those machines, and to spend more time on other tasks, such as interfacing with their stakeholders?
One transferable example comes from an unexpected place. About 10 years ago, there was real concern in the finance and accounting community that finance operations jobs were increasingly centralised in low-cost locations or outsourced, meaning that future Chief Financial Officers (CFOs) wouldn't grow up professionally by doing low-level work and then moving up. Ten years later, we don't talk about that so much. For sure, some of the old skills, like the ability to spot mistakes in accounting systems, might have dwindled. Instead, exception management, including its data mining and analytics components, rather than the daily running of operations, is where finance executives get trained for the top job. And indeed, they now learn how to have separate organisations run industrialised operations, as if they had their own supply chain. Aspiring CFOs also have plenty of room for other capabilities: crafting and executing strategy, sustainability, and partnering more closely with their peers and their organisations in running the business, as well as, of course, learning how to use advanced analytics and AI. Those who have embraced the change now thrive.
Humans have historically adapted to the introduction of new technological tools by developing capabilities that complement those tools and push productivity, writ large, higher. At least, they have done so far, in the long run.
What about the collective brain?
The collective intelligence side of the story shouldn't be conflated with the individual one. From the printing press to the telephone, from email to the internet, and from mobile phones to Google, the introduction of collective-intelligence-enhancing architecture has historically enabled an explosion of collaboration and substantially reduced the time needed to access new knowledge. As a result, our knowledge graphs, with both content and people as nodes, have changed: their edges now connect more ideas than ever. At parity of individual intelligence, on balance, that has made us, and certainly could make us, collectively smarter.
At the same time, algorithmic curation optimised on human tendencies has possibly deteriorated our ability to function cohesively as a society (see social media discourse polarisation, and at least partially related social polarisation, especially in the US), and likely impaired the resulting decision-making (political governance, or lack thereof, comes to mind). The interplay between our godlike technology, Palaeolithic brains and mediaeval institutions* might very well not lead to a net-higher collective intelligence today, at least in the short run. There is a real risk of dulling our supermind, right here. We will know even more after the many elections of this year.
Enter generative AI, with its alluring confidence and its ability to spin gratifying new artefacts in seconds, effortlessly. There is a real risk that many of us, too often, will get hypnotised, lower our guard and fail to exercise quality control. Some evidence points to humans "falling asleep at the wheel": when the LLM made mistakes, BCG consultants with access to the tool were 19 percentage points more likely to produce incorrect solutions. And the range of ideas generative AI produces out of the box is not as good as what humans, collectively, would produce. Microsoft recently published a good literature review of the dangers of overreliance on AI.
It is ours to shape
So the risks are real, but they don't seem unavoidable. In the next essay, I will explore the solutions available to us today, and some frameworks to keep developing them as capabilities, human and technological, change.
In Part 2, I will talk about how to scale humans in the loop, how to support people at different levels of capability, and how to design smarter networks of humans and machines so that we get collectively, not just individually, smarter.
About Gianni Giacomelli: My teams and I envision and design organisations, services, and processes that use AI and other digital tools to transform work: from innovation to operations, from the front to the back office. To do so, I employ an approach derived from our MIT work on collective intelligence as an organisational design principle, augmented by AI and other digital technologies. For decades, I have led innovation efforts where digital technology attacks complex business problems and their underlying processes and organisations. My career spans over 25 years in innovation leadership positions, including C-level roles in large, stock-market-listed firms in tech and professional services, and collaboration with world-leading academics.
* This quote “The real problem of humanity is the following: We have Paleolithic emotions, medieval institutions, and godlike technology” is attributed to American sociobiologist Edward O. Wilson.
Research and research systems for marketing and strategy in green tech and sustainability:
I feel like this post captures a lot of the best spirit of Exponential View and reminds me of your whole book, Azeem: it points out the dramatic consequences, good and bad, of new technologies, calls on us as a society to account for the costs better than we have in the past, and points to the empowering opportunities. Love it, looking forward to reading Part 2!
Freedom Lifestyle Designer: From bank COO to helping people & businesses unlock new opportunities:
Fascinating discussion, and I appreciate the attempt at a reasoned answer to this key question. Reading these cases, I see the outline of the future and the changes, good and bad. Change will happen regardless of our judgements, and there is reason to think it will be more good than bad; either way, it is wise to be prepared.
Heart-Centred Leadership / L&D / Communications Specialist ERS | Risk Management, Cert NLP Master Pract:
AI is just a tool, and like any tool, it depends on how you use it. My own experience with AI (ChatGPT) indicates that, just like computers themselves: GIGO! Anyone who has done any work in communications knows that the person with the questions is the one who actually steers and influences the conversation, NOT the person with all the answers. So to get the most from AI, we all need to be more mindful of the questions we ask and develop better skills for asking well-formed questions.
DevOps Engineer @CoffeeBeans | Ex - Kredifi | Ex - Teqfocus | Microsoft Azure Certified: Az-900, Ai-900, Dp-900 | Oracle Cloud Infrastructure Certified Foundations 2022 | Aviatrix Certified DevOps Cloud Engineer:
Thanks for sharing!
Author and Chief Insight Officer, Polecat:
I really appreciate this article. I am also really interested to think about how AI potentially impacts the youngest generations. We now know how challenging social media has been in terms of mental health, and I think we are yet to scratch the surface of how children are getting immersed in gaming or even more dangerous online worlds at an ever younger age, and how addictive that is designed to be. Adults with their feet in decades when AI was sci-fi are in a better position to have a perspective on it today, and have built critical thinking muscles in a different world. Kids love screens and what tech has to offer, and sometimes that's great - I'm thinking of the reach of Greta Thunberg as a young teenager - but I feel a bit lost at how to think about the AI world coming towards kids who are still learning to do joined-up handwriting, let alone sift and frame ideas, sources and facts.