Will AI sharpen or dull our minds?
Image: Exponential View via DALL-E

I invited one of the leading experts working at the intersection of organisational design and AI-augmented collective intelligence, Exponential View member Gianni Giacomelli, to answer this question: Will AI dull or sharpen our minds?

Gianni’s career spans over 25 years in innovation leadership positions, including C-level at large tech and professional services firms — as well as in academic research and development with renowned labs including the MIT Center for Collective Intelligence.

You’re in for a treat — we opened Part 1 to all readers of Exponential View, so feel free to forward this post to your colleagues and friends.


Will AI sharpen or dull our minds?

Part 1: The challenge
By Gianni Giacomelli

AI is percolating into our economy and society, and it surrounds humans in a way it never did before. It has become ambient. Is that good for our intelligence?

Some highlight the risks. The FT’s Tim Harford recently asked “will we be ready” to assist the AI when it needs our judgement to make a decision? Or will we get stuck in the “paradox of automation,” where humans lose the ability to intervene when AI systems need us to? Some scenarios are benign, but many others are existential: like pilots over-relying on automated flight systems only to crash the plane when the computer goes dark (see, for example, the tragic case of Air France 447).

In this first part of my commentary, I will break the question down into two:

  1. What is the risk for the individual? (a) The risk of becoming less attentive, less critical, less creative, less proactive? (b) The risk of not developing some foundational skills anymore? Is AI going to deprive us of some learning by doing?
  2. What is the risk for our collective intelligence? This is not about the average or sum total of our individual intelligences but rather the emergent intelligence capabilities of the structures made of networks of people and assets (including machines) that behave collectively in ways that show intelligence above and beyond that of the individual components. Is that going to improve, or worsen?

Individual risks, and rewards

It stands to reason that some of the downside risk is real. But is it inevitable? And what is the upside?

In the past, the net effect of introducing new technology has been economically (and typically socially) positive in the long run. For sure, there can be huge volatility, and indeed dislocation, that sometimes lasts a long time. In exponential scenarios, with potential systemic instability, the past may not automatically be a good predictor of the future.

The research doesn’t seem to be fully settled, but there is some, and we can frame the problem based on a few examples:

  • The invention of agriculture didn’t make individual people smarter than hunter-gatherers. Some research even indicates that the size of our individual brain might have shrunk as our collective one, emerging from our societies’ networks, grew. BUT: without agriculture, the world’s society would likely be more primitive, and most of us wouldn’t want that world today.
  • The introduction of the printing press might have reduced most people’s ability to recite books by heart, and even contributed to the disappearance of jobs such as professional storytellers. But the effect on individuals (printed materials aid cognition) and societies (knowledge management) was a net positive.
  • London taxi drivers, after the introduction of GPS, didn’t retain the same quality of spatial reasoning (and even their brain structure changed). BUT: did that make them worse taxi drivers? It seems to have helped less experienced drivers become more effective.
  • Typewriting was bad for handwriting (and handwriting is likely related to some level of creativity), BUT that was more than offset by other gains. By some accounts, typewriting saved forty minutes out of every hour, compared with the pen. Automated spelling correction is increasingly eroding our ability to spell-check things thoroughly on our own, BUT it allows us to write more.
  • A study on the use of robots to help baseball umpires shows that the combined human-machine duo improves performance over humans alone, especially for lower-skilled humans. Humans who stop receiving assistance after using the robots don’t seem able to return to their original skill levels. BUT: the introduction of robots also makes the game less acrimonious, with fewer disputes and ejections. And a recent study on the use of computer vision in tennis showed that human umpires exercise better judgement when the technology is deployed alongside them.
  • Even where humans lost the battle, as in Go, evidence shows that the machines’ superiority pushed up the quality of the average professional Go player. After all, a game is supposed to make us better - and in this case, AI competition did.

What about not developing some foundational skills?

Is AI depriving us of learning by doing?

How do we create stepping stones in some professions when machines do a lot of the entry-level work?

Consider modern finance, legal, and consulting professionals who haven’t developed, respectively, the algebra, writing, or handwritten storytelling skills of their predecessors. Does that make them less intelligent, or did it rather force them to develop skills that build on those machines, and to spend more time on other tasks, such as interfacing with their stakeholders?

One transferable example comes from an unexpected place. About 10 years ago, there was a big concern in the finance and accounting community that finance operations jobs were increasingly centralised in low-cost locations or outsourced, which meant that future Chief Financial Officers (CFOs) wouldn’t grow up professionally by doing low-level work and then moving up. Ten years later, we don’t talk about that so much. For sure, some of the old skills, like the ability to spot mistakes in accounting systems, might have dwindled. But exception management, including its data mining and analytics component, rather than the daily running of operations, is now where finance executives get trained for the top job. And indeed, they learn how to have separate organisations run industrialised operations - as if they had their own supply chain. Aspiring CFOs also have plenty of room for other capabilities: crafting and executing strategy, working on sustainability, and partnering more closely with their peers and their organisations in running the business — as well as, of course, learning how to use advanced analytics and AI. Those who have embraced the change now thrive.

Humans have historically adapted to the introduction of new technological tools by developing new capabilities that complement those tools and push productivity - writ large - higher. At least they have so far, and in the long run.

Image via DALL-E

What about the collective brain?

The collective intelligence side of the story shouldn’t be conflated with the previous one. From the printing press to the telephone, from email to the internet, and from mobile phones to Google, the introduction of collective-intelligence-enhancing architecture has historically enabled an explosion of collaboration and substantially reduced the time to access new knowledge. As a result, our knowledge graphs, with both content and people as nodes, have changed: new relationships have formed, and their edges can now connect more ideas than ever. At parity of individual intelligence, on balance, that has made us - and certainly could make us - collectively smarter.

At the same time, algorithmic curation optimised on human tendencies has possibly deteriorated our ability to function cohesively as a society (see the polarisation of social media discourse, and the at least partially related social polarisation - especially in the US), and likely impaired the resulting decision-making (political governance, or the lack thereof, comes to mind). The interplay between our godlike technology, Palaeolithic brains and mediaeval institutions* might very well not lead to a net-higher collective intelligence today, at least in the short run. There is a real risk of dulling our supermind, right here. We will know even more after the many elections of this year.

Enter generative AI, with its alluring confidence and its ability to spin gratifying new artefacts in seconds, effortlessly. There is a real risk that many of us, too often, will get hypnotised, lower our guard and not exercise quality control. Some evidence points to humans “falling asleep at the wheel”: when the LLM made mistakes, BCG consultants with access to the tool were 19 percentage points more likely to produce incorrect solutions. And the range of ideas generative AI produces out of the box is narrower than what humans, collectively, would produce. Microsoft recently published a good literature review of the dangers of overreliance on AI.

It is ours to shape

So the risks are real, but they don’t seem unavoidable. In the next essay, I will explore the solutions available to us today, and some frameworks to keep developing them as capabilities - human and technological - change.

In Part 2, I will talk about how to scale humans in the loop, how to support people at different levels of capability, and how to design smarter networks of humans and machines so that we get collectively - not just individually - smarter.

PART 2 IS AVAILABLE HERE


About Gianni Giacomelli: My teams and I envision and design organisations, services, and processes that use AI and other digital tools to transform work: from innovation to operations, from the front to the back office. To do so, I employ an approach derived from our MIT work on collective intelligence as an organisational design principle, augmented by AI and other digital technologies. For decades, I have led innovation efforts where digital technology attacks complex business problems and their underlying processes and organisations. My career spans over 25 years in innovation leadership positions, including C-level roles at large, stock-market-listed firms in tech and professional services, and collaboration with world-leading academics.


* This quote “The real problem of humanity is the following: We have Paleolithic emotions, medieval institutions, and godlike technology” is attributed to American sociobiologist Edward O. Wilson.

Marshall Kirkpatrick

Research and research systems for marketing and strategy in green tech and sustainability.

1y

I feel like this post captures a lot of the best spirit of Exponential View and reminds me of your whole book Azeem: it points out the dramatic consequences good and bad of new technologies, calls on us as a society to account for the costs better than we have in the past, and points to the empowering opportunities. Love it, looking forward to reading part 2!

Ivan McAdam O'Connell

Freedom Lifestyle Designer: From bank COO to helping people & businesses unlock new opportunities

1y

Fascinating discussion - and I appreciate the attempt at a reasoned answer to this key question. Reading these cases, I see the outline of the future, and the changes - good and bad. Change will happen regardless of our judgements, and there is reason to think it will be more good than bad - either way, it is wise to be prepared.

Sheri G.

Heart-Centred Leadership / L&D / Communications Specialist ERS | Risk Management, Cert NLP Master Pract

1y

AI is just a tool, and like any tool, it depends on how you use it. My own experience with AI (ChatGPT) indicates that just like computers themselves - GIGO! Anyone who has done any work in communications knows that the person with the questions is the one who actually steers and influences the conversation, NOT the person with all the answers. So to get the most from AI, we all need to be more mindful of the questions we ask and develop better skills for asking well-formed questions.

Ashutosh Kumar Sah

DevOps Engineer @CoffeeBeans | Ex - Kredifi | Ex - Teqfocus | Microsoft Azure Certified: Az-900, Ai -900, Dp-900 | Oracle cloud infrastructure certified fundamental 2022 | Aviatrix certified DevOps cloud engineer |

1y

Thanks for sharing!

Yasmin Crowther

Author and Chief Insight Officer, Polecat

1y

I really appreciate this article. I am also really interested to think about how AI potentially impacts the youngest generations. We now know how challenging social media has been in terms of mental health, and I think we are yet to scratch the surface of how children are getting immersed in gaming or even more dangerous online worlds at an ever younger age, and how addictive that is designed to be. Adults with their feet in decades when AI was sci-fi are in a better position to have a perspective on it today, and have built critical thinking muscles in a different world. Kids love screens and what tech has to offer, and sometimes that's great - I'm thinking of the reach of Greta Thunberg as a young teenager - but I feel a bit lost at how to think about the AI world coming towards kids who are still learning to do joined up handwriting, let alone sift and frame ideas, sources and facts.
