What to learn in the age of AI

The capabilities of new technologies improve daily. Our brains and skills largely don't, or at least not yet. Neither do our job designs, leaving many people feeling exposed. Charts like the one below on Generative AI's impact (and repeated announcements from companies like Klarna) sound everyone's alarm bells.

Via Citi Global Insights

In the "tech + process + people" equation, the people side is possibly the weakest right now. That gap is causing anxiety, and clear answers aren't always forthcoming.

It is no surprise, then, that most of us feel some urge to make ourselves (and our kids) resilient to the change caused by AI, particularly Generative AI's sudden surge. What is particularly problematic is that, alongside the plethora of claims hinting at doomsday scenarios (and fueling some degree of despondency), we also hear too much wishful thinking and appeasement: "Stay close to what makes us human, things like empathy and creativity." The issue is that the average person's empathy, creativity, or perception and manifestation of many other cognitive or emotional traits are, on an average day, often no better than today's machines', let alone tomorrow's. That is particularly true when AI is embedded into well-designed processes that automate or augment specific tasks alongside humans. This doesn't mean that all empathy and creativity work can technically and economically be automated. But it does mean this is no longer safe, human-only ground.

Two datasets from recent McKinsey research (for Europe, but much of it translates elsewhere) are worth looking at. If you believe you still have many, possibly significant, career changes ahead of you, the first chart is interesting. It shows what types of skills the economy will continue to absorb, given the change in jobs available to people. For instance, data analysis will become easier for machines to do, as will literacy and communications work - and both will therefore require fewer human hours.

The following analysis is based on a survey. The downside of such an analysis is that today's executives have not yet developed a strong understanding of what AI can do, and certainly not of what it will be able to do. Still, it is useful because leaders and managers focus on existing jobs and may collectively highlight, at least directionally, the incremental steps workers can take. The skills in the top-right box are expected to be in the most demand both now and over the next five years. The ones below the red diagonal line (which I added) are those that could become more important in the future than they are today.

While there are some differences compared to the previous chart (for instance, data analytics and entrepreneurship), the expected reduction of importance for skills like basic IT, basic data input, equipment operations, basic literacy, and gross motor capabilities is interesting. Similar, though slightly less pronounced, is the dimming of prospects for technician skills, equipment repair, and even general leadership.

Source: McKinsey, 2024

Despite their merits, neither of these analyses gives a full answer, showing the gap in understanding that still exists - a gap that we need to close to provide proper guidance to workers. While a proper answer is being developed and will likely continue evolving, the following principles can guide our individual and collective actions.

Don't try to outrun machines

For historical reasons, much of our training (and, unfortunately, job design) treats us as biological machines. Rote, repetitive learning focused on accumulating notions predates the time when knowledge became easily accessible to those who knew where and how to look. That type of learning amounts to little more than mental gymnastics and often doesn't make us smarter than the machines.

Generally, we shouldn't try to outrun machines at the things they do, or will do, well - for instance, recalling specific notions in isolation. That race can't be won. The same applies to many surprising areas, including some types of creativity and empathy. For instance, we already know that AI machines are better divergent thinkers than humans in environments where the context is widely available in the training sets of the AI models - for example, coming up with basic consumer product or service ideas. For topics where context is extremely important, such as complicated business strategies or more niche applications, machines will increasingly be fed the right content as part of creativity workflows, improving their performance.

In numerous areas, AI might become better than most people and, in some areas, all people. What machines lack in terms of representation and understanding of how the world really works, they often compensate through other means, including brute force, especially when paired up with humans who can point them at the right things and help them filter and recombine their own output.

We humans need to get going now.

Today, tomorrow, and after tomorrow

There will be some stability in the short term (less than three years), and even longer in sectors that continue using legacy processes and systems because they can't change or want to protect workers. That may be unsustainable economically, and it will also mask the underlying shift, which might prevent those organizations from helping their workers learn. That strategy can lead to a "termination shock" when those organizations can no longer buffer workers from the new reality, and the time left to adapt is too short. In the longer term, say 5-20 years out (well before most of us will retire), AI augmentation will be the default choice, and the impact of automation will go deeper wherever it falls within the frontier of the operationally (technologically and process-wise) possible.

I make no predictions on what happens beyond that horizon. A mere extrapolation of the current trends already takes us into very different territories and misses the likely, sudden impact of breakthroughs—say, the ability to run orders of magnitude more computation because of energy-production discontinuities or computational efficiency.

Understanding what we do before deciding what we need to learn

Let's first try to understand what our work really is. A lot of what we do as human professionals can be broken down into three buckets:

Understanding and shaping the "why" of the work. That means forming a clear and actionable view of the reasons we need to summon organizational resources to do something. That is arguably the job of most senior executives, but it also applies to frontline managers and, increasingly, to decentralized and non-hierarchical environments. Doing this well requires pattern recognition and a continuous sensing of the environment. Formalizing these processes isn't trivial: it requires understanding the interrelations between things in the world and, therefore, a representation of reality that transcends the semantic reasoning that AI models use. It also requires continuous filtering of irrelevant information. Machines can increasingly complement humans in this process (see the BCG/HBS research). For example, they can help us evaluate priorities and scenarios and might be able to do some of that autonomously, but it isn't clear how quickly they will become reliable at it, or at what cost.

Identifying, shaping, and syndicating the "what." This is about matching problems to be solved with the categories of solutions available in the case of known-knowns (defined problems with defined solutions—e.g., how to run a product innovation workshop) or deciding that the problem belongs to the unknown-unknowns category (poorly defined problems for which solutions are not evident or may not even be easy to classify—for instance, how to make people learn for AI's age!). These processes require pattern recognition, including an intuitive understanding of what an organization can tolerate in its change management. Machines can already complement humans here, but humans' ability to think symbolically, with principled representations of the world (e.g., through theories and frameworks), is advantageous.

Identifying, shaping, syndicating, and implementing the "how." Machines are becoming increasingly good and fast at finding solutions to well-defined problems. Here, once more, what they lack in abstracting and using a symbolic representation of reality they compensate for with their brute-force ability to connect dots in the semantic space, finding correlations between pieces of knowledge that humans have structured in their language. When productively paired with humans, they can help scan a broader horizon of possibilities. This might also mean creating change management plans, where machines can simulate various stakeholders' reactions and help devise personalized interventions. Or it might mean helping humans keep tabs on the change process through more rigorous project management, or by detecting signals across enterprise communication channels.

So, what should I learn? The rise of "augmented thinking"

In this new world, our role becomes more of an orchestrator, a manager, and a strategist. Much of our work will be on the why and the what, and much of our "how" work will be human-in-the-loop quality control.

The tools will do a lot of the heavy lifting and act as an army of indefatigable interns (increasingly good at approximating experts' capability levels). That means asking the right questions, including those that lead machines and others to ask you and your networks questions; critiquing questions and answers, individually and collectively; getting to the right decisions, especially for complex (not necessarily complicated, which machines can tackle more easily) matters; and seeing the patterns and behaviors of systems and using them to guide your and your organization's efforts.

What skills are needed for that? The following list is neither mutually exclusive nor collectively exhaustive, and it is likely no more than directionally correct, but hopefully it goes well beyond wishful thinking. In the short term:

  • Critical thinking and its application to framing questions well, critiquing and enhancing answers, general problem-solving and creativity, and their AI-augmented use. Humans will still direct many of AI's problem-solving efforts for the foreseeable future. But we need to be good at it, and to know how to use machines for it, if we want the combination of us and them to be better than them in isolation
  • People (and AI labor) leadership and management. This includes the classic and ever more important parts of the curriculum - which are also about building good relationships through empathy. But it also includes "network leadership" (the ability to work human networks effectively), modern collaboration tools, human-machine interaction basics, and keeping tabs on what machines can do and how to work with them - and helping teams do the same. Beyond the clinical and "economic" view of things, we had better heed the importance of mental health and - for leaders - the value of bringing joy into the design and the lived experience of work
  • Systems thinking (not IT systems) and the dynamics of large systems that lead to collective intelligence - as that is the architecture of the future. This includes classic skills like social influence but augments them with the requisite understanding of AI-augmented organizational design
  • Leading and managing the self: individual resilience, adaptiveness, and the like, including adaptiveness in learning, as things will invariably change. Part of this is metacognition: learning how to learn, constantly. Part of it is mental health.
  • The basics of digital technology. This is not about coding syntax but rather digital architecture basics and a general understanding of how digital technologies work (including cybersecurity, IoT, algorithmic recommender systems, code architectures, and many others). Those are the building blocks of our future world; they allow us to interoperate with developers and help us understand the logic of machines' cognition. And they change all the time - this is the treadmill we have no choice but to be on
  • Domain expertise (including AI). This means going deep in a few specific spaces and wide in others to develop broad-based generalism and an ability to see patterns across very different environments. This helps us critique AI suggestions and point machines at the right dots to connect. Deep digital and other IT expertise is one of the fastest-growing segments of domain-expertise skills. Additionally, irrespective of what deep expertise we leverage for a living, we must stay current on what AI can do for specific tasks in our domains.

These are currently somewhat disjointed, if interrelated, disciplines. Perhaps we need a new federative one called "augmented thinking." The new curriculum should be designed to equip individuals and teams with the ability to sense, remember, create, decide, act, and learn effectively in complex environments. It should integrate foundational thinking skills, including logical reasoning, analytical skills, and reflective thinking, with cognitive flexibility, problem identification, solution generation, critical thinking, decision-making, implementation, and continuous improvement, fostering both individual and collective intelligence. The curriculum would emphasize creative thinking, ethical reasoning, information literacy, communication, collaboration, emotional intelligence, and adaptive learning.

Source: Supermind.Design

This combination would natively address the opportunities and challenges of a tight synergy between humans and machines in the AI-enabled, individual and collective cognitive process.

Yet, we must recognize that very few things are certain, and it is important to prepare for uncertainty.

Two certainties

First, things will likely change radically in the long term, and the list above will evolve. But the required capabilities and skills will likely hinge on:

  • Human intelligence. Individual leadership (of the self, e.g., resilience, initiative, efficiency) and people leadership (including organizations and networks)
  • Machine intelligence. Individual machine and machine-network leadership, i.e., designing automation workflows that use AI extensively and influencing the behavior of groups, or even swarms, of AI agents
  • Collective intelligence. A combination of all of the above, i.e., influencing the behavior of large groups of machines and people - aka superminds. For example, ACI's four pillars: (1) find, or help find, network nodes (human or machine), that is, entities that participate in the collective cognition; (2) give the nodes the right incentives to collaborate, e.g., through culture shaping, sophisticated business cases, or change management; (3) harvest the right information into the supermind to help it overcome its possible insularity and trigger the association of relevant new ideas; (4) help the supermind collaborate, e.g., through the right frameworks for solving specific problems such as strategy or debate, or by using the right technology tools and methods.

The following chart is one possible representation of the future canvas of management and leadership - in other words, of what is left for us to work on. Each box contains specific competencies for humans at the (a) design, (b) build, and (c) run stages - future jobs will fit into one (or several) of them.

Source: Supermind.design

So, for instance, the skills required to design or build a workflow where multiple humans in the loop manage an agent at different process steps will differ from those required to be an effective human in the loop at run (or inference) time.
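To make the run-stage pattern concrete, here is a minimal, hypothetical sketch of a human-in-the-loop gate: an agent proposes, a human approves or sends the work back for revision. All names (`agent_propose`, `human_in_the_loop`) are illustrative assumptions, not from any real framework, and the agent is a stub standing in for a model call.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Proposal:
    task: str
    draft: str

def agent_propose(task: str) -> Proposal:
    # Stub standing in for an agent/model call; a real system would
    # query an LLM or other AI service here.
    return Proposal(task=task, draft=f"Draft solution for: {task}")

def human_in_the_loop(task: str, approve: Callable[[Proposal], bool]) -> str:
    # The human gate at run (inference) time: nothing ships without
    # explicit sign-off, and a rejection triggers one revision pass.
    proposal = agent_propose(task)
    if not approve(proposal):
        proposal = agent_propose(task + " (revised per human feedback)")
    return proposal.draft

# The reviewer here is a lambda for illustration; in practice it
# would be a person (or a panel of them) at a review step.
result = human_in_the_loop("summarize Q3 risks",
                           approve=lambda p: "risks" in p.draft)
```

Designing such a workflow (where the gates go, what the reviewer sees, when revision loops stop) and performing well inside it at run time are, as argued above, distinct skill sets.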

There is a second certainty. If this seems like a lot, it is because we need to learn from partially different disciplines, and we need a different, more agile, faster approach to skill formation - one that many companies today cannot provide. Given the pace of skill obsolescence (see chart below), this is not the time to slow down innovation in skill infrastructure.

Let's conclude with an analogy. Builders use bulldozers - they don't try to outpower them. They orchestrate the interaction between the physical space and the bulldozer so it is tractable for the machine, including following specific design instructions and cooperating with the tradespeople who work alongside it. For a while longer, a chunk of real-world tasks and quite a few conceptual tasks will still need human support to make them tractable by machines. Humans in the loop will continue to play an important role if we embrace that role for what it is. Whether we like it or not, we humans are in the process of becoming mostly managers, and even leaders, with many assistants. Humans must direct the collective cognitive attention to the right things - the right "whys." We must ensure the approach is right - the right "what." We must critique the "how" that machines will increasingly suggest. These three steps require human-machine synergy, individually and in groups.

We don't yet know how to do that well. That's the learning charter for the foreseeable future - for individuals and organizations.

But let's be clear about one thing: we will spend a lot more time learning than we ever did before. Better get good at it—and get going now.


This essay is part of a series on AI-augmented Collective Intelligence and the organizational, process, and skill infrastructure design that delivers the best performance for today's organizations. More here.


Thanks for this great essay Gianni Giacomelli and for pushing the boundaries of our thinking. I like the framework of why/what/how to categorize the types of skills that we should be developing in our organizations. Key takeaways: 1) double down on the why and what; 2) become orchestrators and strategists; 3) accelerate learning in areas of augmented thinking.

Piyush Mehta

Chief Human Resources Officer and Country Manager, India, Genpact

4 months ago

Absolutely Gianni Giacomelli, the rapid evolution of AI demands a proactive approach to skill development. The work you initiated 4 years ago on Genome at Genpact is focused on driving continuous skilling to build our skills stock at scale. This has been significant in building our data-tech-AI capability. In such transformative times, we must embrace augmented thinking and position ourselves as leaders in human-machine collaboration. Excited to shape the future with resilience and innovation! #AI #FutureofWork #HumanSkills

Marshall Kirkpatrick

Green tech & sustainability consulting

4 months ago

Good contribution to this important discussion. Brings to mind something James L. McQuivey of Forrester wrote several years ago: "To get the best relationship between machines and humans, we don't start with the machines. If we want smarter technology outcomes, we have to be smarter people." And: "We look at fitness for the future through the metaphor of the plant in soil, especially the idea of the limiting factor. The plant is limited by the least present of the nutrients it needs to survive. We see that similarly being the case with how fit you are for the future. You might be really good at some things, you might be really curious, you might be really effective at trying new technologies, but if you're not collaborative, if you're not someone who has the emotional health to handle the ups and downs of the uncertainty of the future, that will be your limiting factor."

Simon Buckingham Shum

Professor of Learning Informatics / Director, Connected Intelligence Centre, UTS

4 months ago

Hi Gianni: Re: The New Curriculum: We need "augmented thinking" ... you may be interested to join us in Dec at CI edu'24 — 1st International Symposium on Educating for Collective Intelligence https://cic.uts.edu.au/events/collective-intelligence-edu-2024/
