What to learn in the age of AI
Gianni Giacomelli
Researcher | Consulting Advisor | Keynote | Chief Innovation / Learning Officer. AI to Transform People's Work and Products/Services through Skills, Knowledge, Collaboration Systems. AI Augmented Collective Intelligence.
The capabilities of new technologies improve daily. Our brains and our skills largely don't - or at least not yet. Neither do our job designs, leaving many people feeling exposed. Charts like the one below, showing Generative AI's projected impact (and repeated announcements from companies like Klarna), set off everyone's alarm bells.
In the "tech + process + people" equation, the people side is possibly the weakest right now. That gap is causing anxiety, and clear answers aren't always forthcoming.
It is no surprise, then, that most of us today feel some urge to make ourselves (and our kids) resilient to the change caused by AI, particularly Generative AI's sudden surge. What is particularly problematic is that alongside the plethora of claims hinting at doomsday scenarios (and fueling some degree of despondency), we also hear too much wishful thinking and appeasement: "Stay close to what makes us human, things like empathy and creativity." The issue is that the average person's empathy, creativity, or the perception and manifestation of many other cognitive or emotional traits are, on an average day, often no better than today's machines, let alone tomorrow's. That's particularly true when AI is embedded into well-designed processes that automate or augment specific tasks alongside humans. That doesn't mean that all empathy and creativity work can technically and economically be automated. But it does mean this is no longer safe, human-only ground.
Two datasets from recent McKinsey research (for Europe, but much of it translates elsewhere) are worth looking at. If you believe that you still have many possibly significant career changes in front of you, the first chart is interesting. It shows what type of skills the economy will continue to absorb, given the change in jobs available to people. For instance, data analysis will be easier for machines to do, as will literacy and communications - and so will require fewer human hours.
The following analysis is based on a survey. The downside of such an analysis is that today's executives have not fully developed a strong understanding of what AI can do and certainly not what it will be able to do. This said, it is useful because leaders and managers focus on existing jobs and may collectively highlight, at least directionally, the incremental steps that workers can take. The skills in the top right box are expected to be in most demand both now and in the next 5 years. The ones below the red diagonal line (which I added) are those that could become more important in the future compared to today.
While there are some differences compared to the previous chart (for instance, data analytics and entrepreneurship), the expected reduction of importance for skills like basic IT, basic data input, equipment operations, basic literacy, and gross motor capabilities is interesting. Similar, though slightly less pronounced, is the dimming of prospects for technician skills, equipment repair, and even general leadership.
Despite their merits, neither of these analyses gives a full answer, showing the gap in understanding that still exists - a gap that we need to close to provide proper guidance to workers. While a proper answer is being developed and will likely continue evolving, the following principles can guide our individual and collective actions.
Don't try to outrun machines
For historical reasons, much of our training (and, unfortunately, job design) treats us as biological machines. Rote and repetitive learning focused on accumulating notions predates the era when knowledge became easily accessible to those who knew where and how to look. That type of learning amounts to little more than mental gymnastics and often doesn't make us smarter than the machines.
Generally, we shouldn't try to outrun the machines at the things they do, or will do, well - for instance, remembering specific notions in isolation. That race can't be won. This extends to many surprising areas, including some types of creativity and empathy. For instance, we already know that AI machines are better divergent thinkers than humans in environments where the context is widely available in the training sets of the AI models - for example, coming up with basic consumer product or service ideas. For topics where context is extremely important, such as complicated business strategies or more niche applications, machines will increasingly be fed the right content as part of creativity workflows, improving their performance.
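Feeding machines "the right content" in a creativity workflow can be sketched very simply: retrieve the most relevant notes for a brief and prepend them to the prompt. The sketch below is a hypothetical illustration - the relevance scoring is naive keyword overlap, and the model call itself is omitted; a real workflow would use embeddings and an actual generative model.

```python
# Minimal sketch of a context-feeding (retrieval-augmented) creativity
# workflow. All function names and data here are illustrative assumptions.

def score(query: str, doc: str) -> int:
    """Count query words that appear in the document (naive relevance)."""
    q = set(query.lower().split())
    return sum(1 for w in doc.lower().split() if w in q)

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents with the highest keyword overlap."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(brief: str, docs: list[str]) -> str:
    """Prepend the retrieved context to the creative brief."""
    context = "\n".join(f"- {d}" for d in retrieve(brief, docs))
    return f"Context:\n{context}\n\nTask: propose ideas for: {brief}"

notes = [
    "niche strategy for enterprise learning platforms",
    "weather report for Tuesday",
    "customer interviews on learning habits and strategy",
]
prompt = build_prompt("learning strategy ideas", notes)
print(prompt)
```

The point of the design is the division of labor the essay describes: the human curates which notes exist and frames the brief; the machine does the brute-force recombination once the right context is in front of it.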
In numerous areas, AI might become better than most people and, in some areas, all people. What machines lack in terms of representation and understanding of how the world really works, they often compensate through other means, including brute force, especially when paired up with humans who can point them at the right things and help them filter and recombine their own output.
We humans need to get going now.
Today, tomorrow, and after tomorrow
There will be some stability in the short term (less than 3 years), and even longer-term in sectors that continue using legacy processes and systems because they can't change or want to protect workers. That may be unsustainable from an economic standpoint, and it will also mask the underlying shift, which might prevent those organizations from helping their workers learn. That strategy might lead to a "termination shock" when those organizations can no longer buffer workers from the new reality, and the time to adapt will then be too short. In the longer term, say 5-20 years into the future (well before most of us will retire), AI augmentation will be the default choice, and the impact of automation will go deeper wherever it falls within the frontier of what is operationally (technologically and process-wise) possible.
I make no predictions on what happens beyond that horizon. A mere extrapolation of the current trends already takes us into very different territories and misses the likely, sudden impact of breakthroughs—say, the ability to run orders of magnitude more computation because of energy-production discontinuities or computational efficiency.
Understanding what we do before understanding what we need to learn
Let's first try to understand what our work really is. A lot of what we do as human professionals can be broken down into three buckets:
Understanding and shaping the "why" of the work. That means forming a clear and actionable view of the reasons why we need to summon organizational resources to do something. That is arguably the job of most senior executives, but it also applies to frontline managers and, increasingly, to decentralized and non-hierarchical environments. Doing a good job at that requires pattern recognition and a continuous sensing of the environment. Formalizing these processes isn't trivial. It requires understanding the interrelations between things in the world and, therefore, a representation of reality that transcends the semantic reasoning that AI models use. It also requires continuous filtering of irrelevant information. Machines can increasingly complement humans in this process (see the BCG/HBS research). For example, they can help us evaluate priorities and scenarios, and they might be able to do some of that autonomously, but it isn't clear how quickly they will become reliable at it, or at what cost.
Identifying, shaping, and syndicating the "what." This is about matching problems to be solved with the categories of solutions available in the case of known-knowns (defined problems with defined solutions—e.g., how to run a product innovation workshop) or deciding that the problem belongs to the unknown-unknowns category (poorly defined problems for which solutions are not evident or may not even be easy to classify—for instance, how to make people learn for AI's age!). These processes require pattern recognition, including an intuitive understanding of what an organization can tolerate in its change management. Machines can already complement humans here, but humans' ability to think symbolically, with principled representations of the world (e.g., through theories and frameworks), is advantageous.
Identifying, shaping, syndicating, and implementing the "how." Machines are becoming increasingly good and fast at finding solutions for well-defined problems. Here, once more, what they lack in abstracting and using a symbolic representation of reality is compensated by their brute-force ability to connect dots in the semantic space, finding correlations between knowledge that humans have structured in their language. When productively paired with humans, they can help scan a broader horizon of possibilities. This might also mean creating change management plans, where machines can simulate various stakeholders' reactions and help devise personalized plans. Or it might mean helping humans keep tabs on the change process through more rigorous project management, or by detecting signals across enterprise communication channels.
So, what should I learn? The rise of "augmented thinking"
In this new world, our role becomes more of an orchestrator, a manager, and a strategist. Much of our work will be on the why and the what, and much of our "how" work will be human-in-the-loop quality control.
The tools will do a lot of the heavy lifting, acting as an army of indefatigable interns (increasingly able to approximate experts' capability levels). That means asking the right questions, including those that lead machines and others to ask you and your networks questions; critiquing questions and answers, individually and collectively; getting to the right decisions, especially for complex (not necessarily complicated, which machines can tackle more easily) things; and seeing patterns and behaviors of systems and using them to guide your and your organization's efforts.
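The "human-in-the-loop quality control" pattern described above can be sketched as a simple loop: the machine drafts, the human critiques, and the draft goes back for revision until it is approved. In this hypothetical sketch, `draft_with_ai` and `human_review` are deterministic stand-ins for a real model call and a real human review step.

```python
# Minimal human-in-the-loop sketch: machine drafts, human approves or
# sends feedback. Both "sides" are stubbed for illustration.

def draft_with_ai(task: str, feedback: str = "") -> str:
    """Stand-in for a generative model producing (or revising) a draft."""
    suffix = f" (revised per: {feedback})" if feedback else ""
    return f"Draft answer for '{task}'{suffix}"

def human_review(draft: str) -> tuple[bool, str]:
    """Stand-in for human quality control: approve, or return a critique."""
    if "revised" in draft:
        return True, ""
    return False, "sharpen the why before the how"

def human_in_the_loop(task: str, max_rounds: int = 3) -> str:
    """Iterate drafting and review until the human approves."""
    feedback = ""
    for _ in range(max_rounds):
        draft = draft_with_ai(task, feedback)
        approved, feedback = human_review(draft)
        if approved:
            return draft
    raise RuntimeError("No approved draft within the round budget")

result = human_in_the_loop("summarize Q3 risks")
print(result)
```

The structure mirrors the orchestrator role: the human never writes the draft, but the human's critique is what directs each iteration - the "why" and "what" stay with the person, the "how" with the machine.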
What skills are needed for that? The following is not mutually exclusive or collectively exhaustive, and it is likely no more than directionally correct, but hopefully, it goes well beyond wishful thinking. In the short term:
These are currently somewhat disjointed, if interrelated, disciplines. Perhaps we need a new federative one called "augmented thinking." The new curriculum should be designed to equip individuals and teams with the ability to sense, remember, create, decide, act, and learn effectively in complex environments. It should integrate foundational thinking skills, including logical reasoning, analytical skills, and reflective thinking, with cognitive flexibility, problem identification, solution generation, critical thinking, decision-making, implementation, and continuous improvement, fostering both individual and collective intelligence. The curriculum would emphasize creative thinking, ethical reasoning, information literacy, communication, collaboration, emotional intelligence, and adaptive learning.
This combination would natively address the opportunities and challenges of a tight synergy between humans and machines in the AI-enabled, individual and collective cognitive process.
Yet, we must recognize that very few things are certain, and it is important to prepare for uncertainty.
Two certainties
First, things will likely change radically in the long term, and the list above will evolve. But the required capabilities and skills will likely hinge on the following.
The following chart is one possible representation of the future canvas of management and leadership - in other words, of what is left for us to work on. Each box contains specific competencies for humans at the (a) design, (b) build, and (c) run stages - future jobs will fit into one (or several) of those.
So, for instance, the skills required to design or build a workflow where multiple humans in the loop manage an agent across different process steps will differ from those required to be an effective human in the loop at run (or inference) time.
There is a second certainty. If this seems like a lot, it is because we need to learn from partially different disciplines, and that requires a different, more agile, and faster approach to skill formation - one that many companies today cannot provide. Given the pace of skill obsolescence (see chart below), this is not the right time to slow down innovation in skill infrastructure.
Let's conclude with an analogy. Builders use bulldozers - they don't outpower them. They orchestrate the interaction between the physical space and the bulldozer so that it is tractable to the machine, including following specific design instructions and cooperating with tradespeople who work alongside them. For a while longer, a chunk of real-world tasks, and quite a few conceptual tasks, will still need human support to make them tractable by machines. Humans in the loop will continue to play an important role if we embrace that role for what it is. Whether we want it or not, we humans are in the process of becoming mostly managers, and even leaders, with many assistants. Humans must direct collective cognitive attention to the right things - the right "whys." We must ensure the approach is right - the right "what." And we must critique the "how" that machines will increasingly suggest. These three steps require human-machine synergy, individually and in groups.
We don’t yet know how to do that well. That’s the learning charter for the foreseeable future - for individuals and organizations.
But let's be clear about one thing: we will spend a lot more time learning than we ever did before. Better get good at it—and get going now.
This essay is part of a series on AI-augmented Collective Intelligence and the organizational, process, and skill infrastructure design that delivers the best performance for today's organizations. More here.