AI vs human aspiration. Space.
The LPI (Learning and Performance Institute)
The global body for workplace learning professionals and organisations. We are making learning work.
Welcome to the third edition of 'Involve. Inform. Inspire.'
If you're not already subscribed, click “Subscribe” above to get notified when we release future editions. As always, let us know what you think in the comments section!
Could the new age of AI herald a future devoid of human aspiration?
Giles Hearn FLPI, Chief Marketing Officer, LPI
Unless you’ve been living under a rock lately, you’ll have heard about ChatGPT – the latest conversational language model developed by OpenAI. Able to generate freakily human-like responses to questions or commands, it’s received over a million user sign-ups since its launch in November 2022, and prompted the publication of thousands of feverish articles, blogs and videos, all raving about its capabilities. It can give you cookery recipes, write a speech or a poem, or even generate software code.
The reason ChatGPT is garnering so much attention is that it appears to herald the age of true artificial intelligence – the particular kind that Alan Turing had dreamed about in 1950. Turing’s test proposed that a system could be considered intelligent if it gave responses that could not reliably be differentiated from those of a human. ChatGPT, at least on the surface, appears to pass this test.
However, dig a little deeper, and it becomes clear that ChatGPT is not yet capable of truly understanding the complexities and nuances of human conversation. It's merely a large language model that predicts the most probable next word based on its training data; it has no concept of meaning, humour, irony, or many of the other cultural subtleties that make up the corpus of natural human interaction. And, although its responses seem confidently plausible, they can be riddled with errors. In short, it’s very much artificial.
Still, this hasn't prevented the hype from reaching fever pitch and, with new language models such as GPT-4 on the horizon, it is only a matter of time before the technology edges closer to Turing's benchmark.
Also hogging the blogosphere throughout 2022 have been the increasingly sophisticated prompt-led image generators such as Stable Diffusion, DALL-E 2, and Midjourney. These tools are already capable of creating artistic works that rival those of humans.
Even the software development industry isn’t immune from AI's advances. Automated testing, code generation, and everything in between have been transformed by machine learning and artificial intelligence tools that either complement or replace activities in the software development process. For example, GitHub Copilot sits alongside the human programmer, automating repetitive tasks, fixing errors, and gradually learning the programmer’s style to write and edit code.
So, everything is good, yes?
Well, not quite. People are getting nervous. Whilst not yet at Luddite-level hysteria, the Free Software Foundation has branded GitHub Copilot “unacceptable and unjust” and called for white papers that address the legal and philosophical questions it raises. Stack Overflow has banned ChatGPT from its forums, calling it “substantially harmful”.
The response of the artistic community to Midjourney and Stable Diffusion has ranged from apprehension to outright horror. Graphic artists who have spent decades honing their skills are already seeing commissions evaporating, as clients instead use AI tools to quickly and cheaply create artworks for their websites, magazines, and products. Beer can labels designed by AI? Check.
For many, these AI systems are a step too far; an existential threat to those who have dedicated decades to their craft; an imminent and unstoppable threat to their livelihoods.
However, I believe there is a deeper and more profound issue – the challenge to foundational learning and aspiration.
With a heartfelt sense of irony, I asked ChatGPT to define foundational learning:
“Foundational learning is the process of acquiring the basic skills and knowledge that are necessary for more advanced learning and development. This can include things like learning to read and write, developing basic mathematical and scientific concepts, and acquiring critical thinking and problem-solving skills. Foundational learning is important because it provides the foundation upon which more advanced knowledge and skills can be built. It is a crucial part of education and is typically acquired during the early years of life, although it can continue throughout a person's lifetime.”
A well-written and concise response. But here's the rub:
If people begin to use AI chatbots as a substitute for gaining foundational knowledge; if they start to rely on them to write entire articles, solve complex mathematical problems, design products, create works of art, and write software code, where does that leave them in terms of how they progress intellectually?
It’s an open secret that the younger generations – those who have grown up in the era of the Internet and smartphones – have become accustomed to the instant availability of knowledge. With access to vast databases of information via their mobiles, their desire to learn has become secondary to their ability to search. Or, in the case of ChatGPT, their ability to prompt.
I recently needed a PowerShell script to automate something on my laptop. My usual route is to trawl through Google search results and find an example that someone has already written. But this always takes time, and I never seem to find the exact answer. So it occurred to me to simply ask ChatGPT to write the code – which it did, comments and all. With a single prompt, I had bypassed my entire need to learn, understand or even respect the PowerShell language. This short-cutting is exhilarating at first, until you step back and think about where this could lead.
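To make the anecdote concrete: the output was the kind of small, disposable automation script that previously forced you to learn at least the basics of a language. The sketch below is a hypothetical equivalent (written in Python rather than PowerShell, and not the script ChatGPT actually produced) – it prefixes every .txt file in a folder with today's date, the sort of chore people now delegate with a single prompt.

```python
# Hypothetical example: the kind of small automation script one might once
# have learned a scripting language to write, and can now simply prompt for.
# It renames every .txt file in a folder to "YYYY-MM-DD_<original name>".
import datetime
import pathlib


def prefix_with_date(folder: str) -> list[str]:
    """Rename each .txt file in `folder` to 'YYYY-MM-DD_<name>' and
    return the new file names, in sorted order of the originals."""
    today = datetime.date.today().isoformat()
    renamed = []
    for path in sorted(pathlib.Path(folder).glob("*.txt")):
        if path.name.startswith(today):
            continue  # already prefixed; skip so the script is safely re-runnable
        target = path.with_name(f"{today}_{path.name}")
        path.rename(target)
        renamed.append(target.name)
    return renamed
```

Trivial, yes – but writing it by hand would have taught you about paths, loops, and string formatting. Prompting for it teaches you nothing.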
We live in an age of entitlement where entire generations expect instant gratification, with many believing they can achieve excellence in a field without undergoing the hard graft and rote learning of previous generations. AI is propagating this mindset at a rapid rate. Right now, ChatGPT is tempting us to short-cut our traditional routes to knowledge and expertise by providing quick and seemingly accurate responses to anything we ask. If ChatGPT knows it, then we don't have to. Might this negate the need to learn basic skills?
“Ah, you’re worrying about nothing”, you may say. “Did the invention of the electronic calculator make mathematicians extinct? Of course not!”
You have a point. But consider this:
Sue wants to play Debussy’s piano études. But she must first learn the notes, build her knowledge of scales, harmony, counterpoint, and tempo, and practice relentlessly. Only then can she hope to do justice to the composer's vision. Bob wants to create immersive video games, but he must first learn the fundamental concepts of logic, variables, expressions, and program flow.
Sue and Bob instinctively know that this is the natural order of learning. They accept it takes time, with much repetition and hard work. They understand that reaching higher modes of aspiration and achievement can only happen by building on what they have learned before. This desire to acquire more knowledge and skills is a genetic trait - a DNA-driven desire to evolve and develop – a hard-coded instruction set, wired deeply into the brain. Higher abstractions of thought and action grow organically from the firm roots of fundamental knowledge.
So, while you may be able to make marks with a paintbrush today, you won’t become a Picasso tomorrow. Yet it may be the very act of learning brush technique that gives rise to dreams of one day having an exhibition at the Tate Modern.
This desire to want to improve – this potential – has been the driving force of human achievement throughout the ages. With Sue and Bob, we see them not just as budding pianists and software developers, but as two people who have the potential to forge new paths for themselves, building on their current capabilities to reach higher and further. Sue could become a film composer or an orchestra conductor, using her knowledge of the piano as a springboard into wider waters of expression. Bob can use his understanding of code and logic to become a systems consultant, analysing complex problems and directing the flow of business processes – never again having to write another line of C++.
Could these potentials be realised without knowing (or needing to know) the basics? I don't believe so. No Booker Prize was ever awarded to someone who couldn’t write a sentence. No astronaut was ever loaded onto the Space Shuttle who didn’t know about orbital mechanics. You don’t get a PhD before you get a Bachelor's degree.
Expertise is earned over time, and is powered by learning and aspiration. It cannot be gained instantly. It is the product of a long journey of hard work, passion, and self-discovery.
What will that journey look like in the future, when AI provides everything for us? What will our aspirations be then?
ChatGPT defines aspiration as:
“…a strong desire or ambition to achieve a particular goal or to become something. It can be a personal ambition or a desire to make a positive impact on society. Aspiration is often motivated by a sense of purpose or a passion for a particular field or cause. It involves setting high standards and working hard to achieve them, often requiring dedication, perseverance, and hard work.”
Dedication, perseverance, and hard work. Hmmm. But I don’t need any of that now, right? I just type in a prompt?
[ChatGPT] > write a complete PhD dissertation on creativity as a psychological concept and its implications for psychological methodology. Include at least 50 references to published works. Print the dissertation to PDF and email to my Head of Faculty.
OK, so I didn't have the nerve to actually type that into ChatGPT, but I bet someone's already tried it. The point here is that we may find ourselves in a future where there is no point. Why would we need to learn basic concepts and build knowledge if it was immediately there for us? Where is our destination if there is no journey?
In these early days of artificial intelligence, we simply do not know. We cannot predict the effect that AI technology will have on our ability to develop intellectually and our desire to reach aspirational goals. Humans have achieved many seemingly impossible dreams with and without the assistance of technology. But have we reached a tipping point?
Technology constantly challenges what is possible in human endeavour, and advances in AI call on us to think carefully about the consequences of our actions. Although we may have the power to fulfil our own aspirations now, there is a moral question about the impact on the freedom of others to live their aspirations in the future.
As Elon Musk warned at the World Artificial Intelligence Conference in 2019: “Artificial Intelligence will make jobs kind of pointless.”
We must be careful not to create a future in which AI makes learning pointless.
Next month, we'll discuss the role of AI in organisations, and what people are doing with training models to ensure they are contextually aware of existing company cultures, and how AI can change those company cultures.
Stay subscribed for "The Serpent Eats Its Tail and other stories" - next month!
8 CHARACTERISTICS OF EXCEPTIONAL LEARNING ORGANISATIONS
Giles Hearn FLPI, Chief Marketing Officer, LPI
In this third of a series of eight articles, we look at things that exceptional learning organisations do well - and suggest practical approaches to help you improve. Last month, we looked at encouraging leaders to be role models. This month we look at SPACE.
3. FIND AND DESIGNATE SPACE FOR PEOPLE
In today’s “always-on” world, it’s no surprise that mental health and wellbeing suffer. Many of us now experience fewer social and collaborative interactions with remote work colleagues, and a blurring of the separation between domestic life and the workplace. What we need is space.
Permission Space
Psychological safety - where people can freely express their views and make suggestions for improvement without fear of reprisal - is a critical component of the Exceptional Learning Organisation. Psychological safety is a group-level construct and, although it is by no means a new phenomenon, remote working, social media communities, and distributed teams mean it may be harder to achieve than when we are gathered together in the same physical place. For example, employees who thrive in face-to-face meetings may find less permission space in a WhatsApp group, where the true intent of a text comment can be easily misinterpreted. Here are some tips on how to improve psychological safety in the workplace.
Leadership Space
The Exceptional Learning Organisation creates enough space for teams to carry out their tasks unimpeded by technological restrictions, micromanagement, or overly strict processes. Whilst some level of control is necessary, especially if projects are complex, leaders who step back and encourage independent work and decision-making in their teams benefit both themselves and their people.
Exploration Space
Curiosity, exploration, and experimentation are attributes that have been responsible for nearly all discoveries and progression in human history. When employees are free to learn as they do in the outside world, removed from workplace training schedules, mandatory compliance and rigid programmes, they rediscover their curiosity and find things that spark their interest. The Exceptional Learning Organisation knows this, and makes every effort to encourage and leverage these behaviours in corporate learning. Consider changing the way you validate how people are developing by moving them from an instructor/course-led model (how many courses they’ve done) to a self-creative model (how many ideas they’ve contributed).
Physical Space
The hybrid worker, flipping between home-based and office-based work, can lose sight of the mental-health benefits of genuinely being alone with their thoughts. Self-development requires physical 3D space and time to think, reflect, and improve. Simply going for a walk, or adopting asynchronous work routines that allow for physical space, can have dramatic effects on employee performance.
Conclusion
Space is good. Wherever learning happens, space is the medium through which it can percolate. The trick is in the balance - not too much, not too little. Too much permission space and you run the risk of offending someone; too much exploration space and you might not finish important tasks.
As Satnam Sagoo, Associate Chief People Officer at Imperial College Healthcare NHS Trust says:
“An exceptional learning organisation is one that has a nurturing environment and proactively lives its values - not one that takes a recreational approach to learning. As individuals we are constantly learning and yet if we had to pinpoint a good experience it would be when we felt safe, valued, inspired, and supported.”
So - find your space; fight for it and enjoy it!
That's all for now. Have a great festive break and we will see you again in the New Year.
Like this newsletter? Please click “Subscribe” above to receive future editions highlighting practical insights from the world of workplace learning. Comment below to let us know what you think.
Stay involved, keep informed, and get inspired!