Impacts of GenerativeAI - And Our Certainty

When I tell people I’m writing a book about Generative AI, the first thing they often ask is ‘oh, are you for or against it?’, which feels rather like being asked whether I am in favour of gravity.

Amidst gaslighting by politicians, the hyperbole of the tech bros, and the speed with which people tend to find certainty whilst remaining anchored within legacy paradigms, it’s easy to see why.

There is something almost hysterical about the stance, feel and fragmentation of the dialogue – or, as it may often feel – the disconnected monologues.

I thought I would share a few of the lenses through which I am seeing these new technologies, and the ways in which I seek to understand their impact, both short and longer term.

First, I find it useful to distinguish between ‘change within a system’ and change to the system itself. The first relates to the ways we adapt, but within a scaffold of certainty and existing structure. The second relates to how change may fracture that very structure, and the emergent competitors may come from a different perspective altogether. Another way to consider this is as optimisation vs disruption – whilst this language is not perfect, I hope it will illustrate the principle.

Already we are seeing the widespread deployment of optimising applications: essentially those that make things quicker, cheaper, more accurate, or more effective, but substantially within an existing structure. This includes inline ‘copilot’ type applications (helping us to write better – deployed into tools like Word and various email and blog composers), narrative engines (helping us summarise, spot patterns, define actions, and even support accountability – deployed into tools like Teams and Zoom), as well as more creative engines (like MidJourney and Firefly, enabling creativity to become democratised, creating assets at speed and scale). There are also emergent ‘partners’, like Scribe, supposedly acting as inline critical friends. All of these are either piggybacked into existing software (and hence existing paradigms of operation, procurement, risk and control) or sit within existing and known ways of working (jobs, companies, legislative structures of copyright, etc.).

Alongside these, and perhaps most visibly, sit the direct dialogue engines: ChatGPT, Bard, etc., which offer a combination of power plus accessibility: pretty much anyone can find value in these within ten minutes of starting to play. They sit in the flow of our existing dialogue expectations and mechanisms, so we do not have to ‘learn a language’, or even really any very specific vocabulary.

These are the instances that may help us create, or cheat, better (whatever the distinction ends up being).

All of this is still within the familiar.

But beyond optimisation, and probably moving more slowly, although irrevocably, is the disruption. This is distinguished from optimisation in that it may not fit within existing societal expectations and structure. So AI may break ideas around creativity, productivity, profit, and class. It may fracture systems of education and law. Or of conflict and power.

I’m not saying that it will (it will), but it might (it will).

Indeed, it already is. AI written books, AI composed songs, AI generated art and essays, these are all disrupting markets, systems of perception, and systems of control.

Organisations, no matter how good they are at change (and generally they are quite poor at significant change), can only flex and bend so far. In my more speculative work I would argue that our future Organisations will be lighter weight, more reconfigurable (less bound into codified structure), will disaggregate aspects of ‘task’ and ‘role’, will be permeable to expertise, probably held within diverse ecosystems of capability-holding bodies (new Guilds), and led socially, at least partially.

It’s unlikely that all of our Organisations will survive: emergent structures and underlying models will be radically empowered, not to optimise, but to subsume, subvert and re-author markets and services. Things we never knew we needed.

But today, we stand on our certainty: just this week I’ve heard people talk about ‘ethics’ and the ways they are certain that they operate (I am far from certain that they are even real), about ‘capability’, and how AIs will never be able to do certain things (I am uncertain I could identify anything that they won’t be able to do, in time), and a widespread conflation of hope, fear, or desire, with fact.

Someone told me we will not have General AI within a thousand years: I am unsure it’s wise to hold such a long-term view with such certainty. One of the definitions of ambiguity is a breakdown of precedent and prediction.

There’s also widespread confusion about semantics and taxonomies, and what, exactly, counts as real. I’m a pragmatist: as there is no universal definition of intelligence, then conceivably when a system does something that looks, sounds, and smells intelligent, it probably is – at least in the pragmatic view of the real world.

Everyone with an opinion can be a hero, but we tend not to reward the explorers: the people who are willing to be unsure and, specifically, to work very hard to remain uncertain.

Now is a great time to be uncertain: not to fall into a consensus view, but rather to learn, and be willing to build upon that learning. The comfort we may find in certainty may be a cold one.

For all of the polarisation – that AI will escape our control, doom us, is too biased to be trusted, or too basic to be creative – we may fail to spot the most valuable truths of all.

GenerativeAI is here, right now. Millions of people are getting the hang of what it can do. And many of them are imagining new things that it may be able to do. There may be no such thing as a clear future ‘answer’, but rather a protracted period of disruption that will load progressive layers of pressure onto our legacy organisations, especially those who fail to create spaces to experiment, explore, watch and listen.

For all the conversations about the dramatic and often parroted outcomes, we may miss the small but important ones.

The everyday changes, the incremental waves of capability, the erosions of certain legacy structures of power and control, the blurring of certain boundaries, the gradual empowerment or disenfranchisement of whole segments of the population, and potentially both great productivity gains and the loss of certain valuable aspects of what makes us human.

Jim Goodell

Learning Engineering Toolkit Editor/Co-Author | INFERable Founder | XPRIZE Digital Learning Challenge Judge | IEEE Learning Technology Standards Committee Chair

Great insights here about AI’s potential as a disruptive force, that orgs and institutions tend to be inflexible and resistant to change. We have many examples from recent history of those who couldn’t survive major periods of disruptive innovation. “Now is a great time to be uncertain: not to fall into a consensus view, but rather to learn, and be willing to build upon that learning.“

Kerri O'Neill

Chief People Officer London || Non Executive Director || Trustee || Business and ESG Transformation || Qualified Executive Coach and Mentor || Board People Committee Chair || Charity Ambassador || Chartered FCIPD

Thank you for your reflections Julian Stodd - found your optimisation versus disruption distinction very helpful.
