Are Organizations Ready for the Transparency AI Will Create?
Lean Change
We push the boundaries of change management by experimenting with exceptionally modern ideas
The Future of Change is Here, and I wrote this, not AI.
Imagine you've created an AI bot for your transformation program. That bot feeds real-time, anonymous feedback about the program back to the collective for sense-making.
Imagine what'll happen when the assumptions of the leader on the hook for the transformation are found to be incorrect.
Imagine the insights tell them that people think there's a better approach or even a better change to make.
Imagine how that leader's boss will react, wondering why they missed the boat on what's truly needed.
Today, we can hide that. It's called the watermelon report, or change theatre, or town halls with carefully crafted questions designed not to rock the boat. Change managers or project managers keep leaders in the dark, sometimes purposely, out of fear of repercussions.
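To make the scenario concrete, here is a rough sketch of what such an anonymous sense-making bot might do. Everything in it is illustrative: the function names, the salt, the sample comments, and the crude keyword-based "sentiment" check are all assumptions for demonstration, and real de-identification and analysis would need far more rigour.

```python
import hashlib
import re
from collections import Counter

def anonymize(comment: str, author_id: str, salt: str = "program-salt") -> dict:
    """Strip obvious identifiers and replace the author with a one-way hash.
    (Illustrative only: real de-identification is much harder than this.)"""
    # Crude heuristic: replace capitalized first-name last-name pairs.
    scrubbed = re.sub(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b", "[name]", comment)
    token = hashlib.sha256((salt + author_id).encode()).hexdigest()[:8]
    return {"author": token, "text": scrubbed}

def sense_making(feedback: list) -> Counter:
    """Tally crude dissent signals across anonymized comments."""
    flags = {"concern", "worried", "wrong", "better approach"}
    tally = Counter()
    for item in feedback:
        text = item["text"].lower()
        tally["negative" if any(w in text for w in flags) else "other"] += 1
    return tally

raw = [
    ("emp-17", "I'm worried Jane Doe's plan misses what teams need."),
    ("emp-42", "There's a better approach than the current rollout."),
    ("emp-08", "Training sessions have been useful so far."),
]
anon = [anonymize(comment, author) for author, comment in raw]
print(sense_making(anon))  # Counter({'negative': 2, 'other': 1})
```

The point of the sketch is the uncomfortable part: once names are stripped and the signals are tallied, the leader sees the pattern whether or not anyone wanted them to.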
Controlling the Chaos
The skepticism surrounding AI is warranted and needed, but I think the majority of today's organizations are not ready for the radical transparency AI can create around transformational change initiatives.
Today, change and project managers control the narrative with status reports and update meetings, and they typically have the ear of the sponsors and leaders. They can dismiss disgruntled employees as people who just like to complain, and spin the narrative however they see fit.
Air Canada was ordered to honour a refund its AI chatbot mistakenly promised a customer: https://jalopnik.com/air-canada-ordered-to-pay-refunds-its-ai-chatbot-mistak-1851269976
Despite PMI's free AI course suggesting that project managers will be the gatekeepers of AI implementations, and thereby control it, no one can completely control it.
Now we're in chaotic waters. While you can't impose your will on other humans, you can impose it on AI, depending on how you train and instruct it. Is it ethical to give your AI those kinds of instructions?
Radical Transparency
It shouldn't surprise anyone that the undercurrents and water-cooler conversations carry the real story about what people think about the change.
That stays hidden in hallways, text messages, Friday night drinks, and other informal meetings of the minds.
That can't stay hidden with AI as your co-pilot.
Transformation is the hardest thing any organization will do. If it's absolutely necessary, leaders will embrace radical transparency.
If it's not, and most aren't, we'll end up running in circles, much as most organizations have for the last decade while hoping Agile would be their next saviour.
The Ethical Dilemma
The only evidence I need to make this claim is the fact that every one of my feeds on every social network is littered with how to write a book in 10 minutes and make $65,000 in a month or how to make a viral video to get your 15 minutes of fame.
No one cares about the ethics of AI...yet. The reason is that we haven't seen the problems I've described here, but we will.
Thankfully, UNESCO, among others, has been exploring the ethical use of AI for years, and a few tech giants are on board. Spoiler alert: the ones who make the most popular LLMs are not on the list.
What Can You Do?
Many moons ago I helped facilitate a day-and-a-half workshop with 200 people. We had 24 facilitators and the CEO and CTO were there for the whole thing.
While planning the session, we didn't coach them on how to act. We told them that an anonymous, live question tool like Sli.do was the best way to get honest questions from people. We talked through the pros and cons, and scared the hell out of the comms person, but they decided they wanted no filtering. So we projected the live feed of questions and votes on the three giant screens behind them in the conference centre.
That's leadership. That's radical transparency. And that's what meaningful transformation needs.
Business Improvement, Board Advisor, Mentor, Investor
9 months ago: IMHO, AI today does not yet come close to creating transparency; if anything, the exact opposite. With a generative platform, both the data fed into it and the code that processes responses would need to be transparent to create "true" transparency. I doubt that will ever happen, since not even those who feed these models data know exactly what went in. The closer these generative systems come to "feeling" authentic, the more doubt there will be in their authenticity. Much like "robots" that come too close to looking like real humans trigger a naturally negative response in us, so will a piece of technology that becomes "too" human. Our brains know the difference. Either that, or we'll start watering our crops with Kool-Aid and wonder why nothing grows.
homo architecticus
9 months ago: The key question with AI is "who makes the decisions". The ones who can leverage AI successfully for decision-making can gain an advantage over the others. This applies to business decisions as well as (gulp) governance decisions. With the latter, the world will move to [viral] goal setting by popular daily voting (not a representative voting once every 4 years; that was for times when horses limited the reach of information), and AI will show how those goals can be achieved. With DigitalID, we're almost there; what's missing is an anonymity layer that doesn't lose the provenance/validity of a vote.