Creating a Culture of Responsible AI Use

When ChatGPT launched at the end of 2022, attracting over a million users in just five days, generative AI was thrust into the spotlight. The frenzy of media attention created widespread public awareness of the AI opportunity almost overnight. Armed with some knowledge and easy access, employees began experimenting with the new AI tools to see what they could do. Naturally, it wasn't long before they started using them at work too.

It reminds me of the early 2000s, when social media emerged. The immediate reaction from most companies was to restrict access or ban it completely. Yet they eventually realized that it presented a huge opportunity rather than an existential threat. Today, employees all over the world use social media to connect, communicate, and collaborate with customers, suppliers, and each other.

As I watch the same thing happening with AI, the question shouldn't be how to restrict its use but how to leverage the potential it offers. Of course, it is important to apply appropriate safeguards to protect both employees and the organization. Equally important, though, is the need to engage and involve staff, provide training, and listen to their ideas and feedback, all while putting the right policies in place, including policies on how to use AI responsibly, a topic I've been closely engaged with recently.

The CHRO Principles for Responsible Use of AI in Organizations

At the end of last year, I was thrilled to join forces with a team of experienced CHROs and academics to discuss ways to make AI work responsibly in our organizations. Together, we formulated a set of guiding principles for use in HR: the CHRO Principles for Responsible Use of AI in Organizations. With these principles, CHROs are better equipped to take a holistic approach to AI implementation, ensuring it is aligned with company culture and goals. They are less about how HR teams themselves can use AI and more about how to ensure AI is applied thoughtfully across the wider organization.

We've boiled it down to four key principles and four handy tips that CHROs can adapt and adopt to suit their own missions and ambitions. I particularly like how they focus on using AI to boost efficiency, manage risk, and ensure that the way AI tools are adopted is aligned with the organization's values and aims.

In this article, I want to introduce the four principles and explain why I think they are so important for any organization that wants to deploy AI responsibly.

Principle 1: Intentionality

The first principle calls on CHROs to champion the responsible use of AI, promoting alignment with business goals and organizational purpose. It's all about having a clear plan for AI, making sure it delivers value across the business, and using resources wisely.

Principle 2: Leadership

The second principle is based on a firm belief that human oversight is essential for AI initiatives, which means that people should lead the way. This is by far the best, and perhaps the only, way to promote accountability and organizational learning, and to ensure that we are always in control of where AI is directed and how it gets used.

Principle 3: Balance

There is naturally a lot of experimentation happening inside organizations as teams explore whether and how AI can help them achieve their purpose and goals. CHROs should encourage this innovation but may need to balance experimentation with risk management to improve how work is done. This means ensuring that AI is developed and implemented in ways that are transparent and explainable and that don't create unacceptable risk.

Principle 4: Vigilance

The final principle encourages CHROs to pay close attention to unintended consequences. AI has huge potential to improve how work gets done, but it also has the potential to create unfairness and inequality. Monitoring and reviewing AI developments periodically is therefore a key part of an organization's responsible approach to AI.

How CHROs can apply the principles

One of the key drivers behind these principles is that they are completely open source, so any CHRO can take them and tweak them to fit their organization. Kevin Cox, the recently retired CHRO of GE who co-led this project with me, explains the thinking behind this:

“Freely sharing thoughts and ideas will result in the most rapid progress and adoption of best practices. If CHROs begin to embrace the promise—and the responsibilities—of generative AI, we can help our organizations and our world create the future of work!”

The principles include suggested governance, knowledge sharing, alignment, and change management practices to consider when applying them in your own organization. Notable among these is the need for a clear change management plan to engage and educate all stakeholders about the impact of AI on work. This really resonates with me, and it circles back to my opening comments about engaging and involving staff. I share the view of another of my collaborators, GE's former head of HR strategy, talent, and communications, Laura Cococcia, who says:

“The rapid progression of generative AI is capturing attention across industries and disciplines, reshaping business conversations about the intersection of AI, talent, and the workplace. Leading change thoughtfully is a critical part of how organizations navigate the future of work, especially when it comes to the responsible use of AI.”

Embracing AI responsibly: a path forward for organizations

As with previous innovation revolutions, AI will impact all of us. The sooner we can embrace the best parts of this new technology and thoughtfully build the right constraints to prevent harm, the greater and more positive AI’s impact will be on our world and on the world of work. That begins with empowering a workforce to experiment, collaborate across teams, think critically, and build responsibly. AI can augment our work, but at the end of the day, humans should still be making the decisions. And training is a critical part of enabling that.

The CHRO Principles for Responsible Use of AI in Organizations are a key tool to help CHROs lead this process in their organizations, and I'm looking forward to seeing how you will use them to create a culture of responsible AI use that empowers people and organizations.

I’d love to hear what responsible AI in the workplace means and looks like for you in the comments below.
