Responsible AI: five steps businesses should take now
Cognizant was proud to participate recently in the World Economic Forum’s annual meeting in Davos, where the topic of artificial intelligence was high on the agenda—and that’s an understatement. We were also happy to contribute to the newly published AI Governance Alliance briefing papers on safe systems, responsible applications and resilient governance.
In Davos, we held in-depth conversations with hundreds of global leaders on the topic of responsible AI. We heard a broad range of perspectives on what focus and action are needed, but there was unanimous agreement that AI risks must be better managed as an urgent priority.
It’s clear that trust will be at the core of successful AI adoption. Trust will enable us to scale and realize the potential of generative AI, the most revolutionary new technology in a generation. Consumers will naturally be skeptical of disruptive new solutions that feel like magic; trust will need to be earned from the start. And once that trust is lost, it will be difficult to regain.
Creating trusted AI
With trust so critical, we should start by understanding what it is and how it’s obtained. In 1995, professors Roger Mayer, James Davis, and David Schoorman, of Notre Dame and Purdue, published a model for trust that has become widely adopted. Highly applicable to AI-powered services, it proposes that trust derives from the perception of ability, benevolence, and integrity. What we heard at Davos aligns with this model and helps make sense of the challenges in front of us.
First, trust in AI systems rests on their ability to solve real-world problems and be useful. Ability isn’t something we can take for granted—I’ve seen amazing demonstrations of generative AI only to be slightly underwhelmed when trying out the tools in the real world.
AI solutions that over-promise and under-deliver will cause major trust issues in the long run. We’ve seen this problem before in the form of chatbots and voice assistants that promised conversational convenience—but delivered limited understanding and static decision trees. Users were underwhelmed, and these technologies’ promise went unfulfilled.
To make AI systems useful, we must focus them on the right problems, support them with relevant, high-quality data, and integrate them seamlessly into user experiences and workflows. Most important of all, continuous monitoring and testing are needed to ensure that AI systems deliver relevant, high-quality results.
The second area that drives trust is the idea of benevolence. AI models need to positively impact society, businesses and individuals, or they will be rejected. Here we face two core challenges:
Finally, integrity creates trust when users see that the services they consume are secure, private, resilient, and well governed.
Technologists and enterprises have spent decades building the web-scale infrastructures and cloud-native architectures that power mission-critical digital services. The practices that allow the world to rely on these services need to be extended and adapted to AI capabilities in a way that is transparent and convincing to user communities.
The only way to bring this requisite integrity is to adopt platforms that build in transparency, performance, security, privacy, and quality. Building point use cases in parallel, based on localized objectives and siloed data, is a dangerous path that will lead to increased cost and risk, worse outcomes, and ultimately a collapse of system integrity.
The challenge to implement responsible AI
While it’s all well and good to have clarity about objectives, it’s also undeniable that we face a daunting challenge. Addressing responsible AI will require collaboration between the public and private sectors across a range of issues. It will also require the adoption of new practices within the enterprise to design, engineer, assure, and operate AI-powered systems in a responsible manner.
We don’t have the luxury of waiting for someone else to solve these challenges. Whatever your role and industry, you can be sure that competitors are pushing ahead with AI implementations, employees are covertly using untrusted solutions, and bad actors are devising new ways to attack and exploit weaknesses.
At Cognizant, we are helping to build responsible, enterprise-scale AI in hundreds of organizations, as well as within the core of our own business. Based on this experience, we believe enterprises need to act now in five areas:
With these five elements in place, organizations are set up to operationalize their position on responsible AI, enabling the enterprise to execute and govern activities effectively. We view this as an urgent priority for every organization that is adopting AI or is exposed to AI-powered threats.
To learn more, visit the Generative AI section of Cognizant's website.