How Responsible AI Can Prepare You For Regulations
In our latest episode of the IBM AI Academy, Christina Montgomery explains how a transparent, open approach to AI can prepare businesses for growing regulation.
In May of 2023, just a few months after ChatGPT demonstrated the disruptive potential of generative AI, Christina Montgomery, Chief Privacy and Trust Officer at IBM and co-chair of IBM’s AI Ethics Board, testified before the U.S. Senate Judiciary Committee alongside Sam Altman, CEO of OpenAI, and Gary Marcus, Professor Emeritus at New York University, at a hearing examining the oversight and potential regulation of artificial intelligence.
“That day, I realized lawmakers and regulators were scrambling to understand the implications of AI. This would be a global debate. AI ethics was about to become the most important conversation of our time.”
What is Responsible AI?
Ethics are a set of moral principles that guide decision-making. We all have instincts about what is right and wrong, but a consistent set of principles can help us work through complex decisions or novel scenarios.
Responsible AI is a set of principles that help guide the design, development, deployment and use of AI. These principles consider the broader societal impact of AI systems on stakeholder values, legal standards and ethical principles.
It seems like every day we hear of something new that AI can do. So every day we must revisit the question of what AI should do – and when, where, and how to use it. Like most technology, AI is a lever: it can be a tremendous efficiency boon or a reputational bane, and businesses must weigh both.
As you scale AI in your business for greater reach and impact, think about responsible AI at an institutional level, so that everyone can operate from a shared set of principles with defined guardrails.
Regulate the Use of the Technology, Not the Technology Itself
The U.S. Senate committee testimony last year made two divergent regulatory philosophies apparent: regulate the fundamental technology of AI itself, or regulate the use and application of AI. Some of the loudest and most visible players in the AI space argue for the former, with a licensing regime to control what gets built, how, and by whom – effectively dictating who can participate in the AI marketplace. It’s heavy regulation, the way you might control the creation and licensing of nuclear power.
Heavy regulation and licensing of AI could consolidate the market around a small handful of companies – a winning proposition for the few with the resources to comply, but a losing one for everyone else. An AI licensing regime would be a serious blow to open innovation. From an ethical perspective, you have to ask whether it’s right, just or fair for a few companies to have such an outsized influence on people’s daily lives.
"AI is going to touch every aspect of business and society, so shouldn’t it be built by the many, not the few? And shouldn’t we hear from not just the loudest voices, but from many voices?”
It’s not practical to regulate technology granularly in the face of rapid innovation. Before the ink is dry on a new piece of regulation, technologists will have rolled out many alternative approaches to achieve the same outcome, and it’s the outcomes that really matter.
IBM supports a regulatory approach based not on the restriction of core technology, but on the responsible application of technology.
Not all uses of AI carry the same level of risk, and each AI application is unique. It is critical that regulation account for the context in which AI is deployed: AI that helps determine whether you qualify for a loan, for example, should be regulated more closely than AI that recommends restaurants.
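To make that distinction concrete, here is a minimal sketch of how a business might inventory its own AI use cases by risk tier. The tier names loosely follow the EU AI Act’s categories, but the specific use cases and their mapping are illustrative assumptions, not regulatory text.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative risk tiers, loosely modeled on the EU AI Act's categories."""
    UNACCEPTABLE = "unacceptable"  # banned outright, e.g. social scoring
    HIGH = "high"                  # strict obligations: oversight, documentation
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # largely unregulated

# Hypothetical inventory: the use case, not the underlying model, determines the tier.
USE_CASE_TIERS = {
    "loan_eligibility_scoring": RiskTier.HIGH,
    "resume_screening": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "restaurant_recommendations": RiskTier.MINIMAL,
}

def risk_tier(use_case: str) -> RiskTier:
    """Look up a use case's tier so governance checks can be gated on it."""
    return USE_CASE_TIERS[use_case]

if __name__ == "__main__":
    print(risk_tier("loan_eligibility_scoring").value)    # high
    print(risk_tier("restaurant_recommendations").value)  # minimal
```

The point of the lookup is that the same underlying model can land in different tiers depending on where it is deployed.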
“We also believe that those who create and deploy AI should be accountable, not immune from liability. It is essential to find the right balance between innovation and accountability.”
Support for this regulatory perspective is one of the reasons IBM and Meta co-founded the AI Alliance with a diverse group of corporate partners, startups, and academic and research institutions. It is also why IBM joined the consortium supporting the U.S. AI Safety Institute at NIST.
Whatever comes next for AI, it’s going to be safer if it’s open, transparent and inclusive – an approach exemplified by IBM’s commitment to open-source models.
Practical, Risk-Based AI Regulation
While the debate around these competing regulatory approaches is still very active, IBM believes the European Union AI Act – which is nearing adoption and will roll out in stages over the next few years – is a likely model for the rest of the world.
IBM has supported the EU AI Act for a few reasons:
The law introduces a risk-based approach to regulating AI systems:
Obligations scale with the potential for harm. For example, minimal-risk applications such as spam filters face few requirements; limited-risk systems such as chatbots carry transparency duties; high-risk uses such as credit scoring or hiring face strict obligations; and practices posing an unacceptable risk, such as government social scoring, are banned outright.
Requirements for transparency:
Users must be provided with clear and understandable information about the system’s purpose, functionality, and intended use, including any biases or limitations that may affect its performance.
Requirements for human oversight:
Mechanisms such as human-in-the-loop review are required to ensure that AI systems remain aligned with human values and expectations.
Standards for data quality and fairness:
Data governance and data provenance are critical for responsible AI deployment. Understanding where the data used to train a model came from, ensuring you have the right to use it, and ensuring the data is unbiased and respects copyright law are all issues the act addresses (a record-keeping sketch follows this list). AI is not a shield against liability, and the act makes clear that these systems cannot be used to discriminate against people based on attributes like race, ethnicity, religion, or sexual orientation.
Requirements for safety and security:
You’ll have to be able to demonstrate compliance with these standards or face serious consequences. Fines can be up to 35 million euros or 7% of a company’s annual revenue, whichever is higher.
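To ground the record-keeping and penalty math above, here is a minimal sketch. The DatasetProvenance fields and the max_fine helper are illustrative assumptions, not terms defined in the act; only the 35-million-euro / 7%-of-revenue ceiling comes from the text above.

```python
from dataclasses import dataclass

@dataclass
class DatasetProvenance:
    """Illustrative provenance record for one training dataset (fields are assumptions)."""
    source: str               # where the data came from
    license: str              # the right under which the data is used
    copyright_cleared: bool   # does usage respect copyright law?
    bias_audit_passed: bool   # has the dataset passed a bias review?

def max_fine(annual_revenue_eur: float) -> float:
    """Penalty ceiling: 35 million euros or 7% of annual revenue, whichever is higher."""
    return max(35_000_000.0, 0.07 * annual_revenue_eur)

if __name__ == "__main__":
    record = DatasetProvenance(
        source="licensed news archive",
        license="commercial-use agreement",
        copyright_cleared=True,
        bias_audit_passed=True,
    )
    print(record)
    # At 1 billion euros of revenue, 7% (70M EUR) exceeds the 35M floor.
    print(f"Maximum exposure: {max_fine(1_000_000_000):,.0f} EUR")
```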
IBM’s Ethical Principles in Practice
Trust is central to any company’s brand. It’s one thing to have ethical principles, but they are meaningless without a mechanism for holding yourself accountable.
Any organization using AI at scale needs an AI ethics board or equivalent governing mechanism.
It’s important to make your AI decisions in an environment of open consideration and debate, with a diverse group of colleagues who view the business through the lens of ethics and who bring different backgrounds, domain expertise and experiences to that debate: lawyers, policy professionals, communications professionals, HR, researchers, sellers, product teams and more. Ultimately, the judgment call is better served by the board than by individual actors.
Not because the individuals aren’t ethical, but because, in addition to bringing cross-disciplinary expertise, the board proactively creates an environment insulated from the immediate pressure of performance and revenue. And then, through that board, you work to build an ethics framework into your corporate practices and instill a culture of trustworthy AI – and ensure you have mechanisms to hold your company accountable.
At IBM, the heart of our framework is based on our values and principles around AI and other emerging technologies – this approach is broadly applicable and may inform your explorations.
Our 3 Core Principles:
1. The purpose of AI is to augment human intelligence: We believe AI should make all of us better at our jobs, and that the benefits of the AI era should touch the many, not just the elite few.
2. Data and insights belong to their creator: Our clients’ data is their data, and their insights are their insights. We believe that government data policies should be fair and equitable and prioritize openness.
3. Technology must be transparent and explainable: Companies must be clear about who trains their AI systems, what data was used in training and, most importantly, what went into their algorithms’ recommendations.
Our 5 Pillars:
1. Explainability. Good design does not have to sacrifice transparency in creating a seamless experience. We can—and should—do both.
2. Fairness. If AI is properly calibrated, it can assist humans in making fairer choices. It can help us overcome our own biases.
3. Robustness. AI must be secure and robust. Part of being able to trust your AI is knowing that it hasn’t been tampered with by malicious actors. And in turn, that helps consumers and clients trust your brand.
4. Transparency. Transparency reinforces trust, and the best way to promote transparency is through disclosure. Plainly identify when and how AI is at work in your business. We do that by publishing AI fact sheets, like the nutrition label on the foods you eat (a sketch follows this list).
5. Privacy. AI systems must prioritize privacy and data rights. When people allow you access to their data, that’s an act of trust. Privacy is central to any AI accountability framework.
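To make the nutrition-label analogy concrete, here is a minimal sketch of what a fact sheet for one model might record. The fields and figures are illustrative assumptions, not IBM’s published FactSheets schema.

```python
# A minimal AI "fact sheet": disclose, in plain language, what a model is for,
# what it was trained on, how it performs, and where it falls short.
# All names and numbers below are hypothetical.
fact_sheet = {
    "model_name": "loan-eligibility-classifier",
    "purpose": "Assist underwriters in screening loan applications",
    "training_data": "Anonymized application records, 2015-2023 (licensed)",
    "evaluation": {"accuracy": 0.91, "false_positive_rate": 0.04},
    "known_limitations": [
        "Lower accuracy for applicants with thin credit histories",
        "Not evaluated on markets outside North America",
    ],
    "human_oversight": "All automated denials are reviewed by a human underwriter",
    "last_reviewed": "2024-06-01",
}

if __name__ == "__main__":
    for field, value in fact_sheet.items():
        print(f"{field}: {value}")
```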
No matter your business or AI use case, once you start defining your own principles and pillars you’ll find that we all have a lot in common. We all want to build strong, trusted brands. We all want to do the right thing.
Looking for more? Discover the IBM Enterprise Guide to AI Governance