Q: “Can I use AI this way?” A: It depends...
All the data collection incentivized by AI raises plenty of privacy-related challenges. We’re past the preparation stage now – AI is here to stay – but that data collection, and the management burden it creates, means organizations need to be proactive about how they’re going to stay compliant.
One key tool to help you do this is a documented AI use policy. Ready to learn what that is, what it should include, and why you need one? Let’s dive in:
What is this?
An AI use policy lays out generally how your organization classifies AI systems and their associated risks, but it gets more specific than that. This policy should also document a list, created by your AI governance committee, of approved and prohibited AI systems, and should outline the procedure for getting new AI systems or use cases approved.
In short, the AI use policy should be the rule book that anyone in your organization can look to when asking the question “Can I use AI this way?” and that gives them guidance for what to do when the answer is yes.
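To make the "rule book" idea concrete, here is a minimal sketch of how an approved/prohibited list and the fallback approval procedure could be encoded. All tool names, risk tiers, and the `check_ai_use` helper are hypothetical illustrations, not part of any specific policy framework:

```python
# Hypothetical encoding of an AI use policy's approved and prohibited lists,
# as maintained by an AI governance committee.
AI_USE_POLICY = {
    "approved": {"internal-copilot", "translation-service"},
    "prohibited": {"unvetted-consumer-chatbot"},
}

def check_ai_use(tool: str) -> str:
    """Answer "Can I use AI this way?" for a proposed tool."""
    if tool in AI_USE_POLICY["approved"]:
        return "approved"
    if tool in AI_USE_POLICY["prohibited"]:
        return "prohibited"
    # Anything unlisted follows the approval procedure the policy defines.
    return "needs review by the AI governance committee"

print(check_ai_use("internal-copilot"))  # approved
print(check_ai_use("new-image-model"))   # needs review by the AI governance committee
```

The point of the sketch is the triage logic: explicit allow, explicit deny, and a default path into the committee's approval process rather than a silent yes or no.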
How does it affect your company?
The goal of an AI use policy like this is to establish a baseline for the responsible, transparent, and accountable use of AI systems throughout your organization. The measures outlined in this policy are designed to uphold high ethical standards, and should align with your company’s overall values.
Not only does this make a policy like this easier to write – you should be adapting existing values to account for AI, not creating new ones from scratch – but it also gives everyone from the top down a common understanding of what you will and won’t allow when it comes to AI. With everyone pulling in the same direction, you can ensure the safer use of AI systems and begin to build trust.
How can you put it into practice?
Start by bringing in your responsible AI principles. These documents shouldn’t live in a vacuum; they should inform each other. Once you have that, you can broaden your AI use policy to include policy statements – broader protective shields that include controls and risk mitigation tools – to set the foundation for safe AI use throughout your organization.
Check out OneTrust’s AI policy statements (which include familiar ideas like accountability, data privacy, and third-party risk management) and learn how you can begin to create your own AI use policy.
Timeline: AI's emerging trends and journey
Your AI 101: What are high-risk AI systems?
The EU AI Act defines high-risk AI systems as those that pose significant risks to the health, safety, and fundamental rights of individuals.
Key provisions include mandatory risk assessments, transparency requirements, human oversight measures, and more. While this increases obligations for providers of high-risk AI, the goal is to ensure these systems can be trusted and do not infringe on rights and safety.
If your organization uses or wants to use high-risk AI, you’ll need to get ready for some extra hoops to jump through before deploying those systems.
Check out this piece on the requirements of high-risk AI systems by Iain Borner, CEO of The Data Privacy Group Ltd.
Follow this human
Ashley Casovan is the director of the AI Governance Center at IAPP. Ashley has her finger on the pulse of emerging regulation, industry events you don’t want to miss, and experts leading the field in research and legislation.