Four Frameworks to Guide Ethical and Responsible AI Use

Welcome to Leading Disruption, a weekly letter about disruptive leadership in a transforming world. Every week we’ll discover how the best leaders set strategy, build culture, and manage uncertainty, all in service of driving disruptive, transformative growth. For more insights like these, join my private email list.

AI is amazing.

It can create a beautiful piece of art using images produced by other artists. But those source images are often copyrighted, so who owns the intellectual property? What rights do they have? How can you be transparent about the IP you’re using? How do you pay the original creators? Do you pay the original creators?!

Opinions vary. (Naturally.)

In our highly charged, highly polarized society, it’s hard to get anybody to agree on what’s right and what’s wrong. Establishing shared values is a challenge.

You might think AI is an area with a clear delineation between right and wrong. But what you see as helpful, right, or good, someone else could interpret as unhelpful, wrong, or bad.

This is the challenge for leaders: As we develop and use AI in our organizations, we have to be clear about the values that inform our tools and govern AI's responsible and ethical use.

So where do you start? Before implementing policies and procedures, you must understand the implications.

AI isn’t always perfect…or ethical

We need to understand that AI is a tool. We are the ones building it. We’re the ones guiding the output. And that comes with a unique set of problems:

  1. The benefits of AI could be distributed unevenly. We need to understand what the true impact will be. Depending on your circumstances, you might not have the same opportunity to tap into AI as others. If not everyone has equitable access to AI tools, and some people benefit while others are harmed, what responsibility do we have to address that inequity?
  2. The spread of (mis)information could accelerate. AI can create tons of information and quickly distribute it on social media. Whether that information is helpful or not is in the eye of the beholder, but how do we discern what’s good information? How do we know what was created by humans? How do we know what’s written from a biased perspective? How do we think about misinformation?
  3. The way we train AI matters. Many people think AI should be free of bias. But what if you want bias? What if you want to train AI on a particular point of view? We know that AI tends to apply biases already present in the data, so biases could be amplified — and that could be good or bad. But we must be aware of that possibility and intentional about how we train AI.

As leaders, we're excited to see what happens because AI is developing so quickly. But we also need to be aware of the ethical implications and our responsibility to anticipate the impacts.

We need to define a common starting point

There’s no single right way to do this, and no one “right” answer. But we need to put frameworks in place that contribute to the ethical use of AI. Here are a few models I like that disruptive leaders can take inspiration from:

  • Microsoft’s Framework for Responsible AI is broken into two areas: ethical and explainable. It poses a series of fundamental questions: Can you trust that AI is basically fair, inclusive, and accountable? Can you explain what AI has done? Is there transparency around how the data is used? Is the data secure?
  • The White House developed the AI Bill of Rights, which outlines citizens' rights. We have a right to safe and effective systems that protect us from inappropriate, irrelevant, and discriminatory data use. We have some agency over how our data is used. We have a right to know when an automated system is being used. And we can opt out of AI and talk to a human if we’d like. I appreciate that this model is written from a user’s perspective!
  • The EU created the Artificial Intelligence Act, which defines four risk areas — each with a different level of regulation. The first, no risk, carries no regulation and applies to things like spam filters. Limited risk, which includes chatbots, is focused on transparency and how users interact with AI. The third category is high risk, which carries much more regulation because it could cause harm or limit opportunities; this includes facial recognition and AI that offers legal advice or performs HR functions. Then there’s unacceptable risk: categories that clearly threaten people. I like this model because it acknowledges that not all AI tools are built the same — and that AI is just a tool. Humans are what need to be regulated. (For a concrete illustration of the tiers, see the sketch after this list.)
  • The National Institute of Standards and Technology has a framework that involves mapping, measuring, and managing. It defines what is good, what is bad, and the impacts of AI. It sets benchmarks that track and assess the benefits and risks. And it identifies the need for tools to mitigate the risks. It’s all about anticipating the issues and putting risk management resources in place now — not when there’s a problem.
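
To make the EU's tiered idea concrete, here is a minimal sketch in Python of how a team might tag its own AI use cases by risk tier and map each tier to a governance action. The Act defines its tiers in legal text, not code; the use cases, tier assignments, and governance actions below are illustrative assumptions of mine, not official classifications.

```python
from enum import Enum

class RiskTier(Enum):
    """EU AI Act-style risk tiers, from least to most regulated."""
    MINIMAL = "minimal"            # e.g., spam filters: no extra obligations
    LIMITED = "limited"            # e.g., chatbots: transparency obligations
    HIGH = "high"                  # e.g., hiring tools, facial recognition
    UNACCEPTABLE = "unacceptable"  # e.g., clear threats to people: prohibited

# Hypothetical internal inventory mapping use cases to tiers.
USE_CASE_TIERS = {
    "email_spam_filter": RiskTier.MINIMAL,
    "customer_support_chatbot": RiskTier.LIMITED,
    "resume_screening": RiskTier.HIGH,
    "social_scoring": RiskTier.UNACCEPTABLE,
}

# Illustrative governance action per tier: oversight scales with potential harm.
GOVERNANCE_ACTIONS = {
    RiskTier.MINIMAL: "no special review needed",
    RiskTier.LIMITED: "disclose to users that they are interacting with AI",
    RiskTier.HIGH: "full risk assessment, human oversight, and audit trail",
    RiskTier.UNACCEPTABLE: "do not build or deploy",
}

def governance_required(use_case: str) -> str:
    """Return the governance action for a use case, or flag it for review."""
    tier = USE_CASE_TIERS.get(use_case)
    if tier is None:
        return "unclassified: review before deployment"
    return GOVERNANCE_ACTIONS[tier]

if __name__ == "__main__":
    for case in USE_CASE_TIERS:
        print(f"{case}: {governance_required(case)}")
```

The point of the exercise is simply that regulation should scale with potential harm: a spam filter and a resume screener should not face the same review process.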

Regardless of the framework you adopt, leaders need to consider AI’s role in and impact on their organizations. Is AI a fantastic tool? Yes, but it needs governance because it’s accompanied by ethical implications that will become a highly disruptive force in our communities.

You may be thinking, “Oh, not governance!” Instead of looking at processes and policies as stoplights, think of them as guardrails that allow everyone to go fast on the AI superhighway. A few simple rules of the road will help prevent major accidents later.

Next week, I’ll be joined by my colleague, Scott Siegel, for a discussion about neuroscience analytics. This is another area rife with ethical issues, since it collects data and uses it to understand what people pay attention to — especially in retail environments. I hope you’ll join me on Tuesday, April 4 at 9 am PT for this conversation!

Your Turn

What concerns you about AI? What do you think could go wrong? (Remember, we’re not talking about Terminator-style Skynet here!) Are you worried that it won’t understand you? That it will give you incorrect information? Are you afraid you won’t be able to trust it — or the answers it gives you? I’d love to hear what you think!
