Product managers: Ethics and AI

"First, do no harm."

The European Union has proposed a legislative framework for regulating the development and deployment of AI within the EU. The European Parliament plans to vote on adopting the regulation by the end of 2023.

In April 2023, tech leaders including Elon Musk and Steve Wozniak signed an open letter published by the Future of Life Institute calling for a six-month moratorium on all training of AI systems more powerful than GPT-4. Former Microsoft CEO Bill Gates responded by insisting it would not solve the challenges ahead. Computer scientist Andrew Ng, founder of Google Brain, called the moratorium “a terrible idea” because government intervention would be the only possible way to enforce it.

Honestly, I don't want governments involved in controlling work on AI innovations—or forbidding what they don't understand. After all, these are the folks who can barely use their phones and don’t know the difference between Facebook and TikTok.

Steven J. Vaughan-Nichols, in "There's No Stopping AI Now":

By and large, our elected leaders don't have a clue about technology. So for better or worse, we, the big tech companies, and we, the business users of generative AI, will be the ones calling the shots.

I’m reminded of the Three Laws of Robotics devised in 1942 by prolific science author Isaac Asimov. They are:

  • First Law. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • Second Law. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • Third Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Asimov also added a fourth, or zeroth law, to precede the others:

  • Zeroth Law. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

I wonder if governments or scientists or company leaders have read these laws—or even know about them.

And they are really just as simple as, “First, do no harm.”

Not “do no harm… unless it generates clicks.”

Not “do no harm… unless it generates revenue.”

Not “do no harm… unless it generates more visibility.”

While waiting for politicians and company leaders to catch up, product managers can take the lead. Brainstorm both good and bad scenarios for products and features. How could they be used and misused? How might they impact personal privacy and security? I’m sure (or I hope!) those who created social media platforms did not anticipate how they would be used to violate personal privacy and spread hate.

Product teams should be vigilant in considering the negative impacts of their strategic decisions.

What’s the harm in AI-generated results?

Let’s ask ChatGPT.

AI-generated results can have harmful consequences if they are biased, inaccurate, or used inappropriately. Here are a few examples:

  • Bias: If the AI model is trained on biased data or programmed with biased algorithms, it can produce biased results that may disproportionately affect certain groups of people. For example, an AI model used in the hiring process may inadvertently discriminate against candidates based on race or gender.
  • Inaccuracy: AI-generated results may not always be accurate, especially if the model has not been trained on enough data or the data it was trained on is not representative of the real world. Inaccurate results can have serious consequences in fields such as healthcare, where a misdiagnosis or incorrect treatment recommendation could be life-threatening.
  • Inappropriate use: AI-generated results can be misused if they are used for purposes they were not intended for. For example, an AI model used to predict criminal behavior may be used to unfairly target certain individuals or groups.
  • Lack of transparency: AI-generated results can be difficult to interpret or understand, especially if the model is a "black box" that does not provide any explanation for its decisions. This lack of transparency can make it difficult to identify and correct any errors or biases in the model.

Overall, it's important to recognize that AI-generated results are not infallible and should be used with caution. It's important to consider the potential harms and biases of AI-generated results and to ensure that they are used ethically and responsibly.
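ChatGPT's first point, bias in hiring models, can be made concrete. Here is a minimal sketch, using entirely hypothetical hire/reject decisions, of the kind of disparate-impact check a product team could run on a model's output before shipping it. The data, function names, and group labels are illustrative assumptions, not a real audit:

```python
# Sketch (hypothetical data): checking an AI hiring model's recommendations
# for disparate impact across two demographic groups.

def selection_rate(decisions):
    """Fraction of candidates the model recommends for hire."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the two groups' selection rates.

    Values far below 1.0 suggest the model disfavors group A. The
    "four-fifths rule" used in US employment guidelines flags ratios
    below 0.8 for review.
    """
    return selection_rate(group_a) / selection_rate(group_b)

# 1 = recommended for hire, 0 = rejected (made-up numbers)
group_a = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]   # 20% selected
group_b = [1, 1, 0, 1, 0, 1, 0, 1, 0, 0]   # 50% selected

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Flag for review: model may disproportionately reject group A")
```

A check like this doesn't prove a model is fair, but it turns "consider the potential harms" from a slogan into a number a product team can track release over release.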

But what about copyright violations and content appropriation?

Those of us who write about product strategy, planning, and growth hope you’ll come to our websites for other insightful articles and videos. Many websites could become a wasteland if their content is absorbed into the Borg of AI search.

Here's an easy fix that wouldn’t involve governments: AI search teams could voluntarily provide citations to their sources. (By the way, you can ask, “Please provide sources for the previous answer” or just “share citations.”)

Back in the day, Wikipedia let anyone write anything. Then the librarians got involved. They deleted most of the nonsense pages and required citations for statements.

Consider the ethical implications of your product’s features.

Transparency. Your software should be transparent about how it works, what data it collects, and how it uses that data. Users should have a clear understanding of what they agree to when they use your software.

Privacy. Users have a right to privacy, and you should protect their data. This means being transparent about what data is being collected, how it might be used, and giving users control over their data.

Accessibility. Your software should be accessible to all users, regardless of their abilities or disabilities. This means designing features that are easy to use and navigate, and providing alternative ways for users to interact with your software if needed.

Safety. Your software should not harm users or others. This means ensuring that your software is secure and cannot be used to harm or deceive users.

As a product manager, you have a responsibility to your users and the wider community to deliver ethical software that contributes positively to society. That includes considering how the feature could be misused to spread bias or hate.

This is an exciting time. Generative AI tools are making many lives easier.

But first, do no harm.

Mike Hopkin

Driving Business Growth With Strategy, TRUST and Accountability | Author | Business Growth Strategist | Certified Profit Coach | Product Strategist

1 yr

"As a product manager, you have a responsibility to your users and the wider community to deliver ethical software that contributes positively to society." This is an important point that is unfortunately too often forgotten, or worse, ignored.

John Billington

Sr. Product Manager - Consumer Hardware, Software, IoT and SaaS.

1 yr

This is a great list of ethical considerations in product and I totally agree that PMs need to lead the way here and look at areas where potential harm can creep in. It's always good to ask the question: just because we can build this feature, should we?

Dutch deVries

Big Idea Strategist & Philosopher - Helping individuals and teams solve problems and improve the world around them, one step, one task, one goal, one solution at a time.

1 yr

I agree 100% that Product Managers should not only consider what solutions need to do, but also what they shouldn’t do. It was a more common mindset in the military programs I was involved with, and I try to encourage it with others. You phrased it well: “Brainstorm both good and bad scenarios for products and features. How could they be used and misused?”

Geoffrey Moore's 1998 edition of Crossing the Chasm still captures my overall impression: "Today, however, AI has been relegated to the trash heap. Despite the fact that it was—and is—a very hot technology, and that it garnered strong support from the early adopters, who saw its potential for using computers to aid human decision making, it has simply never caught on as a product for the mainstream market."

Rick Morse

Executive Coach for Product Management and Innovation

1 yr

This also reminds me of one of the most fundamental confusions that engineers can be sucked into: precision is NOT accuracy. While precision is thrown around as a highly sought-after attribute in everything from company taglines to TV ads, it is useless if the answers are not also accurate.

I come from the last generation to use a slide rule in school. I learned not to depend on a machine for the answer, but only for the precision of the answer. This lesson was brought home when I began working in robotics: we could use higher-resolution sensors to improve precision, but if the arms were not calibrated fully in three dimensions, they would place windshields THROUGH the front of the car. This not only spectacularly destroyed the windshield glass but also made a mess of the partially assembled car. How accurately a robot performed was far more important than how precisely it used its inputs to create a certain output.

AI needs to be approached the same way. Use your knowledge of the answers to ensure you are accurate, and then use AI tools to get greater precision. It is too easy to pollute today's information streams with false information that leads to erroneous results. Information is NOT knowledge.
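The precision-vs-accuracy distinction above can be sketched numerically. In this toy example (all readings and names are made up), one sensor is tightly repeatable but biased, while another is noisy but centered on the true value:

```python
# Precision vs. accuracy with made-up readings of a part whose
# true position is 100.0 mm.
from statistics import mean, stdev

TRUE_VALUE = 100.0

precise_but_wrong = [103.01, 103.02, 102.99, 103.00, 102.98]  # tight, biased
accurate_but_noisy = [99.2, 101.1, 100.4, 98.9, 100.5]        # loose, centered

def report(name, readings):
    bias = mean(readings) - TRUE_VALUE   # accuracy: closeness to the truth
    spread = stdev(readings)             # precision: repeatability
    print(f"{name}: bias={bias:+.2f} mm, spread={spread:.2f} mm")

report("precise_but_wrong", precise_but_wrong)
report("accurate_but_noisy", accurate_but_noisy)
```

The first sensor would place every windshield in exactly the same wrong spot; the second would place them roughly in the right one. The same logic applies to AI output: a confidently repeated wrong answer is still wrong.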
