Product managers: Ethics and AI
Steve Johnson
I help teams overcome the chaos in managing products. Author, speaker, guitar player, clean-shaven since 2024.
"First, do no harm."
The European Union has proposed a legislative framework for regulating the development and deployment of AI within the EU. The European Parliament plans to vote on adopting the regulation by the end of 2023.
In April 2023, tech leaders including Elon Musk and Steve Wozniak signed an open letter published by the Future of Life Institute calling for a six-month moratorium on all training of AI systems more powerful than GPT-4. Former Microsoft CEO Bill Gates responded by insisting it would not solve the challenges ahead. Computer scientist Andrew Ng, founder of Google Brain, called the moratorium “a terrible idea” because government intervention would be the only possible way to enforce it.
Honestly, I don't want governments involved in controlling work on AI innovations—or forbidding what they don't understand. After all, these are the folks who can barely use their phones and don’t know the difference between Facebook and TikTok.
Steven J. Vaughan-Nichols in "There's no stopping AI Now":
By and large, our elected leaders don't have a clue about technology. So for better or worse, we, the big tech companies, and we, the business users of generative AI, will be the ones calling the shots.
I’m reminded of the Three Laws of Robotics devised in 1942 by prolific science author Isaac Asimov. They are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Asimov also added a fourth, or zeroth, law to precede the others:

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
I wonder if governments or scientists or company leaders have read these laws—or even know about them.
And they are really just as simple as, “First, do no harm.”
Not “do no harm… unless it generates clicks.”
Not “do no harm… unless it generates revenue.”
Not “do no harm… unless it generates more visibility.”
While waiting for politicians and company leaders to catch up, product managers can take the lead. Brainstorm both good and bad scenarios for products and features. How could they be used and misused? How might they impact personal privacy and security? I’m sure (or I hope!) those who created social media platforms did not anticipate how they would be used to violate personal privacy and spread hate.
Product teams should be vigilant in considering the negative impacts of their strategic decisions.
What’s the harm in AI-generated results?
Let’s ask ChatGPT.
AI-generated results can have harmful consequences if they are biased, inaccurate, or used inappropriately.
Overall, it's important to recognize that AI-generated results are not infallible and should be used with caution. It's important to consider the potential harms and biases of AI-generated results and to ensure that they are used ethically and responsibly.
But what about copyright violations and content appropriation?
Those of us who write about product strategy, planning, and growth hope you’ll come to our website for other insightful articles and videos. Many websites could become a wasteland if their content is absorbed into the borg of AI search.
Here's an easy fix that wouldn’t involve governments: AI search teams could voluntarily provide citations to their sources. (By the way, you can ask, “Please provide sources for the previous answer” or just “share citations.”)
Back in the day, Wikipedia let anyone write anything. Then the librarians got involved. They deleted most of the nonsense pages and required citations for statements.
Consider the ethical implications of your product’s features.
Transparency. Your software should be transparent about how it works, what data it collects, and how it uses that data. Users should have a clear understanding of what they agree to when they use your software.
Privacy. Users have a right to privacy, and you should protect their data. This means being transparent about what data is being collected, how it might be used, and giving users control over their data.
Accessibility. Your software should be accessible to all users, regardless of their abilities or disabilities. This means designing features that are easy to use and navigate, and providing alternative ways for users to interact with your software if needed.
Safety. Your software should not harm users or others. This means ensuring that your software is secure and cannot be used to harm or deceive users.
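These principles can be made concrete in code. As a minimal sketch (the class and field names here are hypothetical, not drawn from any specific product), a feature that collects usage data might gate every collection on explicit, purpose-specific, and revocable consent — covering transparency (the user sees exactly which purposes they agreed to), privacy (consent can be withdrawn at any time), and safety (nothing is collected by default):

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class ConsentRecord:
    """What the user agreed to, and when -- supports transparency."""
    purposes: set[str]          # e.g. {"crash_reports"}
    granted_at: datetime
    revoked: bool = False


class AnalyticsGate:
    """Collect data only for purposes the user explicitly consented to."""

    def __init__(self) -> None:
        self._consents: dict[str, ConsentRecord] = {}

    def grant(self, user_id: str, purposes: set[str]) -> None:
        self._consents[user_id] = ConsentRecord(purposes, datetime.now(timezone.utc))

    def revoke(self, user_id: str) -> None:
        # Privacy: users keep control over their data after the fact.
        if user_id in self._consents:
            self._consents[user_id].revoked = True

    def may_collect(self, user_id: str, purpose: str) -> bool:
        # Default is "no": absent, revoked, or out-of-scope consent all deny.
        rec = self._consents.get(user_id)
        return rec is not None and not rec.revoked and purpose in rec.purposes


gate = AnalyticsGate()
gate.grant("u1", {"crash_reports"})
print(gate.may_collect("u1", "crash_reports"))   # True
print(gate.may_collect("u1", "ad_targeting"))    # False
gate.revoke("u1")
print(gate.may_collect("u1", "crash_reports"))   # False
```

The design choice worth noting is that the gate denies by default: a purpose the user never heard of is automatically off-limits, which is the code-level version of "users should have a clear understanding of what they agree to."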
As a product manager, you have a responsibility to your users and the wider community to deliver ethical software that contributes positively to society. That includes considering how the feature could be misused to spread bias or hate.
This is an exciting time. Generative AI tools are making many lives easier.
But first, do no harm.
Driving Business Growth With Strategy, TRUST and Accountability | Author | Business Growth Strategist | Certified Profit Coach | Product Strategist
"As a product manager, you have a responsibility to your users and the wider community to deliver ethical software that contributes positively to society." This is an important point that is unfortunately too often forgotten, or worse, ignored.
Sr. Product Manager - Consumer Hardware, Software, IoT and SaaS.
This is a great list of ethical considerations in product, and I totally agree that PMs need to lead the way here and look at areas where potential harm can creep in. It's always good to ask the question: just because we can build this feature, should we?
Big Idea Strategist & Philosopher - Helping individuals and teams solve problems and improve the world around them, one step, one task, one goal, one solution at a time.
I agree 100% that product managers should consider not only what solutions need to do, but also what they shouldn’t do. It was a more common mindset in the military programs I was involved with, and I try to encourage it in others. You phrased it well: “Brainstorm both good and bad scenarios for products and features. How could they be used and misused?”
Geoffrey Moore's 1998 edition of Crossing the Chasm still captures my overall impression: "Today, however, AI has been relegated to the trash heap. Despite the fact that it was—and is—a very hot technology, and that it garnered strong support from the early adopters, who saw its potential for using computers to aid human decision making, it has simply never caught on as a product for the mainstream market."
Executive Coach for Product Management and Innovation
This also reminds me of one of the most fundamental confusions that engineers can be sucked into: precision is NOT accuracy. While precision is thrown around as a highly sought-after attribute in everything from company taglines to TV ads, it is useless if the answer is not also accurate.

I come from the last generation to use a slide rule in school. I learned not to depend on a machine for the answer, but only for the precision of the answer. This lesson was brought home when I began working in robotics: while we could use better-resolution sensors to improve precision, if the arms were not calibrated fully in three dimensions, they would place windshields THROUGH the front of the car. This not only spectacularly destroyed the windshield glass but also made a mess of the partially assembled car. How accurately a robot performed was far more important than how precisely it used its inputs to create a certain output.

AI needs to be approached the same way. Use your knowledge of the answers to ensure you are accurate, and then use AI tools to get greater precision. It is too easy to pollute today's information streams with false information that leads to erroneous results. Information is NOT knowledge.
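The precision-versus-accuracy distinction in that comment can be illustrated with a toy numeric sketch (the numbers are purely illustrative): one set of measurements is tightly clustered around the wrong value, the other is scattered but centered on the truth.

```python
import statistics

true_value = 100.0

# Precise but inaccurate: tightly clustered around the wrong value.
precise_wrong = [92.01, 92.02, 91.99, 92.00]

# Less precise but accurate: scattered, yet centered on the true value.
rough_right = [97.0, 103.0, 99.0, 101.0]

for name, xs in [("precise_wrong", precise_wrong), ("rough_right", rough_right)]:
    bias = abs(statistics.mean(xs) - true_value)   # accuracy: distance from truth
    spread = statistics.stdev(xs)                  # precision: tightness of cluster
    print(f"{name}: bias={bias:.2f}, spread={spread:.2f}")
```

The first dataset has a tiny spread but a large bias; the second has a larger spread but essentially no bias — a higher-resolution sensor improves only the first number, while calibration fixes the second.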