Unethical AI? It Learnt From Us: Should We Embrace Selective Training of AI

We’ve all seen it, haven’t we? AI making decisions that are biased, discriminatory, or outright unethical. It could be a hiring algorithm that unfairly favors one group over another, or a chatbot spouting offensive remarks. When these incidents happen, we often hear the same refrain: “The AI is broken.” But is it? Or is it just mirroring the data we’ve fed it?

Let’s talk about the elephant in the room: AI didn’t invent unethical behavior. It learned it from us—humans. The datasets we use to train AI reflect our history, our biases, and our imperfections. So, should we embrace selective training of AI to build systems that reflect our better selves rather than our worst tendencies?

AI is Only as Good as Its Data

Let me give you an example. In 2018, Amazon scrapped an AI recruitment tool after discovering it was biased against women. The AI had been trained on résumés submitted over a 10-year period, most of which came from men. As a result, it downgraded résumés that contained the word “women’s” (as in “women’s chess club captain”) or that referenced women’s colleges. Was the AI inherently sexist? No. It was simply reflecting the patterns in its training data.

Similarly, in 2016, Microsoft’s AI chatbot, Tay, was released onto Twitter and turned into a hate-spewing bot within 24 hours. Why? Because it learned from interactions with users who flooded it with offensive content.

These examples are stark reminders that AI doesn’t think or judge—it learns. And what it learns depends entirely on the data we provide.

The Case for Selective Training

If AI is shaped by its training data, shouldn’t we be more intentional about the data we use? Some argue that selective training is a slippery slope, but I believe it’s not just necessary—it’s ethical. Let me explain.

Take the healthcare industry as an example. AI systems are increasingly being used to diagnose diseases, predict patient outcomes, and even recommend treatments. But what happens if these systems are trained on datasets that primarily include patients from one demographic? You get algorithms that work well for one group and fail miserably for others. This is not just a flaw; it’s a life-or-death issue.

Selective training—choosing data that is diverse, representative, and free from harmful biases—can help ensure that AI in healthcare is equitable. Companies like IBM Watson Health are already moving in this direction by curating datasets that prioritize diversity.
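To make that concrete, here is a minimal sketch of one blunt form of selective training: rebalancing a dataset so that no demographic group dominates before training even begins. It’s plain Python; the record structure and the “demographic” field are hypothetical illustrations, not taken from any real system.

```python
import random
from collections import defaultdict

def rebalance_by_group(records, group_key="demographic", seed=42):
    """Downsample every group to the size of the smallest one,
    so no single group dominates the training set."""
    groups = defaultdict(list)
    for record in records:
        groups[record[group_key]].append(record)

    target = min(len(members) for members in groups.values())
    rng = random.Random(seed)

    balanced = []
    for members in groups.values():
        balanced.extend(rng.sample(members, target))
    rng.shuffle(balanced)
    return balanced

# Hypothetical usage: each record is a dict of features plus a group label.
patients = [
    {"demographic": "group_a", "age": 54, "outcome": 1},
    {"demographic": "group_a", "age": 61, "outcome": 0},
    {"demographic": "group_b", "age": 47, "outcome": 1},
]
training_set = rebalance_by_group(patients)
```

Downsampling is the crudest option, since it throws data away; reweighting examples or collecting more data from underrepresented groups are gentler alternatives. But the principle is the same: the composition of the training set should be a deliberate choice, not an accident.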

Another compelling example comes from content moderation. Social media platforms like Facebook and Twitter use AI to flag harmful content. If these systems are trained without considering cultural nuances or context, they risk censoring harmless content or, worse, allowing harmful content to slip through. Selective training, tailored to the context in which the AI operates, can help these systems perform more ethically and effectively.
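One way to catch that failure mode is to stop judging a moderation model by a single global accuracy number and instead measure its false-positive rate separately for each locale or dialect group. Here is a minimal sketch, with entirely hypothetical groups, labels, and predictions:

```python
from collections import defaultdict

def false_positive_rate_by_group(examples):
    """examples: iterable of (group, true_label, predicted_label),
    where label 1 means 'harmful'. Returns, per group, the share of
    harmless posts that were wrongly flagged."""
    flagged = defaultdict(int)   # harmless posts flagged as harmful
    harmless = defaultdict(int)  # total harmless posts per group
    for group, truth, pred in examples:
        if truth == 0:
            harmless[group] += 1
            if pred == 1:
                flagged[group] += 1
    return {g: flagged[g] / harmless[g] for g in harmless if harmless[g]}

# Hypothetical data: a dialect-specific phrase gets over-flagged.
data = [
    ("dialect_a", 0, 0), ("dialect_a", 0, 0), ("dialect_a", 1, 1),
    ("dialect_b", 0, 1), ("dialect_b", 0, 1), ("dialect_b", 0, 0),
]
print(false_positive_rate_by_group(data))
# {'dialect_a': 0.0, 'dialect_b': ~0.67} -> investigate dialect_b coverage
```

A large gap between groups is exactly the kind of signal that should send a team back to the training data.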

The Balancing Act: Ethics vs. Practicality

Of course, selective training isn’t a magic wand. It requires human oversight, which can introduce its own biases. It also demands more time, resources, and collaboration across diverse groups to ensure datasets are both representative and ethical. But the alternative—allowing AI to perpetuate and amplify existing inequalities—is far worse.

Here’s a practical approach:

  1. Audit Training Data: Regularly review datasets for biases and gaps (see the sketch after this list).
  2. Diverse Development Teams: Include people from different backgrounds in AI development to identify blind spots.
  3. Iterative Learning: Allow AI systems to learn and adapt over time, correcting for biases as they emerge.
  4. Transparency: Companies must be open about the data and methodologies they use, inviting external scrutiny to build trust.
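
For step 1, an audit doesn’t have to start sophisticated. Here is a minimal sketch in Python, assuming a tabular dataset with a group column and a binary label (both field names are hypothetical), that surfaces two things at once: how much of the data each group contributes, and how the label rates differ across groups.

```python
from collections import Counter, defaultdict

def audit_dataset(rows, group_key="group", label_key="label"):
    """Report each group's share of the dataset and its positive-label
    rate. A large gap on either measure is a signal to investigate."""
    counts = Counter(row[group_key] for row in rows)
    positives = defaultdict(int)
    for row in rows:
        positives[row[group_key]] += row[label_key]

    total = len(rows)
    for group, n in sorted(counts.items()):
        share = n / total
        pos_rate = positives[group] / n
        print(f"{group}: {share:.1%} of data, {pos_rate:.1%} positive labels")

# Hypothetical résumé-screening data: label 1 = 'advanced to interview'.
rows = [
    {"group": "men", "label": 1}, {"group": "men", "label": 1},
    {"group": "men", "label": 0}, {"group": "women", "label": 0},
]
audit_dataset(rows)
# men: 75.0% of data, 66.7% positive labels
# women: 25.0% of data, 0.0% positive labels
```

An imbalance like the one in this toy output is precisely what the Amazon recruiting tool inherited from its historical résumés.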

What I think we should do...

We can’t expect AI to be better than us if it learns exclusively from us. The question isn’t whether we should train AI selectively—it’s how we can do it responsibly.

As someone passionate about technology’s potential to transform the world, I believe we have a moral obligation to ensure AI doesn’t just mimic our world as it is but helps shape it into what it could be. Let’s teach AI to reflect our highest ideals, not our deepest flaws.

So, what do you think? Should we embrace selective training of AI as a standard practice, or is it a step too far? Let’s discuss—because the future of AI ethics isn’t just a tech issue; it’s a human one. Learn more from Courselana courses.

AUTHOR'S DECLARATION: I leverage AI for research and initial drafting of the key points in this article. I decide on the article's topic, create an appropriate prompt, and then use GPT-4 to search the web and generate relevant points. To create the final article, I put the points together, add personal context, and edit so that it benefits my audience and is NOT 100% AI-generated.
