Breathtaking AI: Transforming, reflecting and being ethical
DALL-E and MidJourney images exploded onto the internet recently as more and more people gained access to these machine-learning-powered tools. Curious? Then go sign up (it’s free and simple).
Both DALL-E and MidJourney impose restrictions on what users can generate (DALL-E: “Our content policy does not allow users to generate violent, adult, or political content, among other categories. We won’t generate images if our filters identify text prompts and image uploads that may violate our policies.”)
A certain set of ethics are therefore embedded in the usage of these tools — and the ones made explicit in this policy are just scratching the surface of the discussions, debates, and decisions that take place as machine-learning models are constructed, trained and deployed.
OK, so...what is ethical AI?
This month I had the opportunity to team up with our Experience Consulting team here at PwC Nederland to dive deep into that incredibly interesting topic. There are so many perspectives, ideas and points of view in this emerging and dynamic space of ethical AI that I ended up falling down a couple of rabbit holes while researching.
And I loved every moment of it.
With “Ethical AI” as our touchstone, we covered a huge amount of ground, as it’s such a broad space. There are technical, economic, social, societal, psychological, humanitarian, indigenous, futuristic, historical, gendered, cultural ... and so many more perspectives.
What are you aware of in the ethical AI space?
Take a moment.
I didn’t give it much thought before last month, and was mostly only aware of things that had been picked up by the media. And having scratched the surface, it’s super clear that I’m still mostly unaware. Yet the impact and the implications of AI on us as individuals, communities, societies and as humanity are profound.
Seven articles and papers cannot do any justice to the multiple voices in the space, and that’s not my intention here. Instead, these are simply the ideas and arguments that stood out for me: thought-provoking, fresh and with an underlying drive to encourage us all to do better in this emerging field.
Technology Changes What You Believe
Let’s start with Dr Juan Enriquez’s TEDxJohannesburg talk on “How Technology Transforms Our Ethics”. He doesn’t specifically mention AI but talks about technology in general – a reminder that AI ethics discussions take place in a broader digital and tech ethics context. He argues that technology changes our ethics by presenting us with better alternatives (lab-created meat; electric vehicles) which, as they become increasingly mainstream, make the pre-technology practice unethical.
Rules can — and do — change. What’s considered ethical can, already has, and will continue to change.
The Un-united Nations of AI
Talking of change, let’s zoom into the case of self-driving/autonomous cars. This Evil AI Cartoon made me laugh with its caption (“Approaching Swiss border. Activating strict rule-abiding mode”). The short post asks the question of whether we should have different AI ethics for different countries, that better reflect their various cultural values — instead of promoting universal, culturally invasive ethics. So, for example, your car’s behavior would change depending on the territory you were traveling through. I highly recommend y’all check out the other cartoons, and the dilemmas and questions they address!
Who Gets to Be Involved?
If we pick up on the idea of “reflection”, let’s take a moment to acknowledge that the AI space is, roughly speaking, pretty much dominated by Western, male, English-speaking folks. That has significant implications for development in the space, making Dr Tina Park’s punchy whitepaper on the concept of Inclusive AI all the more valuable.
You could easily argue that practicing inclusive, participatory AI is pie-in-the-sky, idealistic thinking that’s too expensive. And yes, it’s definitely easier and cheaper (in the short term) to design for people instead of with them. Yet inclusive AI makes for good business: as Dr Park notes, “Understanding the needs of a more diverse set of people expands the market for a given product or service. Once engaged, these people can then further improve an AI/ML product, identifying issues like bias in algorithmic systems.” Long-term perspectives and moral humility, as the example of the kidney allocation algorithm already shows, are valuable things.
Nesta, the UK’s social innovation agency, has a really interesting working paper on participatory AI within humanitarian innovation: businesses and corporations are not the only ones benefiting from the possibilities AI brings. Within the first couple of pages it acknowledges, however, that “using participatory methods to shape the development of AI technology is still far from the norm” and that it’s mostly driven by academics.
I really hope that changes.
Robochicken
The pieces so far have looked at how AI impacts us as humans...yet “there is a group of sentient beings who are also affected by AI, but are rarely mentioned within the field of AI ethics—the nonhuman animals.” Peter Singer and Yip Fai Tse argue that with AI systems likely to have significant impacts on billions of animals, the ethical implications are of very real concern for AI ethics. The mantra of “human-centered design” means we are blinkered when it comes to the bigger picture of our planet.
Finally, I will freely admit I am basically cheating at this point by including Dr Anna Jobin’s dive into the ethics of AI (you’ll see why when you visit the page). But I’m gonna do it anyway because if you’ve made it this far, I’m pretty certain you’ll enjoy it.
Again, this tiny selection doesn’t even begin to illustrate the full breadth of issues that ethical and responsible AI necessarily includes. But they’re a start.
Let me know which one intrigued you the most.
Quick disclaimer here: views expressed in this post are all my own, in a personal capacity.