AI ethics - deep dive notes

This article summarizes a deep dive session on AI Ethics from the AI Academy club, written by club member Helin Yontar. You will find practical questions about today’s problems with the ethical use of AI, and the results of our brainstorming session on how to achieve ethical AI.

We are not discussing AI Ethics because it is the next important problem of tomorrow. We are discussing it because it was already a problem yesterday.

Humanity has thousands of years of knowledge in fields such as engineering and medicine. Thanks to the experience accumulated in these fields, they have standard guidelines with well-established do’s and don’ts. AI, on the other hand, is a relatively new area, so it is challenging to foresee possible outcomes and establish a set of guidelines. That is why AI Ethics plays a crucial role: it is the field that searches for guidelines to advise on the design and outcomes of artificial intelligence.

The ethical problems AI faces today

During the deep dive session, we analyzed case studies regarding the unethical outcomes of AI algorithms. These include examples of gender bias in face recognition, the use of inaccurate AI tools in hiring, racial biases caused by flawed design, and many more. While bias and discrimination have always been present in society, AI can perpetuate these problems at an unprecedented scale, impacting billions of people. While analyzing these issues, we should not forget that artificial intelligence already powers the social media platforms, search engines, and other technologies that have become part of our everyday lives.

One case study we analyzed was about Clearview AI, an American facial recognition company that provides software to law enforcement, universities, companies, and individuals. The company's algorithm matches faces against a database of more than three billion images, and the app claims to be 99.6% accurate. Governments and institutions have used this technology to solve criminal cases.

Face recognition isn’t an inherently unethical application of AI. In February 2021, two men in Indiana, US, got into a fight in a park, which ended when one shot the other in the stomach. A bystander recorded the crime on a phone, so the police had a still of the gunman’s face to run through Clearview’s app, and they found the aggressor within 20 minutes. We can all agree that more efficient law enforcement is a desirable outcome, but the problem with Clearview is that its database was collected by scraping images from social media accounts without consent.

When the Swedish Police Authority started using Clearview, the Swedish Authority for Privacy Protection opened an investigation against the Police, which led to an administrative fine of approximately 250,000 euros on the Police Authority for infringements of the Criminal Data Act. Yet entities continue to use this highly accurate technology, despite its database being built from social media images scraped without consent.
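To make the matching mechanism concrete, here is a minimal sketch of how embedding-based face identification generally works: a model maps each face image to a numeric vector, and identification becomes a nearest-neighbor search over a database of vectors. This is a generic illustration, not Clearview's actual system; the embeddings, names, and threshold below are invented for the example.

```python
import numpy as np

# Hypothetical setup: in a real system, an embedding model (e.g. a CNN)
# maps each aligned face image to a vector. Here we fake 128-dim embeddings.
rng = np.random.default_rng(seed=0)
database = rng.normal(size=(1_000, 128))          # stand-in for billions of scraped faces
labels = [f"person_{i}" for i in range(1_000)]    # identity attached to each embedding

def normalize(v: np.ndarray) -> np.ndarray:
    """Scale vectors to unit length so dot products equal cosine similarity."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

database = normalize(database)

def match(query_embedding: np.ndarray, threshold: float = 0.7) -> str | None:
    """Return the closest identity if its similarity exceeds the threshold."""
    query = normalize(query_embedding)
    similarities = database @ query               # cosine similarity against every entry
    best = int(np.argmax(similarities))
    return labels[best] if similarities[best] >= threshold else None

# A still from a crime-scene video would be embedded and matched like this:
suspect = database[42] + rng.normal(scale=0.05, size=128)  # noisy view of person_42
print(match(suspect))  # -> "person_42"
```

Note that the ethical issue is orthogonal to the code: the same few lines of matching logic behave very differently depending on whether the database behind them was collected with consent.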

‘We are in a technological and societal context where everything is networked, everything is a piece of something else, everything can be utilized across the board intentionally or unintentionally.’

According to Giovanni Leoni, Lead of Global Range Digital Ethics at IKEA Range, something can be developed ethically and still have unethical repercussions in seemingly unrelated areas years down the line. Deepfakes are an example of this dynamic. In 2017, a Reddit user started posting pornographic videos featuring movie star Gal Gadot. She had never shot these videos: her face was put on a pornstar's body using AI algorithms. In an interview with Vice, the Reddit user claimed that he didn’t have any particular AI training, but had leveraged open-source research published by chip manufacturer NVIDIA for entirely different applications. Today, there are dozens of open-source projects that allow anyone with basic computer knowledge to produce their own deepfakes, even leveraging free cloud computing resources.
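Technically, a common recipe behind early face-swap deepfakes was an autoencoder with a single shared encoder and one decoder per identity; after training, faces of person A are routed through person B's decoder to produce the swap. The PyTorch sketch below is a conceptual illustration of that architecture only, not the code of any specific project, and all layer sizes are arbitrary.

```python
import torch
import torch.nn as nn

class FaceSwapAutoencoder(nn.Module):
    """Shared encoder, one decoder per identity (a common deepfake recipe)."""

    def __init__(self, latent_dim: int = 256):
        super().__init__()
        # Shared encoder: compresses any aligned 64x64 RGB face crop.
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 64 * 3, 1024), nn.ReLU(),
            nn.Linear(1024, latent_dim),
        )
        # One decoder per identity, each trained only on that person's faces.
        self.decoder_a = self._make_decoder(latent_dim)
        self.decoder_b = self._make_decoder(latent_dim)

    @staticmethod
    def _make_decoder(latent_dim: int) -> nn.Sequential:
        return nn.Sequential(
            nn.Linear(latent_dim, 1024), nn.ReLU(),
            nn.Linear(1024, 64 * 64 * 3), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor, identity: str) -> torch.Tensor:
        z = self.encoder(x)
        decoder = self.decoder_a if identity == "a" else self.decoder_b
        return decoder(z).view(-1, 3, 64, 64)

model = FaceSwapAutoencoder()
face_of_a = torch.rand(1, 3, 64, 64)  # stand-in for an aligned face crop
# Training reconstructs A with decoder_a and B with decoder_b; the swap is
# done at inference by routing A's encoding through B's decoder:
swapped = model(face_of_a, identity="b")
print(swapped.shape)  # torch.Size([1, 3, 64, 64])
```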

Einstein's famous discovery that energy and mass are different forms of the same thing set the stage for the creation of atomic bombs, even though he never thought of the theory as a weapon; the outcome was the work of many other actors. As with this discovery, technological developments are not necessarily unethical in themselves. Sometimes it is the combination of technologies and data sets, steered toward wrong intentions, that creates a negative impact, while each singular application on its own does not seem to be an issue.

In AI ethics, the common trait is humans; at the end of the day, it is a people problem.

Giovanni also added that we often think of AI applications and datasets as stand-alone, but technology is designed to be combined with other innovations, repurposed, and modified.

This raises tough questions about attributing responsibility. In the deepfake case, there are dozens of actors involved: NVIDIA, which produced the first “general” research paper; Google, whose open-source frameworks lowered the barrier to using that technology; the redditor who built the first applications; GitHub, which allowed people to share the code used to produce these deepfakes; the websites that hosted them; and so on.

Proceduralizing artificial intelligence has a significant role in defining responsibility. Often the people who build the technologies consider only a small fraction of the consequences, simply because it is not their job or focus. In robotics, there are already systems where groups of players are involved, and each player takes on a specific share of the responsibility. Developers can’t be responsible for the unpredictable behavior of the people who use their systems, but they are responsible for putting limits on what is reasonably foreseeable.

During our discussion, Léonard Van Rompaey, an expert in AI law, introduced an interesting concept from legal studies called procedural justice. The idea is that ‘justice doesn't just need to be done, it needs to be seen to be done.’ Having procedures and processes that are clear and enforced, and that ensure transparency, visibility, and accountability, therefore makes a difference independently of the outcome.

Open questions

The discussion around AI ethics is still young, and many questions remain unanswered. One topic we focused on is the relationship between transparency and accuracy.

We often hear lawmakers and companies emphasizing the value of “interpretable” or “transparent” AI. This means prioritizing algorithms that can give us an understanding of how they used the data to produce a specific prediction.

However, progress in AI seems to push towards more opaque technologies that are hard to interpret but often have higher prediction accuracy than older, more transparent approaches. Suppose you need to choose an algorithm to diagnose a specific disease, and you can choose between two options:

  • Algorithm A is fully transparent (you can interpret why it made a diagnosis), and has an accuracy of 95% (out of 100 diagnoses, 5 are wrong)
  • Algorithm B is a black box (it’s impossible to interpret the reasons behind its diagnoses), and has an accuracy of 98% (out of 100 diagnoses, 2 are wrong)

Which algorithm should you use? Choice A allows doctors and patients to interpret predictions, but misdiagnoses 3 more people out of every 100 than choice B does. Is transparency worth the potential cost in human lives? Answering these questions is not trivial.
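To see how this tradeoff shows up in practice, here is a minimal sketch comparing an interpretable model with a more opaque one, using scikit-learn's built-in breast cancer dataset as a toy stand-in for a diagnostic task. The models and the exact accuracy gap are illustrative assumptions, not the 95%/98% figures above.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy stand-in for a diagnostic task: predict malignant vs. benign tumors.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

# "Algorithm A": a shallow decision tree whose rules a doctor can read.
transparent = DecisionTreeClassifier(max_depth=3, random_state=42)
transparent.fit(X_train, y_train)

# "Algorithm B": an ensemble of 300 trees, far harder to interpret.
black_box = RandomForestClassifier(n_estimators=300, random_state=42)
black_box.fit(X_train, y_train)

print(f"Transparent tree accuracy: {transparent.score(X_test, y_test):.3f}")
print(f"Black-box forest accuracy: {black_box.score(X_test, y_test):.3f}")

# The tree's full decision logic fits on a screen; the forest's does not.
feature_names = load_breast_cancer().feature_names.tolist()
print(export_text(transparent, feature_names=feature_names))
```

On this dataset the forest typically scores a few points higher, while only the tree's decision rules can be printed and audited, which is exactly the dilemma in the A-versus-B choice.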

How can we have ethical AI?

The European Union's response to the unethical use of AI is more law: the EU AI Act. Léonard also mentioned that instead of prescribing specific outcomes, the EU AI Act sets broad requirements, making it a piece of legislation that builds trust and empowers producers. Even if actual enforcement will take at least a couple of years, the EU AI Act is still an indicator of public sentiment: people care about what technology does at scale. It will help address some of the most extreme applications of AI and encourage further legislation.

Most importantly, the Act creates awareness that companies and organizations should handle data and algorithmic decision-making with care. We cannot relinquish control to the few people who understand the technology; instead, we should empower the people in decision-making positions and the users affected by it. The EU AI Act signals a much bigger awakening around the topic, where people demand a say in how technology shapes our lives. It will also steer innovation towards solutions that are more representative of the people affected by them.

The discussion concluded that, while there is no single answer to how we can have ethical AI, the field will keep expanding and we must be more aware of it. We should acknowledge that we will see a multitude of unethical uses. It is increasingly important to take a position on what we believe is acceptable and unacceptable, to stay conscious of the risks, and to run continuous risk assessments while developing or introducing new technologies and data sets. This is, as Giovanni stated, a networked process.

Special thanks to the participants of this session: Giovanni Leoni, Léonard Van Rompaey, Francesco Bellanca, and Helin Yontar.

Want to check out the recording of the session, and get access to deep dives on other tech topics that matter? Join the AI Academy club.
