How does Artificial Intelligence impact Human Rights? 5 questions with Jan Kleijssen at the Council of Europe.
Rhodes Trust
Jan Kleijssen will be speaking at our inaugural Technology & Society Conference on 12 November 2022. He heads the Information Society and Action against Crime Directorate at the Council of Europe in Strasbourg, which is developing the world's first treaty on Artificial Intelligence (AI) and Human Rights.
Machine learning and automated decision-making are being used increasingly by technology companies and governments. Yet scandals have already indicated that this technology lacks governance. We asked Jan five questions to understand how, as a society, we can ensure that AI systems are accountable both to their creators and to the public at large.
Should harmful content be allowed to remain online to protect free speech and expression?
What applies offline also applies online. There is an article in the European Convention on Human Rights (Article 10) that guarantees freedom of expression. But it can be restricted in particular cases. These restrictions have to be very well argued, they have to be necessary, and they have to be proportionate. Speech that would create harm can never be protected by freedom of expression.
Who is ultimately responsible for keeping us safe online?
People, not machines. It is clear that the responsibility lies with the people who run the systems. It does become complicated when harmful content is spread by servers and promoted by algorithms, and this varies from case to case. But ultimately it is the responsibility of the people who spread it online, put it online, or allow it to be online in the first place.
Are algorithms undermining democracy?
Algorithms are like any other technology. They are like engines, or fans, or chairs: in themselves, they do what they are set up to do. But if algorithms are used inappropriately, left unsupervised, and run on biased data, they can be extremely harmful. In the Netherlands, 26,000 people were wrongly identified as having committed child benefit fraud. The human impact was an absolute tragedy that cost lives and livelihoods. It is a real example of what can go wrong when you misuse a powerful technology.
How do we prevent discrimination or bias in the construction of AI systems?
Human rights must start in the design phase; they should never be an afterthought. The data that trains the algorithm should be of the highest possible quality, and it should be transparent and regularly assessed. Even with precautions, things go wrong, so there must be human oversight. Ideally, the algorithm should be tested in a sandbox environment before it is rolled out to the wider world. And those affected by the outcome should have the chance to appeal. All of this would have avoided the terrible situation that happened in the Netherlands.
Which areas of people’s lives do you think will be most impacted by AI in the next decade?
Absolutely every aspect. AI is already used in so many ways. Your phone records what you do, who you are in contact with, when you sleep. It knows everything about you. Collecting, combining and processing data through AI can and will have great benefits. But there might be serious consequences if this is abused. It can be extremely dangerous if the right safeguards are not in place.
How do we ensure the safe application of AI? Join us to discuss this question and more at the Rhodes Technology and Society Forum. We’ll explore the values represented in our technology today, and how to shape future technology to support dignity, equity, and governance accountable to all.