AI: The Only Way Is Ethics
Image generated using DALL-E


I’m standing for the board of CW, in part because I want to help the organisation and in part because I’ve learnt so much from it. One of the programmes that CW ran, way back in 2018, shows just how far ahead of the curve it can be.

One of the best events I’ve ever been to was held at the Bradfield Centre and was a collaboration between the Cambridge chapter of the Institute of Directors and CW. I’ve always enjoyed publicising CW, and so I wrote the event up for The Church Of England Newspaper. Sadly it’s behind a paywall.

What was a special CW debate back in November 2018 is now the kind of thing you hear on The Today Programme. CW has always been good at leading the conversation.

You don’t expect a debate on artificial intelligence to turn into a scrap between ethicists, but such was the way at the Bradfield Centre debate. With history (and God) on his side was Revd Dr Malcolm Brown, Director of Mission and Public Affairs at The Archbishops’ Council of the Church of England, and with academic rigour on her side was Dr Kathleen Richardson, Professor of Ethics and Culture of Robots and AI at De Montfort University.

The debate was entitled “AI: Threat or Opportunity”. There is no need for a spoiler alert: with a title like that, the answer is always “both”. Ethics was rapidly identified as the moderating factor which needs to govern AI. But whose ethics? Dr Brown’s role for the Church was to apply scrutiny to secular areas of life and produce policy before clergymen pontificate about technologies they do not understand. Dr Richardson’s research is based on humanistic anthropology, grounded in an ontological understanding of what it means to be human and the primary importance of human relationships in our lives.

Dr Richardson argued that the ethics upon which we set the rules for artificial intelligence must be grounded in modern values, in an understanding of the value of the self, and in repairing the damage done by a society with a hierarchy based on property. Aristotle, she said, was a poor guide to ethics, given his outdated views of women and power. Dr Brown contended that Aristotle was only a starting point: there had been millennia of refinement to his thought, and it was upon that refined tradition that the moral model for software should be built. Dr Richardson did not agree, arguing that those millennia of thought were dominated by a narrow, elitist, male society, and only a modern perspective was suitable for a modern problem like thinking machines. She explained that ethics is often seen as a matter of right or wrong, but it’s more complicated than that. You can build ethical arguments for pretty much anything, be it nuclear weapons or prostitution; what matters is the ethics of the future, not the ethics of the past.

Not only did Dr Brown disagree, he was heartened by the reaction he has had from data scientists to the involvement of the church in the debate on AI. In other areas of science he has been shut out; he said he’d been told, “If you believe in the tooth fairy, we won’t talk to you.” But those researching AI were keen to learn from him, and the church, about the understanding of philosophers of the past. He argued that it is imperative we face up to the problems of AI: it has been billed as another industrial revolution, and the agricultural and industrial revolutions of the past were painful experiences, particularly for the less well off in society. He doesn’t believe that Government is prepared for the change ahead; while there was an emphasis on building skills, there was no idea of what skills needed to be built.

Two ethicists taking radically different views made for a stimulating evening. You can ignore the irony of the clergyman favouring an evolutionary approach.

Fortunately, there was a highly distinguished panel to moderate the discussion and explore other aspects of the issues surrounding AI. Lord Clement-Jones, Lib Dem peer and chair of the House of Lords AI committee, has spent a huge amount of time looking at the impact of AI, but as a lawyer one of the areas which undoubtedly interests him is the research being conducted at the University of Cambridge by fellow panellist Dr Christopher Markou, who is looking at “Artificial Intelligence and Legal Evolution”, a project to see how much of the work done by lawyers can be replaced by AI. They were backed up by Dr Karina Vold of the Leverhulme Centre for the Future of Intelligence; Fernando Lucini, Managing Director, Artificial Intelligence at Accenture Digital; and Tim Ensor, AI commercial director at Cambridge Consultants. An embarrassment of experts.

Neither the threat nor the opportunity the CW event discussed nearly half a decade ago has been adequately explored.

Members of Cambridge Wireless often live in the future, and the ethics around AI mean they have to embrace that responsibility.

My background is in telecoms, but if I’m elected to a seat on the CW board I’d be keen for the organisation to develop the work it’s been doing on AI and ethics in science in general. If you are a voting member of CW I would be very grateful if you would support my candidature. You’ll have details of how to do so in your email.
