Artificial Intelligence – where politics, principles and policies collide

Co-authored by: Elizabeth Crossick, Head of Government Affairs, EU & Global Policy Lead, AI at RELX, and Jeremy Lilley, UK Government Affairs Manager at RELX

Technology fads come and go, often over-promising and under-delivering, but with Artificial Intelligence (AI) the hype is justified. AI represents one of the biggest transformations in human history. While various technological developments have not stood the test of time – think of the MiniDisc or the digital camera – AI shows no sign of becoming obsolete. Future technology trends are either based on AI or are being developed to enable its greater adoption. AI is already changing how we live our lives and will continue to do so.

This is because AI is not simply the latest version of a pre-existing technology. It is a whole new approach: not a single technology but a group of technologies. It will lead to a fundamental shift in the principles of markets and societies. It is changing the means of production, marking a gear change in how we approach life, and driving a whole new industrial revolution.

We, as citizens or societies, have of course tackled similar disruption in the past. What is different this time is the scale and pace of technological advancement with AI. As a result, people are asking fundamental questions about AI’s role in the world. Are we comfortable taking humans out of certain decision-making processes? Do we want a machine to determine how our cars drive or whether we are eligible for credit? Do we have trust in technology which is still relatively new? Questions about fairness, transparency, accountability, security, and accuracy crop up in every discussion on AI.

Jurisdictions around the globe are considering how to approach these questions from a regulatory or legislative perspective. Different approaches are beginning to emerge depending upon the way specific societies look at new technologies. In the European Union, the approach is a dedicated new rulebook aimed at the specifics of AI. The United Kingdom is looking at using existing regulatory structures to address risks associated with AI. The United States has been taking an agency-by-agency approach, with a focus on anti-trust, whilst Brazil is in the process of developing a specific piece of AI legislation.

While governments grapple with their plans to regulate or oversee AI, companies are already having to make decisions on AI governance. Global technology companies such as RELX which are developing and implementing AI at scale cannot wait until regulations or global standards emerge. As we consider how AI can best serve the public interest, we must be on the front foot when it comes to the risks involved and how we can responsibly use AI.

That is why RELX has recently published its Responsible AI Principles. Our principles set out how we approach AI governance at RELX, aiming for the highest ethical standards. AI has huge transformational potential, but we recognise that it comes with risks. Our principles, which we are now embedding across our businesses, seek to address and mitigate those risks. The process is iterative: we will adapt and refine our Responsible AI Principles based on the feedback we receive both internally and externally.

At the core of our Responsible AI Principles are people. As a largely business-to-business organisation it would be easy for us to shy away from how our products impact end users. But that is not how RELX operates. Our principles directly address the real-world impact of our solutions and put people at the centre. Privacy. Human oversight. Accountability. Bias avoidance. All of these are in place to ensure that people and the public interest are properly served by our solutions.

Alongside our principles we have also published a new policy paper on AI, which offers governments and regulators our thinking on possible ways to approach AI regulation. We recognise there will be no one-size-fits-all approach to regulation, but many of the themes being addressed by governments are remarkably similar. As a company that has been thinking carefully about how to approach AI responsibly, we wanted to contribute our perspective to the ongoing debate around the future of AI governance and regulation.

AI governance will continue to be a journey, for governments as well as for companies such as RELX. Openness in processes and debate along that journey will be key to reaching a successful destination.

Prof. Dr. Rene Schmidpeter

Professor / Programme Director at Berner Fachhochschule BFH | Sustainable Business Transformation

2y

Thanks and congrats Elizabeth & Jeremy for this crystal clear summary on AI!

Geoff Gibas

Professor of Practice, UEFA B Basic Licence

2y

Mega interesting!

Mr. Ashley Moore

Certified IEEE AI Ethics Lead Assessor/AI Architect and Hard Law Influencer, "Working to Protect Humanity from the potential harm A/IS may cause". LinkedIn AI Governance, Risk and Conformity Group

2y

The transformative impact of artificial intelligence on our society will have far-reaching economic, legal, political and regulatory implications that we need to be discussing and preparing for. Determining who is at fault if an autonomous vehicle hurts a pedestrian, or how to manage a global autonomous arms race, are just a couple of examples of the challenges to be faced. Will machines become super-intelligent, and will humans eventually lose control? While there is debate around how likely this scenario is, we do know that there are always unforeseen consequences when new technology is introduced, and those unintended outcomes of artificial intelligence will likely challenge us all. Another issue is ensuring that AI doesn't become so proficient at the job it was designed to do that it crosses ethical or legal boundaries. Even if the original intent and goal of the AI is to benefit humanity, if it chooses to pursue that goal in a destructive (yet efficient) way, it would negatively impact society. AI algorithms must be built to align with the overarching goals of humans. IEEE CertifAIEd https://standards.ieee.org/industry-connections/ecpais/
