To AI or not to AI?
A quick glance at the media shows that artificial intelligence—or AI—is the buzzword du jour. Siri, Alexa and Tesla are household names and examples of AI that is already in your house or garage.
It's no surprise that as AI integrates itself into our lives, some people are raising ethical and philosophical concerns. Tesla's Elon Musk says AI is a fundamental risk to human civilisation. Tim Berners-Lee, the inventor of the World Wide Web, wondered aloud at a recent conference about the "nightmare scenario where AI runs the financial world", questioning whether robots, and the algorithms behind them, can make fair decisions about who should receive a mortgage, for example.
Others say we need to embrace the opportunity that AI presents. Facebook’s Mark Zuckerberg has described himself as “really optimistic” about the technology and branded those “who drum up doomsday scenarios” as “pretty irresponsible”.
As communicators, it is not our role to pick sides in this debate. Instead, whether we seek to educate, advocate or protect a brand, our role is to help shape the debate around AI. There is no right or wrong answer, only a continued discussion about risk and reward. Here are three key questions to consider:
Who regulates the future?
When we think about the ethical risks of AI, one major question is who should decide the limits and applications of it. Leading technology companies rival nations for power and influence; their innovations far surpass regulations. CEOs ask forgiveness, not permission, when testing AI innovations that can affect our moods, decisions and potentially, even our futures. While this approach serves innovation, it doesn’t always serve the people providing the data. Can big companies be trusted to regulate themselves in our best interest?
My Data or Big Data?
Privacy concerns aren't limited to the private sector: one in two American adults is in a law enforcement face recognition network, a system that is broadly unregulated. So, while this application of AI may keep people safe by enabling law enforcement officials to easily identify criminals, it also raises troubling questions about our expectations of privacy, and the extent to which they may be false. At what point should AI serve the needs of the public at the expense of the individual?
Can a robot do wrong?
AI also raises tricky questions about liability. Imagine for a moment that you're checking email in your automated car when it suddenly causes an accident. Are you to blame for owning the car? Or, is the car's programmer to blame for creating a faulty algorithm? Or, is the car itself to blame? Deciding now who accepts responsibility for AI's future mistakes is crucial to its success.
Planning ahead
It is clear that AI is raising new considerations and philosophical dilemmas that go to the heart of our value systems. As communicators we should not wait passively while these difficult dilemmas play out. We are in a prime position to help facilitate discussions in our own organisations and in the wider community by asking the most pressing questions, and the most challenging ones, too.
Here are five considerations facing every communicator in the AI era:
1 The impact on industry
While AI can enhance an organisation's reputation for innovation, it does not come without risk, as a recent crash involving a Ford-backed self-driving car that sent two people to hospital showed. In any crisis there is usually a villain and a victim; AI blurs the line between the two. We need to think now about how our existing crisis frameworks may need to evolve to accommodate new scenarios and ambiguities.
2 AI & business relationships
Given the public's concerns around AI with respect to privacy and accountability, how can companies demonstrate that they are not just listening, but acting? One response has been the formation of the Partnership on AI to benefit people and society.
This multi-stakeholder group has brought together industry giants like Facebook and Google with consultancies and civil liberties groups to investigate the ethical implications of AI. In addition to convening its own inaugural event in Berlin, the group has advocated for AI at major gatherings like the World Economic Forum and the OECD Forum. By convening an honest and inclusive conversation about AI, this industry-led coalition is helping to pave the way for AI's future development. As communicators we should ensure we are part of these discussions, and that we are also helping to facilitate them within our own spheres of influence. The more education there is within the communications industry, the better prepared we are to help navigate the future for the myriad stakeholders who will be affected.
3 Upskill & train
McKinsey data shows that most of us will have some of our work automated in the future. But AI can complement our roles and work, provided we acquire the skills and expertise we need to thrive in the next stages of our careers. As communications professionals, we should examine the impact of AI on the work we do and how we do it. We should also consider the potential impact of AI on the companies and brands we work for, and what communications advice and support they will need to manage these impacts with their stakeholders, be they employees, regulators, customers, suppliers… and new competitors. There is likely to be significant demand for communicators who can help smooth the path to change; for example, by communicating the impact of AI to employees, enabling them to proactively acquire the skills and training they will need to secure the roles of the future.
4 Think critically, practise creativity
Critical and creative thinking skills are already important in our industry. Soon they will be our hallmark. While AI may assume some of our responsibilities, our most valuable abilities as communicators and advisors remain beyond the grasp of technology, at least for now.
5 Accept Change
AI is already here. Accepting the change it brings to our lives and careers will help us to provide the strategic counsel and support that is needed now to ensure the right people are having the right discussions about AI. By acknowledging both its rewards and the risks, we can help lead these discussions on how we want AI to impact and integrate into our lives. While it is not our role to judge AI, it is our responsibility to ensure that those who do have the facts and frameworks to make informed decisions.
This article was first featured in a recent PublicAffairsAsia special feature on AI.
https://publicaffairsasia.com/aicomms/