Big tech: Trust us
Ross Monaghan
Former CEO now sharing his skills and knowledge as an educator, strategist, and trainer.
Microsoft has admitted in a blog post that “there are legitimate concerns about the power of (AI) technology to cause harm rather than benefits”, and tried to reassure readers that “governments around the world are looking at how existing laws and regulations can be applied to AI”.
Given that OpenAI’s Sam Altman has said “we cannot predict exactly what will happen”, and given the glacial pace of regulatory change around technologies like social media over the past 15 years, Microsoft’s reassurances aren’t convincing.
Trusting your reputation to big tech might not be a good strategy.
The failures of big tech, including Microsoft, have been well documented. “Since its launch in 2004, Facebook has been continuously embroiled in data privacy issues,” according to the World Economic Forum.
New York University’s Meredith Whittaker points out that our increasing reliance on AI “cedes inordinate power over our lives and institutions to a handful of tech firms”.
As corporate affairs managers we need to ask ourselves if we should place our corporate reputations in the hands of a few organisations with patchy track records. Will we trust the reassurances of the AI sector’s lawyers, publicists, lobbyists and CEOs, or will we manage the risks and challenges in a systematic way to protect our organisation’s reputation?
As professional communicators we’re aware of the power of publicity and lobbying campaigns backed by huge caches of cash and aimed at securing more business. The sums involved are staggering: Microsoft, for example, has invested $US10 billion in OpenAI.
Ask yourself, who is best placed to manage your corporate reputation, you and your organisation’s management, or someone else?
Or maybe you should ask ChatGPT?
Tech: Microsoft announces AI customer commitments
Microsoft has announced three AI Customer Commitments in an effort to help its customers on their “responsible AI journey”.
The commitments are:
As part of its commitment to sharing, Microsoft has released a range of documents to help organisations adopt appropriate AI practices, including an AI Impact Assessment Template and an AI Impact Assessment Guide. A white paper on Governing AI is also available.
Politics: Clogger, a political campaign in a black box
Have you ever found yourself endlessly scrolling through Instagram, Facebook or TikTok videos?
The algorithms used by these organisations to keep us scrolling use reinforcement learning. This technique could be used by political campaigners to “induce voters to behave in specific ways”, according to two Harvard professors.
Writing in The Conversation, Professors Archon Fung and Lawrence Lessig say that AI using reinforcement learning could “dramatically increase the scale and potentially the effectiveness of behaviour manipulation and microtargeting techniques that political campaigns have used since the early 2000s”.
The pair imagine an AI service “in a box” called Clogger and suggest that it could generate personalised messages via text, email and social media. It could also use reinforcement learning “to generate a succession of messages that become increasingly more likely to change your vote”.
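The reinforcement-learning loop the professors describe is, at its simplest, a trial-and-feedback cycle: send a message, observe whether the recipient responds, and shift future choices toward whatever works. The sketch below is purely illustrative (it is not code from the article or from any real campaign tool): a minimal epsilon-greedy bandit that “learns” which of three hypothetical messages draws the most engagement.

```python
import random

def pick_message(counts, rewards, epsilon=0.1):
    """Choose a message index: explore randomly with probability
    epsilon, otherwise exploit the best-performing message so far."""
    if random.random() < epsilon or not any(counts):
        return random.randrange(len(counts))
    averages = [r / c if c else 0.0 for r, c in zip(rewards, counts)]
    return max(range(len(averages)), key=averages.__getitem__)

def update(counts, rewards, choice, engaged):
    """Record whether the recipient engaged (1) or not (0)."""
    counts[choice] += 1
    rewards[choice] += engaged

# Simulated audience: message 2 secretly engages 60% of recipients,
# the other two only 20%. The algorithm is never told this; it learns
# it from feedback alone.
random.seed(0)
true_rates = [0.2, 0.2, 0.6]
counts, rewards = [0, 0, 0], [0, 0, 0]
for _ in range(2000):
    i = pick_message(counts, rewards)
    update(counts, rewards, i, 1 if random.random() < true_rates[i] else 0)

print(f"Send counts per message: {counts}")
```

After a few hundred interactions the loop concentrates its sends on the most persuasive message. Scaled to millions of voters, with messages generated per person, this is the dynamic Fung and Lessig warn about.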
Fung and Lessig conclude by suggesting “that the path toward human collective disempowerment may not require some superhuman artificial general intelligence. It might just require over eager campaigners and consultants who have powerful new tools that can effectively push millions of people’s many buttons.”
Editorial: Big Tech’s AI future is not good news
The New York Times published an editorial on June 9 claiming that “Big tech is bad” and that “Big AI will be worse”.
“History has shown us that when the distribution of information is left in the hands of a few, the result is political and economic oppression. Without intervention, this history will repeat itself”, according to Daron Acemoglu and Simon Johnson, co-authors of the editorial and the book “Power and Progress: Our 1,000 Year Struggle Over Technology and Prosperity.”
Framing: Uncovering AI’s serious novelty
Open-minded experimentation with AI, rather than rushing to conclusions, is a useful technique for preparing for the risks and opportunities of AI, according to a University of Cambridge research fellow.
Writing in TechPolicy, Sylvan Rackam gives three suggestions about how to make AI easier to understand and discuss. The three approaches include:
Rackam concludes by suggesting personal AI research should “take a slightly more curious and playful approach to developing what may be one of the most serious and impactful technological developments in human history”.
In brief
Reputation Week provides general advice only and should not be used as a basis for making decisions about your particular circumstances.