"2024 will be the last human election" they warned.
At an exclusive event dubbed "The A.I. Dilemma," held nearly a year ago at the National Press Club in Washington, D.C., the air was dense with grave concern and expectant discussion; the world was still in shock from the release of ChatGPT. There, Tristan Harris, a technology ethicist and pseudo-tech-lobbyist, made a poignant statement. Joining him was Aza Raskin, recognized for pioneering one of the most addictive patterns in modern user interfaces, infinite scrolling, a reminder of the pervasive influence design exerts on online behavior. Harris's words, weighty with implication, resonated through the hushed auditorium: "2024 will be the last human election."
The 2024 United States presidential election is scheduled for Tuesday, November 5, 2024, 257 days from today. Our nation will not be the same.
This assertion may initially appear alarmist, or like something out of a sci-fi plot. Still, it is anchored in the emerging realities of artificial intelligence (AI), particularly large language models (LLMs) like OpenAI's GPT-4, and their accelerating integration into societal frameworks. For corporate communicators and organizations on the brink of implementing such generative AI solutions, the statement must serve as more than a cautionary pause; it must be the starting point of a critical discourse on the role such technology will play in our socio-political fabric.
The "last human election" Harris refers to encapsulates the potential that AI, if left unchecked, may influence and shape human decisions and public opinion to an unprecedented extent, possibly affecting democratic processes. It will make Cambridge Analytica look pedestrian. This influence, combined with the manipulative finesse of algorithms like those behind infinite scrolling, underscores the capacity of AI to hold sway over the human psyche.
"They will be able to manipulate people right and these will be very good at convincing people because they'll have learned from all the novels that were ever written, all the books by machiavelli, all the political connives. They'll know all that stuff, they'll know how to do it." - Geoffrey Hinton
As corporate communicators responsible for the narratives that shape public perception, it's crucial to understand the implications of integrating LLMs into communication strategies. The capabilities of LLMs extend beyond simple chatbots and automated customer service agents; they are now at the forefront of content creation, producing articles and reports, and even simulating human conversation with startling coherence and context awareness.
OpenAI and Microsoft have recently taken action against malicious use of AI, identifying and disrupting several state-affiliated actors who were attempting to use AI services for harmful cyber activities: https://openai.com/blog/disrupting-malicious-uses-of-ai-by-state-affiliated-threat-actors
As organizations move to leverage the power of LLMs, they must tread a path lined with ethical considerations and societal responsibilities. Corporate communicators are the vanguard in defining how these models are orchestrated within their strategies while maintaining a commitment to authenticity and truth.
Firstly, adopting these systems stands to redefine the nature of corporate reporting and content creation. With the ability to generate vast amounts of text and simulate dialogue, businesses gain unprecedented efficiency in their communications. However, as we begin to automate the construction of narratives, we must ask: at what cost does this efficiency come? Will the integrity of corporate messaging hold its ground, or will AI-generated content dilute genuineness in pursuit of maximum engagement and impact?
Secondly, there's the challenge of transparency. The veil behind which these algorithms operate must be lifted for accountability. Communicators should disclose the use of LLM technologies and actively engage in conversations about their implications, fostering an environment of trust and open dialogue with their audiences.
With great power comes the potential for misuse. The possibility of spreading misinformation or 'deepfakes' (sophisticated forgeries powered by AI) looms as a substantial threat to organizational integrity and societal trust. For organizations embedding LLMs into their communication processes, robust guardrails must be erected to prevent the spread of falsehoods and to preserve the sanctity of facts.
The implementation of LLMs also poses a question of displacement. As AI becomes capable of fulfilling roles traditionally held by humans, organizations must deliberate on the impact of this shift on employment and public sentiment. Navigating this transition with empathy and foresight will be essential to uphold social responsibility and safeguard organizational reputation.
Practical measures to address these issues may involve establishing ethical guidelines specific to the use of AI in communications. This includes clear policies on content verification, humans in the loop to oversee AI-generated content, and AI literacy programs that educate stakeholders about the strengths and limitations of LLMs. A minimal sketch of such a human-in-the-loop gate follows.
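To make the human-in-the-loop idea concrete, here is a minimal sketch of a review gate in which nothing produced by an LLM can be published without a named human reviewer's sign-off. Everything here (Draft, submit_for_review, publishable) is a hypothetical illustration, not a real library or any vendor's API.

```python
from dataclasses import dataclass, field
from enum import Enum


class ReviewStatus(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"


@dataclass
class Draft:
    """A piece of AI-generated content awaiting human sign-off."""
    text: str
    model: str                          # which LLM produced it
    status: ReviewStatus = ReviewStatus.PENDING
    reviewer: str | None = None
    notes: list[str] = field(default_factory=list)


def submit_for_review(draft: Draft, reviewer: str, approved: bool, note: str = "") -> Draft:
    """Record an explicit human decision on an AI-generated draft."""
    draft.reviewer = reviewer
    draft.status = ReviewStatus.APPROVED if approved else ReviewStatus.REJECTED
    if note:
        draft.notes.append(note)
    return draft


def publishable(draft: Draft) -> bool:
    """The gate: only human-approved drafts may be published."""
    return draft.status is ReviewStatus.APPROVED


if __name__ == "__main__":
    d = Draft(text="Q3 results show steady growth...", model="gpt-4")
    submit_for_review(d, reviewer="j.doe", approved=True,
                      note="Figures verified against the quarterly filing.")
    assert publishable(d)
    print(f"Status: {d.status.value}, reviewed by {d.reviewer}")
```

In practice such a gate would sit behind a CMS workflow; the point of the sketch is that approval is an explicit, auditable human action rather than a default.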
Crucially, attention must be paid to model providers' red-teaming practices and the datasets on which their models train. These shape not only the quality of outputs but also create an opening for ideologies that do not align with the organization's morals and ethics. Training data must be free from bias and representative of diverse viewpoints, lest the models perpetuate stereotypes or inadvertently endorse particular ideologies. In addition, giving users options to question sources and AI contributions enhances transparency and user agency; a sketch of one possible disclosure record follows.
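One way to operationalize that transparency, as an assumption rather than any established standard, is to publish a small provenance record alongside each piece of content so audiences can see whether AI contributed and which sources to interrogate. DisclosureRecord and its fields are illustrative names only.

```python
import json
from dataclasses import dataclass, asdict


@dataclass
class DisclosureRecord:
    """Provenance metadata published alongside a piece of content."""
    content_id: str
    ai_assisted: bool
    model: str | None       # which LLM contributed, if any
    sources: list[str]      # citations a reader can follow and question
    human_reviewer: str     # who signed off on the final text


def render_disclosure(record: DisclosureRecord) -> str:
    """Serialize the record so it can be embedded in or linked from the published piece."""
    return json.dumps(asdict(record), indent=2)


if __name__ == "__main__":
    rec = DisclosureRecord(
        content_id="press-release-2024-017",
        ai_assisted=True,
        model="gpt-4",
        sources=["https://openai.com/blog/disrupting-malicious-uses-of-ai-by-state-affiliated-threat-actors"],
        human_reviewer="j.doe",
    )
    print(render_disclosure(rec))
```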
Corporate communicators' role expands beyond the curation of narratives; they must now evolve into stewards of AI integration, committed to understanding the enormous implications of their tools for democratic discourse and human choice.
As we edge closer to critical elections and pivotal societal events where the likelihood of AI's influence is undeniable, corporate communicators must echo the urgency expressed by tech pundits like Harris and Raskin. The call to prioritize human values within the digital domain has never been more critical. Establishing an AI ethos that aligns with humanity's collective welfare and democratic principles is no longer optional; it is an imperative that will shape the fabric of our future decision-making.
Whether or not it proves prescient, "2024 will be the last human election" serves as a clarion call for corporate communicators. As custodians of the corporate conscience and narrators of business truth, they hold the power to harness AI advancements for good. Through conscientious deployment, stringent ethical oversight, and an unwavering commitment to the human element amid digital transformation, corporate communicators can ensure that AI is an extension of human intellect and not a replacement, keeping the essence of 'human elections' alive and authentic.