How to Build Trust, and Limit the Spread of Misinformation by LLMs
Alp Arhan U.
Product and Solutions Engineering | Future of Work, Artificial Intelligence (AI), Intelligent Process Automation
Grow Your Perspective Weekly: Misinformation and Information Synthesis with Generative AI
Reading Time: 5 min 24 sec
Trustworthiness in AI systems
Why is this important?
Misinformation has, historically, spread through word-of-mouth.
However, with the rise of Large Language Models (LLMs), the potential scale and speed of its dissemination have reached unprecedented levels. As these models become woven into our daily lives – offering suggestions, automating tasks, or even influencing decisions – it's crucial to address their inadvertent role in spreading false information. This newsletter delves into the technical, business, and societal facets of this complex issue:
Questioning the Fabric of Reality
AI-generated content can spread virally on platforms like TikTok, Instagram, or broader news channels, making it increasingly difficult to discern the integrity of information. Generative AI’s capacity to mimic and create content with alarming proficiency blurs the line between fact and fabrication, challenging the fabric of societal consensus on truth, especially on contentious or subjective topics. While the giant tech companies can devote resources to tracking content generated on their platforms, open-source platforms lack that capability. Much as users once downloaded music from LimeWire to avoid paying iTunes, many users can sidestep those safeguards by simply integrating open-source tools.
Hear this out: while enterprise use cases such as automating workflows with LLMs are very powerful, if everyone adopts similar capabilities at scale, the defensibility of most enterprises decreases. As AI democratizes automation globally, competition will be fierce, and therefore, as always, the industry will consolidate at the expense of startups.
Who controls that reality?
Last week, two giants of AI, Yann LeCun and Andrew Ng, stated that major tech companies are spreading fear of AI to gain control over the industry.

Andrew: "There are large tech companies that would rather not have to try to compete with open source, so they're creating fear of AI leading to human extinction. It's been a weapon for lobbyists to argue for legislation that would be very damaging to the open-source community."

Yann: "Altman, Hassabis, and Amodei are the ones doing massive corporate lobbying at the moment. They are the ones who are attempting to perform a regulatory capture of the AI industry. You, Geoff, and Yoshua are giving ammunition to those lobbying for a ban on open AI R&D. If your fear-mongering campaigns succeed, they will inevitably result in what you and I would identify as a catastrophe: a small number of companies will control AI. Most of our academic colleagues are massively in favor of open AI R&D. Very few believe in the doomsday scenarios you have promoted. You, Yoshua, Geoff, and Stuart are the singular-but-vocal exceptions."
“These companies already control the largest cluster of AI processors, the best models, the most advanced quantum computing and the overwhelming majority of robotics capacity and IP.”
Mustafa Suleyman, The Coming Wave: Technology, Power, and the Twenty-first Century's Greatest Dilemma
A recently published paper argues that a trustworthy general AI should possess various capabilities to ensure its reliability and effectiveness, specifically "trustworthiness." These include:
Societal Impacts:
With that said, here is what is new in the world of AI and automation:
Key Insights from "Biden lays down the law on AI"
Comprehensive AI Governance: President Biden’s executive order introduces a robust framework for AI development and usage, emphasizing safety, privacy, and ethical considerations. This much-anticipated move comes as a response to the rapid proliferation of generative AI technologies and their associated risks.
Addressing Bias and Privacy: The order notably aims to tackle issues of bias in AI systems, such as those seen in automated hiring tools, and puts forth measures to protect Americans’ privacy rights. It also requires leading genAI developers to be transparent with the government regarding safety test results.
National and Global AI Standards: NIST is tasked with establishing standards for safe AI, reinforcing the government's commitment to mitigating risks. Furthermore, this initiative aligns with global efforts, as G7 nations adopt a set of AI safety principles, signaling an international movement towards standardized AI governance.
AI in National Security and Commerce: The order outlines roles for the National Security Council and the Department of Commerce, including the development of AI for cybersecurity and content authentication. It sets a precedent for authenticating government communications and influences the private sector's approach to AI transparency.
Concerns and Critiques: While the executive order is a significant step forward, experts like Avivah Litan of Gartner Research and Adnan Masood of UST point out its limitations, particularly around definitions, scope, and enforcement. They highlight the need for detailed implementation and compliance mechanisms.
Bioengineering and AI: A critical aspect of the order is the establishment of standards to prevent AI from being used to create harmful biological agents. This measure aims to safeguard against potential biotechnological threats to public health.
Market Expectations and Industry Impact: The order is set to shape market expectations for AI, with a focus on responsible development through testing and transparency. It could influence small businesses and entrepreneurs in the AI space by providing technical assistance and resources.
Immigration Policies for AI Talent: In a move to strengthen the AI workforce, the order includes provisions to streamline immigration for highly skilled individuals with expertise in critical areas, facilitating their ability to contribute to US innovation in AI.
Government Leadership in AI Training: The US government aims to set an example by hiring AI governance professionals and providing AI training across agencies, ensuring that AI safety measures are developed with a deep understanding of the technology.
Bipartisan Potential: The executive order’s focus on AI regulation may provide a rare opportunity for bipartisan cooperation, positioning the US for leadership on a critical topic for the current century.
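The content-authentication measures mentioned above (verifying that a piece of government communication really came from its claimed source and was not altered) can be illustrated with a minimal sketch. This is not the government's actual scheme; it is a simple keyed-hash (HMAC) example, with a hypothetical key name, to show the core idea of binding content to an issuer and detecting tampering. Real provenance systems (e.g. C2PA-style signing) use asymmetric certificates so anyone can verify without holding the signing key.

```python
import hashlib
import hmac

# Hypothetical secret held by the publishing agency (illustrative only).
SECRET_KEY = b"agency-signing-key"

def sign_content(content: bytes) -> str:
    """Produce an HMAC-SHA256 tag binding the content to the issuer's key."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time to detect tampering."""
    return hmac.compare_digest(sign_content(content), tag)

message = b"Official statement: the report is released on 2023-11-01."
tag = sign_content(message)

assert verify_content(message, tag)                    # authentic content passes
assert not verify_content(b"Altered statement.", tag)  # tampered content fails
```

The design point is the asymmetry between creating and checking: anyone who alters even one byte of the message produces a mismatched tag, which is exactly the property regulators want for authenticating official communications.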
Up next in our series: The AI-powered revolution in Synthetic Biology. Explore its transformative impact, from food production to advancing longevity.
Stay Curious. Stay Informed. Join us every week as we delve deeper into the challenges and triumphs of automation in the modern age.