The Moral Compass of AI: Gary Marcus's Guide to Steering Technology Towards Human Values
AI for Good
The Leading UN Platform on AI – Global Summit 2025: July 8-11 | Geneva, Switzerland
In a compelling address at AI for Good Innovate for Impact in Shanghai, Gary Marcus, a renowned cognitive scientist, best-selling author, and serial entrepreneur, laid out a comprehensive vision for ensuring that artificial intelligence (AI) serves humanity's best interests. His insights are not just timely but crucial as AI continues to integrate into ever more aspects of society.
Marcus opened his talk by underscoring the critical need for ethical considerations in AI development.
“I think we should start with an AI that's consistent with human rights and human dignity,” he said.
He pointed to important guidelines such as UNESCO’s global standard on the ethics of AI and the U.S. White House’s Blueprint for an AI Bill of Rights as benchmarks for development. According to Marcus, the current state of AI, particularly generative AI, is “technically and morally inadequate,” highlighting the urgent need for systems that truly align with ethical standards.
The Flaws of Generative AI
Marcus provided a candid critique of generative AI, describing it as “rough draft AI” due to its tendency to produce plausible yet incorrect information. He used striking examples to underline his point, noting absurdities such as an AI's claim that one kilogram of bricks weighs the same as two kilograms of feathers, followed by fluent but faulty reasoning. This tendency to generate fabrications, he argued, vividly illustrates the current limitations and inaccuracies inherent in generative AI models.
Further delving into the ethical landscape, Marcus stressed the significance of addressing bias and plagiarism, problems that AI can inadvertently exacerbate. Despite increased awareness and attempts to mitigate these issues, they remain pressing concerns as AI systems often reconstruct language based on their training data, which may include copyrighted material. This practice raises substantial legal and ethical questions about the originality and legitimacy of AI-generated content.
In his discourse, Marcus labeled the current state of AI as "unstable AI," a term that captures its detachment from reality and the multitude of risks it poses. His comments underscored the urgent need for more stringent oversight and the establishment of robust ethical standards in the development of AI technologies, advocating for a proactive approach to governance that keeps pace with the rapid advancements in the field.
The Need for Robust Regulation
Marcus discussed the challenges of managing risks associated with artificial intelligence, emphasizing the need for realistic expectations and comprehensive legal frameworks.
“It's not realistic to expect a silver bullet, a single solution to all of the risks from AI or to expect that existing laws will cover everything that needs to be covered,” Marcus explained.
Advocating for stronger regulatory frameworks, Marcus emphasized the necessity of agility, transparency and accountability in AI development.
“We need full accounting of what data is used to train models, full accounting of all AI-related incidents as they affect bias, cybercrime, election interference, market manipulation, and so forth,” he stated.
He proposed a model similar to the U.S. FDA’s approach to drug approvals, suggesting the establishment of an agency to evaluate large-scale AI deployments and determine whether their benefits outweigh the risks.
Furthermore, Marcus stressed the importance of post-release auditing by independent third parties to ensure that AI systems adhere to ethical standards and are not used for harmful purposes. “We need liability laws, we need layered oversight,” Marcus added, drawing parallels to the aviation industry, where multiple layers of regulation contribute to safety.
Long-term Risks and Immediate Concerns
Recalling his testimony before the U.S. Senate, Marcus noted that many senators from both parties supported his ideas. However, he raised concerns about potential obstacles to implementing these regulations, particularly financial interests and the influence of large tech companies. He emphasized that the priorities of technologists might not always align with the broader interests of humanity.
"We shouldn't be letting the big tech companies decide everything for humanity," he warned.
Marcus also elaborated on the dangers of regulatory capture and the misleading nature of industry hype, cautioning against the overpromising claims of tech leaders.
“Artificial general intelligence is not imminent; don’t let them fool you. We need to address the real-world problems of AI,” he cautioned.
This kind of hype risks misleading both the public and policymakers, potentially resulting in ill-informed decisions and the misallocation of resources, which underscores the need for a clear and realistic understanding of AI’s current capabilities.
A Balanced Approach to AI Development
Marcus expressed his conviction that a more refined version of AI is achievable, an AI that is not only technically proficient but also morally sound.
“I do think a better AI is possible, one that is technically and morally adequate, one that's consistent with human rights and human dignity,” he stated.
He emphasized the need to take cues from the human mind, which combines multiple cognitive systems to function effectively.
Ultimately, Marcus called for a balanced approach that combines the strengths of different AI paradigms. He criticized the intellectual animosity between neural network proponents and symbolic AI advocates, suggesting that a hybrid approach could yield the best results.
“If we can figure out how to build an AI that combines the best of both worlds, we can create an AI that is learnable, data-efficient, interpretable, reliable, verifiable, and grounded in facts,” he concluded.
Drawing a parallel with the global urgency to tackle climate change, Marcus highlighted the narrow window for effective action in AI governance. “There’s a limited time window for action. I don’t think that governments will do this unless people tell them how important it is to them,” he emphasized. Marcus urged the public to recognize the critical importance of getting AI development right. The decisions made today, he warned, will have repercussions for decades, even centuries.
Through his insights, Marcus provided a sober yet hopeful roadmap for the future of AI. By advocating for ethical standards, transparency, and robust regulatory frameworks, he underscored the importance of steering AI development in a direction that truly benefits humanity. As the AI landscape continues to evolve, his call to action is a crucial reminder of the responsibilities that come with technological advancement.
Watch the full speech here:
Enjoyed this newsletter?
Subscribe here to our newsletter AI for Good Insider to receive the latest AI insights.
This edition of AI for Good Insider was written and curated by AI for Good Junior Communications and Social Media Officer Celia Pizzuto.