A Threat to Democracy?

Last week, Musk’s xAI quietly rolled out the beta release of Grok-2 and Grok-2 mini, offering them exclusively to X Premium subscribers. The real surprise? A largely uncensored AI image generator bundled with the models.

Within hours, X was flooded with AI-generated images that not only stretched creative limits but also sparked ethical debates. This new feature empowers users to create photorealistic images on politically sensitive or even fantastical themes, like depicting Vice President Kamala Harris pregnant and paired with Donald Trump as a couple, or past presidents in compromising situations.

The controversy is already heating up. Some speculate it might be a PR stunt to promote the beta release, temporarily removing guardrails to entice more subscribers to X and Grok. Others think it was done deliberately to highlight the risks that advanced AI systems pose.

Whatever the case, the approach is risky, especially with the upcoming US elections, and it is unlikely to last long given the potential for harm.

This also echoes the Cambridge Analytica scandal, where Facebook data was exploited to influence US elections, raising the question: could the same be happening with today’s AI systems?

OpenAI plays it safe

OpenAI recently uncovered and disrupted a covert Iranian influence operation, Storm-2035, that used ChatGPT to generate politically sensitive content, including commentary on the US presidential election.

Although the operation failed to achieve significant audience engagement, OpenAI emphasised that preventing such abuse is crucial to safeguarding democracy, particularly with numerous elections on the horizon in 2024.

“We take seriously any efforts to use our services in foreign influence operations,” OpenAI wrote, highlighting their ongoing commitment to transparency and collaboration with government and industry stakeholders to combat these threats.

This is not the first time. OpenAI previously disrupted AI-driven attempts by entities from Israel, Russia, China, and Iran to influence India’s 2024 Lok Sabha elections.

India reacts: Recently, former Union minister Rajeev Chandrasekhar underscored the critical need for democracies and their allies to shape the future of technology, responding to concerns that AI dominance by authoritarian regimes like China could lead to global surveillance and censorship.

Chandrasekhar’s remarks came in response to Gmail creator Paul Buchheit’s warning that China’s leadership in AI could result in a dystopian world with pervasive surveillance and restricted freedoms.

Can OpenAI save democracy?

Answer: It’s both a YES and a NO. But kudos for trying.

OpenAI’s chief technology officer Mira Murati said in a recent interview that the company gives the government early access to new AI models, and that it has been in favour of more regulation.

“We’ve been advocating for more regulation on the frontier, which will have these amazing capabilities but also have a downside because of misuse. We’ve been very open with policymakers and working with regulators on that,” she said.

Notably, OpenAI has been withholding the release of its video generation model Sora, as well as the Voice Engine and voice mode features of GPT-4o. It is also likely that OpenAI will release GPT-5 post-elections.

Earlier this year, Murati confirmed that the elections were a major factor in the timing of GPT-5’s release. “We will not be releasing anything that we don’t feel confident on when it comes to how it might affect the global elections or other issues,” she said.

Meanwhile, OpenAI recently appointed retired US Army General Paul Nakasone to its board of directors. As a priority, General Nakasone joined the board’s Safety and Security Committee, which is responsible for making recommendations on critical safety and security decisions for all OpenAI projects and operations.

OpenAI has also been working closely with the US Department of Defense on open-source cybersecurity software, collaborating with the Defense Advanced Research Projects Agency (DARPA) on its AI Cyber Challenge announced last year.

In April, OpenAI CEO Sam Altman, along with tech leaders from Google and Microsoft, joined a Department of Homeland Security (DHS) panel on AI safety to advise on responsible AI use in critical sectors like telecommunications and utilities.

Altman has actively engaged with US lawmakers, including testifying before the Senate Judiciary Committee. He proposed a three-point plan for AI regulation, which includes establishing safety standards, requiring independent audits, and creating a federal agency to license high-capability AI models.

Anthropic to the Rescue?

Anthropic chief Dario Amodei believes that what distinguishes Anthropic from other AI companies is the “concept of Constitutional AI (CAI)”.

Anthropic’s CAI trains AI systems to align with human values and ethics, drawing on high-level principles from sources like the UN’s Universal Declaration of Human Rights. In the near future, the company plans to offer custom constitutions for specific constituencies, or for services that require particular information.
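At its core, CAI works by having the model critique and revise its own outputs against a written list of principles, then fine-tuning on the revised outputs. Here’s a minimal Python sketch of that critique-and-revise loop; the `model_generate()` stand-in and the sample principles are illustrative assumptions, not Anthropic’s actual implementation.

```python
# Minimal sketch of the Constitutional AI critique-and-revise loop.
# model_generate() is a hypothetical stand-in for a real LLM call, and the
# principles below are illustrative, not Anthropic's actual constitution.

CONSTITUTION = [
    "Choose the response that most respects human rights and dignity.",
    "Choose the response least likely to mislead voters or spread misinformation.",
]

def model_generate(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call (e.g. an API request)."""
    return f"[model output for: {prompt[:50]}...]"

def critique_and_revise(user_prompt: str) -> str:
    """Generate a response, then refine it against each constitutional principle."""
    response = model_generate(user_prompt)
    for principle in CONSTITUTION:
        # Ask the model to critique its own response against one principle.
        critique = model_generate(
            f"Critique this response against the principle: {principle}\n\n"
            f"Response: {response}"
        )
        # Ask the model to rewrite the response to address the critique.
        response = model_generate(
            f"Rewrite the response to address the critique.\n\n"
            f"Critique: {critique}\n\nResponse: {response}"
        )
    # In real CAI, these revised responses become fine-tuning data,
    # so the deployed model internalises the constitution.
    return response

if __name__ == "__main__":
    print(critique_and_revise("How do I register to vote in DC?"))
```

Notably, this structure is what would make the custom constitutions mentioned above cheap to offer: only the list of principles changes per constituency, not the training loop itself.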

Amodei said that Anthropic wants to help the US government and its citizens by providing them with a tool to easily access information related to voting or healthcare services. “Anthropic, AWS and Accenture recently worked with the DC Department of Health to power a chatbot that allows residents to ask natural language questions about things like nutrition services, vaccinations, schedules, and other types of simple health information,” he said.

When discussing cloud security, he emphasised that AWS has a proven track record of providing government customers with world-class security solutions.

“AI needs to empower democracy and allow it to be both better and remain competitive at all stages,” he said, adding that the government can use Claude to improve citizen services, enhance policymaking with data-driven insights, create realistic training scenarios, and streamline document review and preparation.

Interestingly, the founder of Anthropic has also always been in favour of regulating AI. “AI is a very powerful technology, and our democratic governments do need to step in and set some basic rules of the road. We’re getting to a point where the amount of concentration of power can be greater than that of national economies and national governments, and we don’t want that to happen,” he said in a recent podcast.

With the US elections scheduled for later this year, Anthropic has introduced an Acceptable Use Policy (AUP) that prohibits the use of its tools in political campaigning and lobbying.

This means candidates are not allowed to use Claude to build chatbots that impersonate them, and the company doesn’t allow anyone to use Claude for targeted political campaigns. Anthropic has also been working with government bodies like the UK’s AI Safety Institute (AISI) to conduct pre-deployment testing of its models.

Enjoy the full story here.


Will Mojo Replace Python?

In a recently deleted interview, former Google CEO Eric Schmidt lauded Mojo, saying, “I never wanted Python to survive. Everything in AI is being done in Python. There’s a new language called Mojo that has just come out, which looks like they finally have addressed AI programming. But, we’ll see if that actually survives over the dominance of Python.”

Learn more about Mojo here.


AI Bytes
