Scale AI announces multimillion-dollar defense deal

Scale AI on Wednesday announced a landmark deal with the Department of Defense that could be a turning point in the controversial use of artificial intelligence tools in the military.

The AI giant, which provides training data to key artificial intelligence players including OpenAI, Google, Microsoft and Meta, has been awarded a prototype contract from the Defense Department for “Thunderforge,” the DOD’s “flagship program” to use AI agents for U.S. military planning and operations, according to releases.

It’s a multimillion-dollar deal, according to a source familiar with the situation.

Spearheaded by the Defense Innovation Unit, the program will incorporate a team of “global technology partners,” including Anduril and Microsoft, to develop and deploy AI agents. Uses will include modeling and simulation, decision-making support, proposed courses of action and even automated workflows. The program’s rollout will begin with U.S. Indo-Pacific Command and U.S. European Command and will then be scaled to other areas.

“Thunderforge marks a decisive shift toward AI-powered, data-driven warfare, ensuring that U.S. forces can anticipate and respond to threats with speed and precision,” according to a release from the DIU, which also said that the program will “accelerate decision making” and spearhead “AI-powered wargaming.”

“Our AI solutions will transform today’s military operating process and modernize American defense. ... DIU’s enhanced speed will provide our nation’s military leaders with the greatest technological advantage,” Scale CEO Alexandr Wang said in a statement.

Both Scale and the DIU emphasized speed and how AI will help military units make much faster decisions. The DIU mentioned the need for speed (or synonyms) eight times in its release.

Doug Beck, DIU director, emphasized “machine speed” in a statement, while Bryce Goodman, DIU Thunderforge Program Lead and contractor, said there’s currently a “fundamental mismatch between the speed of modern warfare and our ability to respond.”

AI military partnerships

Scale’s announcement is part of a broader trend of AI companies not only walking back bans on military use of their products, but also entering into partnerships with defense industry giants and the Defense Department.

In November, Anthropic, the Amazon-backed AI startup founded by ex-OpenAI research executives, and defense contractor Palantir announced a partnership with Amazon Web Services to “provide U.S. intelligence and defense agencies access to Anthropic’s Claude 3 and 3.5 family of models on AWS.” This fall, Palantir signed a new five-year contract worth up to $100 million to expand U.S. military access to its Maven AI warfare program.

In December, OpenAI and Anduril announced a partnership allowing the defense tech company to deploy advanced AI systems for “national security missions.”

The OpenAI-Anduril partnership focuses on “improving the nation’s counter-unmanned aircraft systems (CUAS) and their ability to detect, assess and respond to potentially lethal aerial threats in real time,” according to a release at the time, which added that the deal will help reduce the burden on human operators.

“The problem is that you don’t have control over how the technology is actually used — if not in the current usage, then certainly in the longer term once you already have shared the technology,” Margaret Mitchell, researcher and chief ethics scientist at Hugging Face, said in an interview. “So I’m a little bit curious about how companies are actually realizing that – do they have people who have security clearance who are literally examining the usage and verifying that it’s within the constraints of no direct harm?”

Hugging Face, an AI startup and OpenAI competitor, has turned down military contracts before, including contracts that didn’t include the potential for direct harm, according to Mitchell. She said the team “understood how it was one step away from direct harm,” adding that “even things that are seemingly innocuous, it’s very clear that this is one piece in a pipeline of surveillance.”

Mitchell said that even summarizing social media posts could be seen as one step away from being directly harmful, since those summaries could be used to potentially identify and take out enemy combatants.

“If it’s one step away from harm and helping propagate harm, is that actually better?” Mitchell said. “I feel like it’s a somewhat arbitrary line in the sand, and that works well for company PR and maybe employee morale without actually being a better ethical situation... You can tell the Department of Defense, ‘We’ll give you this technology, please don’t use this to harm people in any way,’ and they can say, ‘We have ethical values as well and so we will align with our ethical values,’ but they can’t guarantee that it’s not used for harm, and you as a company don’t have visibility into it being used for harm.”

Mitchell called it “a game of words that provides some kind of veneer of acceptability… or non-violence.”

Tech’s military pivot

Google in February removed a pledge to abstain from using AI for potentially harmful applications, such as weapons and surveillance, according to the company’s updated “AI Principles.” It was a change from the prior version, in which Google said it would not pursue “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people,” and “technologies that gather or use information for surveillance violating internationally accepted norms.”

In January 2024, Microsoft-backed OpenAI quietly removed a ban on the military use of ChatGPT and its other AI tools, just as it had begun to work with the U.S. Department of Defense on AI tools, including open-source cybersecurity tools.

Until then, OpenAI’s policies page specified that the company did not allow the usage of its models for “activity that has high risk of physical harm” such as weapons development or military and warfare. But in the updated language, OpenAI removed the specific reference to the military, although its policy still states that users should not “use our service to harm yourself or others,” including to “develop or use weapons.”

News of the military partnerships and mission statement changes follows years of controversy about tech companies developing technology for military use, highlighted by the public concerns of tech workers — especially those working on AI.

Employees at virtually every tech giant involved with military contracts have voiced concerns, beginning when thousands of Google employees protested the company’s involvement with the Pentagon’s Project Maven, which would use Google AI to analyze drone surveillance footage.

Palantir would later take over the contract.

Microsoft employees protested a $480 million Army contract that would provide soldiers with augmented-reality headsets, and more than 1,500 Amazon and Google workers signed a letter protesting a joint $1.2 billion, multiyear contract with the Israeli government and military, under which the tech giants would provide cloud computing services, AI tools and data centers.

“There are always pendulum swings with these kinds of things,” Mitchell said. “We’re in a swing now where employees have less say within technology companies than they did a few years ago, and so it’s kind of like a buyer and seller market… The interests of the company are now much heavier than the interests of the individual employees.”

