AI Rivals Unite: OpenAI & Google DeepMind Employees Back Anthropic In Pentagon Lawsuit

· Free Press Journal

More than 30 OpenAI and Google DeepMind employees have filed a statement supporting Anthropic's lawsuit against the US Defense Department after the federal agency labeled the AI firm a supply-chain risk, multiple reports state. The brief landed on the court docket just hours after Anthropic launched its legal challenge. Among the signatories is Jeff Dean, Google's chief scientist and leader of the company's Gemini AI program - a sign that the dispute extends beyond corporate rivalry into broader concerns about US AI governance.

Why Anthropic was blacklisted

The sanction, which severely limits Anthropic's ability to work with military contractors, took effect after Anthropic's negotiations with the Pentagon fell apart. At the heart of the dispute was Anthropic's refusal to allow its technology to be used for mass domestic surveillance or the development of autonomous lethal weapons. The Pentagon argued it should be able to use AI for any "lawful" purpose without being constrained by a private contractor.

What the brief argues

The filing pulls no punches. The amicus brief says the Pentagon's decision "introduces an unpredictability in their industry that undermines American innovation and competitiveness" and "chills professional debate on the benefits and risks of frontier AI systems." It also contends the government had a simpler option available: if it disagreed with Anthropic's terms, it could have cancelled the contract rather than branding a domestic AI company a national security threat.

An awkward moment for OpenAI

As Anthropic's relationship with the Pentagon soured, OpenAI quickly signed its own contract with the US military - a decision some criticised as opportunistic. Even so, OpenAI CEO Sam Altman publicly called the supply-chain risk designation "very bad for our industry and our country," urging the DoD to reverse course.

The case is being closely watched as a potential landmark moment for how Washington regulates - and weaponises - its relationships with domestic AI developers.
