Trump Wants to Destroy Anthropic Because It Is Doing Congress’ Job of Preventing Him from Abusing AI
America will lose its competitive edge on this technology without legislation outlawing unconstitutional uses
Over the last two weeks, the Department of Defense has initiated two wars: one against a nation with a long history of conflict with the United States; the other against one of the fastest-growing new companies in American history: Anthropic, the frontier artificial intelligence lab behind the popular Claude model. The DOD has effectively declared both Iran and Anthropic to be enemies of America, and though the weapons the DOD is using in each conflict differ dramatically—explosive missiles versus bureaucratic legal statutes—the department has made clear in both cases that its objective is to severely damage, if not totally destroy, the enemy. These two conflicts are intertwined and began on the same day: the very afternoon that the DOD blacklisted Anthropic, U.S. strikes on Iran commenced—with Claude reportedly helping to analyze intelligence and plan operations.
At the heart of this conflict are two clashing philosophies about AI’s potential dangers. Anthropic, which was founded by former OpenAI employees who wanted to deploy AI systems with a greater emphasis on safety and responsibility, believes those dangers are real and should constrain how AI is developed and deployed. The Trump administration, by contrast, believes that safety restrictions are obstacles to American dominance—as its 2025 AI Action Plan made clear—and has made its “Department of War” (renamed from “Department of Defense”) a primary vehicle for achieving that dominance.
These cultures came into conflict when the DOD attempted to renegotiate its contract with Anthropic to remove any previous government usage restrictions and allow for “all lawful purposes.” Anthropic agreed to drop all but two restrictions: the use of its AI models for mass domestic surveillance and their use to control fully autonomous weapons.
What makes this particularly striking is that Anthropic has been among the most forward-leaning AI companies to work with the U.S. government. It partnered with Palantir—a software company and defense contractor whose data collection and surveillance capabilities have drawn sustained criticism from privacy advocates and human rights organizations—and was the first frontier AI model on classified networks, making Claude an integral part of the DOD’s efforts from Venezuela to Iran.
But despite being the frontier AI company most integrated into the systems of the Department of Defense, Anthropic had some limits. When it refused to accede to all of the Trump administration’s demands, it was branded “woke AI” and was subjected to the full wrath of the DOD and the federal government.
The clash between the DOD and Anthropic can be confusing, since it encompasses legitimate disagreements over AI and defense policy—but those are overshadowed by the Trump administration’s likely illegal approach to attacking what the president calls a “Radical Left AI company” that dares to take even a minimal stand for AI safety. Why this fight started and how the government has chosen to wage it are two separate questions, and conflating them is one of the major mistakes most coverage of this conflict has made.
How the Conflict Began
Anthropic has contracts with the DOD and the broader U.S. government. As of the start of the DOD’s war against it, Claude was the only frontier AI model cleared for deployment on classified networks—a distinction it held from late 2024 until the Pentagon’s near-simultaneous deals with OpenAI and xAI in the wake of the blacklisting. Those contracts included restrictions on the use of Anthropic’s AI that the DOD wanted to renegotiate and remove. Anthropic agreed to drop all of them but stuck to its guns on barring the use of its models for mass domestic surveillance and fully autonomous weapons systems. Anthropic’s contracts and the proposed modifications to those contracts are not public, though they are referenced in the lawsuits.
In late February, Secretary of Defense Pete Hegseth and Anthropic CEO Dario Amodei met at the Pentagon, where the DOD reportedly issued an ultimatum: agree to its new terms or face one of two consequences. Either Anthropic would be designated a “supply chain risk” and removed from DOD networks and those of all its contractors, on the grounds that it posed a national security liability—or the Defense Production Act would be invoked to compel Anthropic to continue working with the DOD, on the grounds that not using it posed a national security risk. These are two inherently contradictory positions.
On February 27, President Donald Trump used Truth Social to order the federal government to “IMMEDIATELY CEASE all use of Anthropic’s technology.” Later that same afternoon, Hegseth posted on X that he was designating Anthropic a “supply chain risk.” He also included an unprecedented and probably illegal statement requiring all DOD contractors and suppliers to refrain from conducting “any commercial activity with Anthropic”—a move which, if upheld by the courts, would prevent cloud service providers like Amazon Web Services (AWS), also a DOD contractor and supplier, from hosting Anthropic’s models, potentially destroying the company. Hegseth simultaneously announced that the DOD would continue using Anthropic for up to six months, and within hours the United States began strikes against Iran, with Claude reportedly integral to the war effort.
Despite Hegseth’s post, Anthropic was not formally notified of the DOD’s new legal designation until March 3, and the designation turned out to apply more narrowly than Hegseth’s post had suggested—covering DOD procurement contracts rather than all commercial activity with Anthropic. On March 9, Anthropic filed suit against the DOD and the U.S. government; a court hearing is set for March 24 in the District Court for the Northern District of California.
In all the back and forth, many have conflated the Trump administration’s political retaliation against Anthropic with the legitimate policy debates that triggered it. The straightforward legal choice would have been for the administration to cancel its contract with Anthropic and use other, less destructive methods to address its issues. Instead, it attempted to claim government contractors couldn’t “conduct any commercial activity with Anthropic,” which was almost certainly illegal and ultimately not even reflected in its official notice.
Even if the DOD loses in court, it has sent a clear message to American AI companies. Private companies should be able to choose to work with the government because they trust it to follow the law. By punishing a company for not agreeing to its demands, the federal government signals to the entire AI industry that cooperation with it comes with existential risk—undermining the voluntary partnerships that American AI leadership depends on.
The Attack on Anthropic
The political choices the DOD made are straight out of the second Trump administration’s standard playbook. It used the full threat of government power to attempt to coerce a private entity into total submission. It weaponized unrelated statutes and took probably illegal action just to try to hurt a perceived enemy. It has been calling Anthropic’s customers to intimidate them. It is reportedly preparing an executive order—that might not have any basis in law—to continue the fight.
Almost all of these claims are laid out with detailed evidence in Anthropic’s dual lawsuits against the DOD and the federal government. Experts believe Anthropic has a strong case and will likely succeed in court. And if it prevails, will the Trump administration treat the defeat like it has treated legal defeats against Big Law firms and move on? Or will it push the battle into new territory by attempting to conduct a long-term, multi-front assault on Anthropic, as it has against Harvard University?
Unfortunately, as too many institutions have learned, the federal government has many levers it could pull to harass Anthropic—everything from revoking employee visas, to DOJ investigations, to having the Securities and Exchange Commission interfere with Anthropic’s potential initial public offering.
The Trump administration’s playbook of authoritarian and expansive executive actions against Anthropic is overshadowing the legitimate policy debates we should be having about the ethical use of artificial intelligence.
As the U.S. continues airstrikes on Iran—a campaign that the DOD’s use of Claude has helped plan—it’s important to realize that the concept of a war fought with, or even by, AI is no longer just the plot of a technothriller novel. It’s our reality. That reality carries with it complicated questions we have barely begun to grapple with: whether private companies can contractually constrain the government’s use of technology it has procured, or whether the Trump administration’s position that the DOD should be free to use that technology for “all lawful purposes” is, whatever one thinks of it, a legitimate and defensible one.
Ultimately, Anthropic was unable to arrive at contract language it was comfortable with and was punished for holding that position. On the same day, and immediately after the retaliation against Anthropic, OpenAI felt comfortable agreeing to a contract with the DOD, stating it believed its contracting language and technical measures had “more guardrails than any previous agreement for classified AI deployments, including Anthropic’s.” Tremendous public backlash against OpenAI followed its decision.
But regardless of the contract language, using government contracting terms to restrict the government’s actions is at best a flawed approach, as others have noted. It is, however, the primary one available in the absence of laws directly limiting what the government can do in these areas.
The CEOs of the leading U.S. frontier AI labs, Anthropic and OpenAI, both claim to hold the same red lines on AI being used for mass surveillance and fully autonomous weapons systems. Both companies have also expressed deep unease with private entities attempting to restrict the government’s use of these tools, especially in the defense context. Both have called on the democratic governance process—in this case Congress—to pass laws establishing clear legal limits on the government’s use of AI for surveillance and autonomous weapons.
What Congress Should Do
Congress has a meaningful opportunity to pass much-needed laws on mass domestic surveillance and fully autonomous weapons systems. Two opportunities are imminent: the reauthorization of Section 702 of the Foreign Intelligence Surveillance Act, which lapses on April 20, and this year’s National Defense Authorization Act. Already, some members of Congress are recognizing the importance of this issue and preparing new legislation. Both opportunities deserve serious, bipartisan examination—the stakes, as this conflict has made clear, could hardly be higher.
Too many are trying to interpret the DOD’s actions against Anthropic through the lens of a reasonable policy debate, when what really happened was a politically calculated decision by the Trump administration to attempt to destroy one of the fastest-growing new companies in American history in its quest for dominance. We must not ignore the consequences of the Trump administration’s legally indefensible actions against Anthropic—a familiar pattern of bringing the full force of the state down on perceived enemies.
But we should also ask Congress to do its job and guard against the most serious abuses of AI by the government. The fight between the DOD and Anthropic should ultimately be remembered not for how it resolves, but for what it forced Congress and the American public to finally confront: that the age of AI-powered warfare is here, and we have no adequate laws to govern it.
© The UnPopulist, 2026
Follow us on Bluesky, Threads, YouTube, TikTok, Facebook, Instagram, and X.
We welcome your reactions and replies. Please adhere to our comments policy.