Why the US Treasury is dumping Anthropic AI for national security

The honeymoon between the federal government and Silicon Valley's darling AI startups just hit a brick wall. On the orders of the Trump administration, the US Treasury is cutting ties with Anthropic. This isn't just a minor contract dispute or a software swap. It's a loud, clear signal that, where national security is concerned, the white-glove treatment for AI companies is over at the Department of the Treasury.

If you’ve been following the rise of Claude, Anthropic’s flagship model, you know it was positioned as the "safe" alternative to OpenAI’s ChatGPT. It was supposed to be the responsible choice. But the current administration doesn't care about marketing slogans. They care about data sovereignty and where the digital breadcrumbs lead.

The national security directive that changed everything

The move stems from a broad executive directive focused on tightening the grip on dual-use technology. In the eyes of the current White House, AI isn't just a productivity tool for writing emails or summarizing spreadsheets. It’s a strategic asset that, if handled poorly, becomes a massive liability. The US Treasury handles some of the most sensitive financial data on the planet. We're talking about tax records, international sanctions lists, and market-moving economic data.

The administration's logic is simple. If a private company has vulnerabilities in its supply chain, its investor list, or its cloud infrastructure, it doesn't belong in the Treasury's tech stack. Anthropic has faced questions before about its complex web of global investors. While the company has cultivated a "public benefit" image, the Treasury is now pivoting toward internal, high-security models that don't rely on third-party API calls that could be intercepted or logged.

Why Anthropic lost the trust of the Treasury

It’s easy to think this is just politics. It isn't. There are technical and structural reasons why the US Treasury is walking away.

First, the "black box" problem. Even though Anthropic talks a big game about "Constitutional AI," the government can't actually see under the hood. When the Treasury uses an AI to help flag money laundering or analyze global trade patterns, they need to know exactly how a decision was reached. They can't just take a startup's word that the model is behaving.

Second, the cloud dependency is a nightmare for hawks in the administration. Most of these models run on massive server farms owned by big tech firms. The directive highlights a need for "on-premises" or air-gapped solutions. Anthropic’s current business model is heavily built on cloud access. That doesn't fly when you're trying to protect the integrity of the US dollar.

Third, there's the China factor. The administration is obsessed—rightly or wrongly—with any tie, however thin, to foreign adversaries. Any AI company with a global footprint and international backing is now under a microscope. If there’s even a 1% chance that data could be leaked through a back door, the Treasury is out.

What this means for the AI industry at large

The ripple effects will be felt in every boardroom in San Francisco. For years, these companies assumed the government would be their biggest, most reliable customer. They built "Government Editions" of their software and hired lobbyists to roam the halls of DC.

This pivot shows that "Big Government" is no longer interested in being a test subject for Silicon Valley. We're likely to see a massive shift toward "Sovereign AI." This means the government will likely fund the development of its own models, built by defense contractors like Palantir or Lockheed Martin, rather than renting them from startups.

  • Startups will lose their "safe" status.
  • Procurement rules will become impossibly strict.
  • Open-source models might actually win because the government can audit the code.

Anthropic isn't the only one in the crosshairs. If the Treasury is dumping them, you can bet the Department of Defense and the State Department are looking at their own contracts with a magnifying glass.

Practical steps for tech leaders and investors

If you're running a company that relies on these tools, or if you're an investor in the space, you need to change your strategy. Don't assume that "too big to fail" applies to AI contracts.

  1. Audit your data flow. If you’re a government contractor, you should stop using public AI APIs for sensitive work immediately. Start looking at local deployments of models like Llama 3 or other open-source variants that you can run on your own hardware.
  2. Diversify your providers. Relying on one model—whether it's Claude or GPT-4—is a massive risk. The Treasury's move proves that access can be cut off overnight for reasons that have nothing to do with the quality of the software.
  3. Watch the hardware. The next phase of this directive will likely target where the chips come from and where the data is stored. If your AI provider isn't using US-based, secure-facility hardware, they're a liability.

The era of the "move fast and break things" AI startup in the federal government is over. The Treasury wants stability, secrecy, and total control. Anthropic couldn't give them that. Now, the rest of the industry has to figure out if they can, or if they'll be left out in the cold too.

Marcus Henderson

Marcus Henderson combines academic expertise with journalistic flair, crafting stories that resonate with experts and general readers alike.