President Donald Trump said on February 27, 2026 that he directed every federal agency to “immediately” stop using technology from Anthropic, the AI company behind the Claude models. Trump announced the directive in a social media post and paired it with a warning that he expects cooperation during a transition away from the company’s tools.

The directive landed at the end of a fast-moving dispute between Anthropic and the Pentagon over how the military can use powerful generative AI systems. The Pentagon pushed Anthropic to accept contract language that would allow “all lawful” military uses, while Anthropic sought explicit guardrails that block mass domestic surveillance and fully autonomous weapons.

Within hours, the General Services Administration (GSA) said it would remove Anthropic from key federal procurement channels, including the Multiple Award Schedule (MAS) and a government AI testing platform called USAi.gov.

At the Pentagon, Defense Secretary Pete Hegseth said he would treat Anthropic as a “supply-chain risk,” a designation that can reach beyond direct government use and affect contractors that support defense work.

What comes next depends on implementation details that agencies, contractors, and vendors will need to clarify quickly: which Anthropic products fall under the ban, how fast agencies must unwind deployments, and how broadly the supply-chain risk designation applies across the defense industrial base.

What Trump said he ordered, and what it likely changes immediately

Trump said he directed “EVERY Federal Agency” to cease using Anthropic technology right away. He also described a transition period for parts of government that already rely on Anthropic in embedded or higher-stakes deployments, including national security use cases. Reuters reported that Trump allowed a six-month phaseout for the Defense Department and other agencies that use the company’s products.

Multiple outlets also reported that the Pentagon set a specific deadline for Anthropic to accept revised terms and that Trump’s announcement came shortly before that deadline. NPR reported that defense officials set the cutoff at 5:01 p.m. ET on February 27, 2026.

A government-wide shift away from any widely used enterprise AI tool creates immediate operational questions, even before agencies publish formal guidance:

Agencies must inventory where they use Anthropic models and services, including chat assistants, summarization tools, analytics workflows, internal knowledge search, and code assistance.

Agencies must assess whether Anthropic runs inside vendor platforms (for example, as a model option in a broader “AI gateway”) or whether systems call Anthropic APIs directly.

Agencies must decide what they can shut off quickly versus what requires continuity planning, testing, and security re-authorization before they swap in a replacement.
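The inventory step above can be sketched in code. The following is a minimal, illustrative dependency scan, assuming a hypothetical file layout and a few heuristic markers (the `api.anthropic.com` hostname, SDK imports, model-name prefixes); a real audit would also cover vendor platforms, package manifests, and embedded integrations that a text search cannot see:

```python
import os
import re

# Heuristic markers that suggest a direct Anthropic dependency.
# Illustrative only: a real audit would check far more than these strings.
ANTHROPIC_MARKERS = re.compile(
    r"api\.anthropic\.com|import anthropic|claude-", re.IGNORECASE
)

# File types worth scanning for endpoints, imports, and model names.
SCAN_EXTENSIONS = (".py", ".json", ".yaml", ".yml", ".env", ".cfg")

def scan_for_anthropic(root: str) -> list[str]:
    """Return files under `root` that appear to reference Anthropic services."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.endswith(SCAN_EXTENSIONS):
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    if ANTHROPIC_MARKERS.search(f.read()):
                        hits.append(path)
            except OSError:
                continue  # unreadable files are skipped, not fatal
    return sorted(hits)
```

A scan like this only answers the "direct API calls" half of the question; the "model option inside a vendor platform" half usually requires reviewing contracts and admin consoles rather than code.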

Defense and intelligence users face extra complexity when models operate on classified networks or inside mission-support tooling. Wired reported that Anthropic’s “Claude Gov” models support tasks like writing reports and summarizing documents, while also supporting intelligence analysis and planning in some contexts.

Because Trump announced the directive in a social media post, agencies and contractors will likely look for follow-on documents that translate the announcement into procurement and cybersecurity actions. Even without a single formal order, agencies can move quickly through contract modifications, stop-use directives, and security policy updates—especially when GSA and the Pentagon take parallel steps.

GSA’s move: pulling Anthropic from federal buying channels

GSA plays a central role in how agencies buy commercial technology. On February 27, 2026, GSA said it would remove Anthropic from USAi.gov and from its Multiple Award Schedule (MAS), which the agency describes as a primary procurement vehicle that offers pre-negotiated terms for commercial products and services.

GSA also described USAi.gov as a secure, cloud-based evaluation suite that agencies can use to test and deploy AI models from providers at no cost, positioning it as a centralized sandbox for experimentation and rollout.

Those actions matter for two reasons:

They can tighten access even for agencies that do not run Anthropic directly, because procurement vehicles often serve as the “default path” for buying and renewing enterprise software.

They can influence vendor ecosystems. When MAS access changes, resellers, integrators, and cloud partners often adjust offerings to reduce compliance risk.

In practical terms, GSA’s step signals that the administration intends to operationalize the president’s directive through the purchasing infrastructure that agencies use every day, not only through agency-by-agency decisions.

The Pentagon’s escalation: “supply-chain risk” and its spillover effects

Trump’s directive focused on federal agencies’ own use. The Pentagon’s supply-chain risk move potentially reaches further, because it can affect contractors and suppliers that support defense missions.

Reuters reported that the supply-chain risk designation typically applies to firms in adversary nations and that it can mean defense contractors may lose the ability to deploy Anthropic AI as part of work for the Pentagon. Reuters also noted that the defense industrial base includes tens of thousands of contractors.

NPR described the outcome in similar terms, reporting that Hegseth’s post said no contractor, supplier, or partner that does business with the U.S. military may conduct any commercial activity with Anthropic.

The Washington Post framed the designation as a “far-reaching” blacklist that blocks agencies and contractors from doing business with the company, underscoring how unusual it looks when applied to a leading U.S. AI lab.

Even with those statements, a key uncertainty remains: how enforcement will work across complex contracting chains. NPR reported that experts viewed the designation as unusual and noted uncertainty about how far the restriction could extend, including whether it would limit contractors only for Pentagon work or restrict broader use.

That ambiguity matters because modern defense software supply chains often involve layered subcontractors, cloud platforms, and third-party model providers. A broad interpretation can trigger fast “de-risking” behavior, where contractors eliminate any dependency that might jeopardize eligibility for future government work.

Why this happened: the fight over AI guardrails in military use

The immediate trigger came from a contract dispute over acceptable use rules for Anthropic’s models in military settings.

Anthropic’s position: The company sought explicit protections that prevent use of its AI for mass domestic surveillance and for fully autonomous weapons that can select and engage targets without meaningful human involvement. Reuters reported that Anthropic pursued guarantees against fully autonomous weapons and mass domestic surveillance, while the Pentagon said it had no interest in those uses.

The Pentagon’s position: Defense officials argued that the government should decide how it uses technology, as long as it stays within U.S. law. NPR reported that the Pentagon pushed AI companies, including Anthropic, to allow use “for all lawful purposes,” and a senior Pentagon official told NPR that determining legality rests with the Pentagon as the end user.

The deadline and pressure tactics: NPR reported that the Pentagon set a 5:01 p.m. ET deadline on February 27 and warned that it would terminate its partnership and deem Anthropic a supply-chain risk if the company did not accept the terms.

Alongside the blacklist threat, the Pentagon also discussed the Defense Production Act (DPA) as a lever. NPR reported that the Pentagon threatened to invoke the Defense Production Act and that an expert described DPA use in this context as extraordinary and rare outside true emergencies. The Washington Post similarly reported earlier in the week that Pentagon officials considered invoking the DPA to compel access if Anthropic did not comply by the deadline.

From Anthropic’s perspective, the company described the combination of threats as contradictory—treating the firm as both too risky to use and too important to exclude. NPR reported that the government paired the supply-chain risk threat with a DPA threat and captured Anthropic’s view that the company could not agree “in good conscience” to the Pentagon’s request.

What is Anthropic’s technology, and why did federal agencies use it in the first place?

Anthropic competes with other frontier model providers by offering Claude, a family of large language models that support chat, summarization, analysis, and other generative AI tasks. The company has marketed itself as “safety-focused,” and that brand posture shaped the current dispute.

Government use expanded because agencies increasingly treat generative AI as a productivity layer: summarizing large documents, drafting internal reports, supporting analysis workflows, and accelerating basic coding tasks. Wired reported that Claude Gov served routine tasks such as writing reports and summarizing documents, while also supporting intelligence analysis and planning in some settings.

Anthropic also gained an early foothold in sensitive national security contexts. Reuters reported that Claude is in use across parts of the intelligence community and armed services and that Anthropic was among the first frontier labs to place models on classified networks.

That footprint matters because replacing an AI model in a high-security environment requires more than swapping one API endpoint for another. Teams must validate model behavior, re-test workflows, redo security authorizations, and sometimes retrain users. A six-month transition window may sound long in consumer tech, but it can feel short in government systems that run under strict change-control processes.
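The point above can be made concrete with a toy sketch: swapping one model provider for another is often a one-line change at the code level, and the real transition work is the regression gate that checks the replacement against recorded baseline behavior. Every name here is illustrative, not drawn from any actual agency system:

```python
from typing import Callable

# Two stand-in "model providers" behind one call signature.
# Illustrative only: real providers would be API clients, not string functions.
def provider_a(prompt: str) -> str:
    return f"summary[{prompt[:20]}]"

def provider_b(prompt: str) -> str:
    return f"SUMMARY[{prompt[:20]}]"

def regression_gate(
    baseline: dict[str, str], candidate: Callable[[str], str]
) -> list[str]:
    """Return prompts where the candidate diverges from recorded baselines.

    Swapping the provider is trivial; deciding whether the divergences this
    gate surfaces are acceptable is the slow, human part of a transition.
    """
    return [p for p, expected in baseline.items() if candidate(p) != expected]

# Record baselines from the incumbent, then check the replacement against them.
prompts = ["quarterly readiness report", "logistics memo"]
baseline = {p: provider_a(p) for p in prompts}
diverging = regression_gate(baseline, provider_b)
```

In this sketch every prompt diverges, which is the normal starting point when changing models: the output of the gate is a review queue for humans, not a pass/fail verdict, and in classified environments each reviewed workflow may also need fresh security authorization.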

Competing narratives: “politicization” vs. “contractors shouldn’t set policy”

The dispute also carries a political dimension because it centers on who gets to define limits on military AI use: the government as customer and sovereign authority, or the vendor as creator and operator of the technology.

Administration framing: Trump and senior defense officials argued that Anthropic tried to impose its own terms on national security operations. Multiple reports described Trump’s public attacks on the company and his claim that the government should not accept constraints from a private tech firm.

Anthropic framing: The company argued that certain uses fall outside what current AI can safely and reliably do and that it supports national defense while drawing narrow red lines around mass surveillance and fully autonomous weapons. NPR reported Anthropic’s stance that those uses sit outside what current technology can safely support and that it was not objecting to specific operations on an ad hoc basis.

Outside observers often focus less on today’s declared intentions and more on future drift. NPR reported that the Pentagon said it does not intend to use Claude for domestic surveillance or autonomous weapons, but it still demanded freedom to use AI for “all lawful purposes.” Critics worry that “lawful” can still include controversial uses depending on how future policy and interpretations evolve.

Reactions and pushback: lawmakers and experts raise concerns

Democratic Sen. Mark Warner criticized the directive, questioning whether national security decisions flowed from careful analysis or political considerations. Reuters reported Warner’s statement and his concerns about the rhetoric directed at the company. CBS News also reported Warner’s pushback in response to both Trump’s and Hegseth’s actions.

Experts also debated whether the conflict reflected real operational needs or a broader struggle over governance and precedent. Wired quoted a defense AI expert who characterized the dispute as avoidable and as centered on theoretical use cases that are not “on the table for now.”

NPR reported that a senior fellow at the Center for a New American Security described the supply-chain-risk designation as something traditionally aimed at foreign adversary technology and described DPA use as extraordinary and rare.

The Washington Post also reported signs of broader industry concern, including employee activism and fears that a major escalation could complicate government relationships with other AI developers.

What the ban could mean for agencies, contractors, and AI vendors

A policy that forces agencies to stop using a major AI provider creates three tiers of impact: immediate operational disruption, medium-term procurement reshuffling, and longer-term shifts in how vendors negotiate “acceptable use” boundaries with government customers.

1) Operational disruption and transition risk

Agencies that used Claude for basic productivity work may switch faster, because those use cases often run on unclassified systems and do not require deep integration. Agencies that embedded Anthropic tooling into mission-support pipelines or higher-security environments may need more time and more funding to transition safely.

Reuters reported that Anthropic’s tools were already in use across national security contexts, which raises the likelihood that some systems will require careful offboarding. Defense One also noted that replacing Anthropic tools across government could take months or longer, reinforcing that agencies may face real schedule pressure during the transition period.

2) Procurement reshuffling and a tighter vendor ecosystem

GSA’s removal of Anthropic from MAS and USAi.gov signals that procurement levers will drive the transition, not only agency CIO choices. That shift can change the competitive landscape inside federal IT, where “approved pathways” often matter as much as raw product quality.

Agencies that want AI capabilities will likely expand reliance on alternative providers already positioned for government work. Reuters noted that other major labs have sought defense business, reflecting a broader trend of increased Silicon Valley engagement with Washington.

3) Vendor negotiations over military AI “red lines”

The biggest strategic question may involve precedent: can a frontier AI vendor maintain enforceable restrictions on certain uses when the customer is the U.S. government?

NPR framed the dispute as a broader sticking point about whether AI companies can set restrictions on government use. If future contracts standardize “all lawful use” language across vendors, companies that emphasize safety-based restrictions may face pressure to soften policies—or lose access to a major, prestige-heavy customer segment.

At the same time, heavy-handed government action could chill participation among some engineers and researchers who worry about weaponization or surveillance. Reuters referenced earlier historical tensions, including employee protests around Pentagon AI work at major tech firms, suggesting that workforce dynamics can influence corporate willingness to accept defense contracts.

The business stakes for Anthropic

Anthropic has pursued major enterprise and government customers while also preparing for a potential public offering. Reuters reported that the showdown comes at a sensitive moment as the company competes aggressively for business and explores an IPO timeline.

NPR reported that the Pentagon contract carries a ceiling of up to $200 million and described it as small relative to the company’s broader financial picture, while still noting that investor perception and partner deals could shift in response to the administration’s actions.

Even if the direct revenue hit stays limited, a government blacklist can create second-order effects:

Enterprise buyers may hesitate if they expect regulatory risk, reputational spillover, or supply-chain complications.

Cloud and defense partners may reduce co-selling or integrations if they fear losing eligibility for government work.

Competitors can use government approvals and “cleared for classified” status as marketing leverage, which matters in tightly regulated industries like defense, aerospace, and critical infrastructure.

Those dynamics often matter more than the initial contract dollars.

What happens next: the key questions to watch

The February 27 announcements created a clear political headline, but agencies and contractors will need answers to detailed implementation questions.

Will the White House issue a formal government-wide directive?

Trump announced the decision via social media, while agencies like GSA started implementing procurement actions. Watch for a formal memo, executive action, or acquisition guidance that defines scope, exceptions, and enforcement timelines.

How broadly will the supply-chain-risk restriction apply?

Reuters and NPR both described a path where defense contractors may lose the ability to use Anthropic tools in Pentagon work. A broader reading could pressure contractors to drop Anthropic entirely, even for non-defense lines of business, to avoid compliance uncertainty.

Will the Pentagon pursue the Defense Production Act path?

NPR and the Washington Post reported DPA threats as part of the pressure campaign. If the administration tries to use the DPA in this context, legal challenges could follow, and the dispute could expand beyond procurement into questions about executive authority and compelled access to privately developed AI systems.

Which vendors fill the gap?

Wired and Reuters both noted that other major AI companies also pursue defense and national security work, and government customers will likely shift usage toward providers that can meet mission needs under updated terms. The transition process will reveal whether agencies prioritize model quality, cost, security posture, or contract flexibility.

Bottom line

Trump’s directive to stop federal use of Anthropic technology, combined with the Pentagon’s supply-chain-risk designation and GSA’s procurement actions, marks one of the sharpest U.S. government interventions into the frontier AI vendor landscape to date.

The dispute started as a contract fight over AI guardrails in military use, but it now tests a larger question: how much control an AI developer can retain over downstream uses when the buyer holds national security authority and broad procurement power.