
Anthropic Banned From Government Use: The Defense Production Act, AI Safety, and What Florida Businesses Need to Know


by Orlando Crypto and Technology Law Attorney Carolina Nunez, Esq.


Update (February 28, 2026): OpenAI CEO Sam Altman announced a partnership with the Department of War.


On February 27, 2026, a federal directive required all U.S. agencies to immediately discontinue use of Anthropic’s technology, including its artificial intelligence model, Claude.


The directive came after weeks of escalating tension between Anthropic and the Department of Defense over the company’s refusal to remove two safety guardrails from its government AI contract: prohibitions against using Claude for mass domestic surveillance of American citizens and for fully autonomous lethal weapons systems without human oversight.


Defense Secretary Pete Hegseth had set a deadline of 5:01 p.m. ET on February 27 for Anthropic to comply or face consequences. Anthropic refused. The standoff has now escalated into the most significant clash between the federal government and a private technology company since the post-September 11 surveillance debates.


At The Law Offices of Carolina Nunez, P.A., Attorney Carolina Nunez advises businesses, developers, and technology professionals throughout Orlando, Winter Park, Daytona Beach, and Central Florida on technology law, AI governance, government contracting, and digital asset law. This article breaks down the legal framework behind the Pentagon’s threats: the Defense Production Act of 1950. Call (407) 900-FIRM to speak with a technology attorney about how this dispute may affect your business or government contracts.


DISCLAIMER: This article provides general legal information about federal technology law, the Defense Production Act, and AI regulation. It does not constitute legal advice. Consult directly with a qualified attorney for guidance tailored to your situation. The facts reported here reflect publicly available news as of February 27, 2026, and the situation is rapidly evolving.






What Happened: The Pentagon-Anthropic Showdown



The $200 Million Contract

In July 2025, Anthropic, the San Francisco-based company behind the Claude AI model, signed a contract worth up to $200 million with the U.S. Department of Defense. Under that agreement, Claude became the first AI model authorized to operate on the military’s classified networks. The contract included an acceptable use policy that contained two red-line restrictions Anthropic insisted upon: Claude would not be deployed for mass surveillance of American citizens, and Claude would not be used in fully autonomous weapons systems, meaning weapons that select and engage targets without any human being in the decision loop.



Hegseth’s Ultimatum

On February 24, 2026, Defense Secretary Pete Hegseth met directly with Anthropic CEO Dario Amodei at the Pentagon. At that meeting, Hegseth demanded that Anthropic sign a revised contract granting the military access to Claude for “all lawful purposes” — effectively removing both safety restrictions. Hegseth set a hard deadline: 5:01 p.m. ET on Friday, February 27, 2026. If Anthropic did not comply, the Pentagon threatened three separate consequences: a government-wide ban on Anthropic’s technology, designation of the company as a supply chain risk to national security, and possible invocation of the Defense Production Act.




Anthropic Refuses to Comply

On February 26, Anthropic CEO Dario Amodei issued a public statement: “We cannot in good conscience accede to their request.” Amodei acknowledged the Pentagon’s authority over military decisions but maintained that in a narrow set of cases, namely mass surveillance and fully autonomous lethal weapons, AI can undermine rather than defend democratic values. Anthropic reported that the Pentagon’s revised contract language, delivered overnight, was “framed as compromise” but contained “legalese that would allow those safeguards to be disregarded at will.”



Trump’s Executive Action

On February 27, 2026, just over an hour before Hegseth’s deadline expired, President Trump announced via Truth Social that he was directing every federal agency to immediately cease all use of Anthropic’s technology. He gave the Pentagon a six-month phase-out period, recognizing that Claude is already deeply embedded in military classified systems. Trump warned of “major civil and criminal consequences” if Anthropic fails to cooperate during the transition. Shortly after, Secretary Hegseth announced the Pentagon was designating Anthropic as a supply chain risk to national security, ordering all contractors and partners to sever commercial ties with the company.






The Defense Production Act of 1950: Explained


The Defense Production Act (50 U.S.C. § 4501 et seq.) is a Cold War-era federal law signed by President Harry Truman in 1950 during the Korean War. It grants the President sweeping authority to direct private companies to meet the needs of national defense.


The law is codified in Title 50 of the United States Code, Chapter 55, and remains one of the most powerful executive tools in the American legal system.



Key Provisions


  • Title I (50 U.S.C. § 4511): Authorizes the President to require private companies to prioritize and accept government contracts deemed necessary for national defense. Companies that receive such orders must fulfill them ahead of other customers.

  • Title III (50 U.S.C. § 4531 et seq.): Allows the government to use loans, direct purchases, and financial incentives to expand production of goods critical to national defense.

  • Title VII (50 U.S.C. § 4551 et seq.): Provides enforcement mechanisms, authorizes investigations, and allows the government to establish voluntary agreements with private industry.

  • Section 103 (50 U.S.C. § 4513): Establishes criminal penalties (a fine of up to $10,000, imprisonment for up to one year, or both) for any person who willfully fails to perform any act required by a DPA order.



How the DPA Has Been Used Before

The Defense Production Act has been invoked during virtually every major national emergency since its passage. President Trump invoked it during the COVID-19 pandemic to boost production of ventilators and personal protective equipment.


President Biden used it to address the 2022 baby formula shortage and to accelerate domestic production of electric vehicle batteries and critical minerals. However, the law has not been used to compel a technology company to alter or remove safety features from a software product.



Can the Government Force Anthropic to Remove AI Safeguards?

This is the central legal question. Under Title I of the DPA, the government can order a company to prioritize a government contract and produce goods it already manufactures. However, legal scholars note several significant limitations:


  • A company may object if the government is ordering it to produce something it does not already make. Anthropic does not currently produce an unrestricted version of Claude without safety guardrails.

  • A company may challenge DPA orders it deems unreasonable, and Anthropic has indicated it considers removing safety features to be unreasonable and potentially dangerous.

  • First Amendment concerns arise if the government compels the alteration of software code, which courts have recognized as protected speech in certain contexts (Bernstein v. U.S. Dept. of Justice, 9th Cir. 1999).

  • If Anthropic refuses a formal DPA order and the matter proceeds to litigation, it would test entirely new legal ground at the intersection of national security authority, technology regulation, and corporate rights.




What “Supply Chain Risk” Means for Government Contractors


The Pentagon’s designation of Anthropic as a supply chain risk carries consequences that extend far beyond the loss of a single contract. Under the Federal Acquisition Regulation (FAR), when a company is labeled a supply chain risk, every defense contractor, subcontractor, and partner that does business with the Pentagon must certify that its operations, products, and workflows do not rely on that company’s technology.


For Anthropic, whose Claude model is embedded in enterprise systems across healthcare, finance, legal technology, and cybersecurity, this designation could trigger a cascading effect:


  • Major defense contractors (Palantir, Booz Allen, Raytheon, Lockheed Martin, and others) that currently integrate Claude or Anthropic APIs into their workflows may be forced to prove they have eliminated all Anthropic dependencies.

  • Enterprise software companies that license Anthropic technology and also serve government clients could face a choice between maintaining their Pentagon relationships or their Anthropic partnerships.

  • Startups and small businesses building products on Anthropic’s API who also pursue government contracts may find themselves locked out of federal procurement opportunities.


For Florida technology companies and government contractors, particularly those in the Orlando and Central Florida defense corridor, this designation demands immediate attention. If your company uses Claude or any Anthropic product in any capacity, and you also hold or pursue federal contracts, you should consult with a technology attorney now to assess your exposure and compliance obligations.





Legal Questions: Surveillance, Autonomy, and the Bill of Rights


Anthropic’s two red lines (no mass domestic surveillance and no fully autonomous weapons) are not arbitrary corporate preferences. They touch on fundamental constitutional protections that every American and every Florida business should understand.



Mass Surveillance and the Fourth Amendment

The Fourth Amendment to the United States Constitution protects Americans against unreasonable searches and seizures and generally requires a warrant supported by probable cause before the government can conduct surveillance of its citizens. The Electronic Communications Privacy Act (18 U.S.C. § 2510 et seq.) and the Foreign Intelligence Surveillance Act (50 U.S.C. § 1801 et seq.) impose specific procedural safeguards on government surveillance programs.


AI-powered mass surveillance, where an algorithm monitors millions of Americans’ communications, locations, and behaviors without individualized suspicion, raises serious constitutional concerns that federal courts have not yet fully addressed. Anthropic’s guardrail was designed to prevent Claude from becoming the backbone of such a system.



Autonomous Weapons and the Laws of Armed Conflict

International humanitarian law, including the Geneva Conventions and the principles of distinction, proportionality, and military necessity, generally requires a human decision-maker in the chain of lethal force.


A fully autonomous weapons system that uses AI to select and engage targets without human approval raises profound questions under both international law and the U.S. Department of Defense’s own Directive 3000.09, which has historically required “appropriate levels of human judgment” over the use of lethal force. Anthropic’s second guardrail sought to ensure Claude would not enable a weapon system to kill without a human being making the final call.



Compelled Speech and the First Amendment

A less obvious but legally significant issue is whether using the Defense Production Act to force Anthropic to alter its AI model constitutes compelled speech under the First Amendment. Federal courts have recognized in multiple contexts that software code is a form of expression protected by the First Amendment. If the government orders Anthropic to rewrite Claude’s safety instructions — its “constitution,” as the company calls it — that raises a novel question about whether the state can compel a private company to express something it fundamentally disagrees with.




Industry Response: OpenAI, Google, and the AI Safety Debate

Anthropic does not stand alone. Within hours of Amodei’s public refusal, the AI industry began rallying around the same principles:


  • OpenAI CEO Sam Altman told employees in an internal memo that OpenAI shares Anthropic’s red lines and would push for the same restrictions on mass surveillance and autonomous weapons in its own classified-systems negotiations with the Pentagon.

  • More than 100 Google engineers signed a letter to Google’s chief scientist, Jeff Dean, requesting similar limits on how the company’s Gemini AI models are used by the U.S. military.

  • Altman told CNBC on February 27 that he does not believe the Pentagon should threaten AI companies with the Defense Production Act, stating that companies should work with the military “as long as it is going to comply with legal protections.”


Meanwhile, Elon Musk’s xAI (maker of the Grok model) became the second company cleared for classified military systems. Musk sided with the Trump administration, writing on his social media platform that “Anthropic hates Western Civilization.” The split within the tech industry underscores the reality that how AI companies negotiate with the federal government will shape technology regulation for decades to come.





What This Means for Florida Businesses and Technology Companies



Whether you are a defense contractor in Orlando, a software developer in Winter Park, a startup founder in Tampa, or a solo practitioner using AI tools in your law or medical practice, this dispute has real consequences for your business:


1. Audit Your AI Supply Chain

If you hold or pursue any federal or state government contract, identify every point in your technology stack where Anthropic or Claude is used, directly or through third-party integrations. The supply-chain risk designation may require you to certify that Anthropic technology is absent from your operations.



2. Review Your Terms of Service

The core lesson of this dispute is that acceptable use policies and terms of service in technology contracts are not just legal boilerplate; they can become the subject of billion-dollar standoffs with the federal government. If your company licenses AI technology, review those agreements now with a qualified technology attorney to understand what restrictions exist and how they might affect your obligations to your own clients.



3. Understand the Precedent Being Set

If the government successfully uses the Defense Production Act to force a technology company to alter its software, it would be the first time in the Act’s history that such power has been applied to intellectual property and software code rather than physical goods. That precedent would affect every SaaS company, every AI developer, and every cloud services provider in the United States, including those right here in Florida.



4. Protect Your Digital Assets

This dispute also highlights the growing importance of digital asset protection in both business and personal contexts. At The Law Offices of Carolina Nunez, P.A., we help Florida families and businesses protect their cryptocurrency, digital accounts, software licenses, and intellectual property through comprehensive estate planning and business planning strategies.





Timeline: Key Dates in the Anthropic-Pentagon Dispute


  • July 2025: Anthropic signs $200 million defense contract; Claude becomes first AI on classified military networks.

  • February 24, 2026: Secretary Hegseth meets Anthropic CEO Amodei at the Pentagon; demands removal of safety guardrails; sets Friday deadline.

  • February 26, 2026: Pentagon delivers “best and final offer” overnight; Anthropic CEO publishes public statement refusing to comply.

  • February 27, 2026 (3:00 p.m. ET): President Trump orders all federal agencies to immediately cease using Anthropic technology; grants Pentagon six-month phase-out.

  • February 27, 2026 (5:01 p.m. ET): Pentagon deadline expires; Secretary Hegseth designates Anthropic a supply chain risk to national security.





Frequently Asked Questions: The Defense Production Act and AI



What is the Defense Production Act?

The Defense Production Act of 1950 (50 U.S.C. § 4501 et seq.) is a federal law that gives the President authority to compel private companies to prioritize government contracts, expand production of goods needed for national defense, and take other actions necessary for the country’s security. It was originally enacted during the Korean War but has been reauthorized and used many times since, most recently during the COVID-19 pandemic.



Can the government force an AI company to remove safety features?

This is untested legal territory. The DPA has historically been used to compel the production of physical goods — ventilators, ammunition, semiconductors. Legal experts believe that if the government attempts this, it could face constitutional challenges under the First Amendment (compelled speech) and the Fifth Amendment (due process). Litigation between Anthropic and the government is widely considered likely.



How does the supply chain risk designation affect my business?

If you are a government contractor or subcontractor, you may be required to certify that your products and workflows do not incorporate any Anthropic technology. If you currently use Claude or Anthropic’s API in any part of your operations, you should consult a technology attorney to determine your compliance obligations.



What does this mean for other AI companies?

OpenAI’s Sam Altman has publicly stated that his company shares Anthropic’s red lines on mass surveillance and autonomous weapons. Google employees have asked for similar restrictions. The outcome of this dispute will set the template for how every major AI company negotiates with the federal government going forward.





Florida Tech and Crypto




The Anthropic-Pentagon standoff is not just a national news story; it has direct legal and business implications for Florida technology companies, government contractors, AI developers, and every business that relies on AI tools in its daily operations. If you need guidance on government contracting compliance, AI acceptable use policies, digital asset protection, or technology licensing, contact The Law Offices of Carolina Nunez, P.A. today.


Call (407) 900-FIRM or click here to schedule a consultation. Our firm serves clients in Orlando, Winter Park, Daytona Beach, Kissimmee, Sanford, and throughout Central Florida.


