Re: Anthropic 'a supply chain risk'?
Posted: Wed Feb 25, 2026 4:33 pm
by Rideback
Joohn Choe:
"Here's what Other 98% isn't telling you:
1. there is no lack of attention, I myself literally gave attention to this nine days ago
2. Anthropic grosses $14 BILLION A YEAR, they really don't care that much about $200 million
3. every other major AI company is already 100% OK with autonomous weapons and mass surveillance, none of that stops if Anthropic refuses to play ball
4. Claude is also widely regarded as one of the best models currently in use on classified systems, and is the only one approved for some settings
5. the supply chain threat designation is the real threat here, because that would mean that some of the largest companies in the U.S. - eight out of ten of the largest companies use Anthropic products - would have to make massive changes to purge Anthropic software out of all their systems if they want to contract with the Department of Defense
6. and if they can get away with doing that to a powerful, rich AI company by just calling it "woke" enough times, what stops them from doing that to literally ANYONE
Also, the text is copied, word-for-word with no attribution, from a tech & politics reporter named Karly Kingsley on Twitter.
Who, fun fact, follows me on there.
Re: Anthropic 'a supply chain risk'?
Posted: Wed Feb 25, 2026 3:02 pm
by PAL
Anthropic - do the right thing. Destroy it. No access can be had. Oh, but they have so much to lose. If they don't do as demanded, think of all the money that will go down the tubes. No luxury cars, boats, or houses for them!
And of course it will be said it is not that simple.
Re: Anthropic 'a supply chain risk'?
Posted: Wed Feb 25, 2026 11:30 am
by Rideback
"Hegseth Gives Anthropic 72 Hours to Surrender its Soul…
The headlines today are creating a confusing, high-stakes fog of war around Anthropic. On Tuesday, February 24, the company released a blog post announcing a major update to its "Responsible Scaling Policy" (RSP). In the fast-moving world of AI ethics, "updating a policy" is often corporate shorthand for "lowering the bar."
Critics are already pointing to this as a cave-in, but the reality is more like a tactical retreat into a very dangerous corner. Here is the breakdown of why this feels like a shift in the ground beneath our feet.
Anthropic has long positioned itself as the ethical holdout, the company with a "soul" that would rather lose money than lose its moral compass. But in their Tuesday announcement, they admitted that their previous, self-imposed guardrails were potentially hindering their ability to compete in a market where OpenAI and Google are moving at breakneck speed. They are moving away from hard, fixed limits toward a more "fluid" safety framework that can be adjusted as the market demands.
The timing is what makes this feel like a surrender. This policy shift landed on the exact same day that CEO Dario Amodei sat down for a "cordial" but tense meeting with Defense Secretary Pete Hegseth.
During that meeting, Hegseth reportedly treated the company's ethical guidelines like a bothersome suggestion. He compared the situation to Boeing, arguing that when the government buys a plane, the manufacturer doesn't get to tell the Pentagon how to fly it. He gave Amodei a Friday deadline: sign a document granting the military unrestricted access to Claude or face the consequences.
Those consequences are not just about a lost $200 million contract. Hegseth dangled two specific, "nuclear" options. The first is designating Anthropic a "supply chain risk." That label, usually reserved for hostile foreign entities, would effectively ban every other tech giant from using Claude if they want to keep their own government deals. It would be a corporate death sentence by bureaucracy.
The second threat is the invocation of the Defense Production Act. This would allow the government to essentially seize control of Anthropic's technology and use it however they see fit, regardless of the company's "Constitutional AI" principles. It is a "work for us or we will take it" ultimatum that leaves very little room for ethical maneuvering.
As of Wednesday morning, Anthropic is still officially holding its "red lines" on two fronts: no mass surveillance of U.S. civilians and no fully autonomous "kill" decisions. A spokesperson stated they are continuing "good faith conversations" to support national security within responsible limits.
But the new, "fluid" safety policy suggests the company is preparing the legal and corporate infrastructure to bend. If they can rewrite their own rules on a Tuesday, the Pentagon knows they can be pressured to rewrite them again on a Friday.
The question isn't just about Anthropic anymore. It's about whether "safety-forward" AI can even exist when the largest customer in the world is threatening to label ethics as a national security risk. For now, the "Soul of AI" is still there, but it is currently being held at gunpoint by the Department of Defense.
I don't put this behind a paywall. I never will. But if my work means something to you, the links are below. Every contribution keeps me independent and keeps this going."
https://coff.ee/brentmolnar
https://substack.com/@brentmolnar
https://PayPal.me/brentmolnar
https://venmo.com/u/Brent-Molnar
https://cash.app/$BrentMolnar
#AI #Anthropic #Pentagon #PeteHegseth #Privacy #Ethics #Claude #VoiceOfReason
Brent Molnar
Re: Anthropic 'a supply chain risk'?
Posted: Tue Feb 17, 2026 9:34 am
by Rideback
Like most things in life, even the best-conceived and best-engineered tools are only as successful as the end user's capabilities. Fresh in my mind is last week's episode of the DoD/W's newly developed laser being loaned out to an incompetent CBP to play with.
Re: Anthropic 'a supply chain risk'?
Posted: Tue Feb 17, 2026 6:15 am
by mister_coffee
One thing that is missing here is that the overwhelming sentiment of engineers and scientists working in the field most anywhere, not just at Anthropic, is that developing AI powered weapons is a Very Bad Idea. And there are serious mathematical reasons why AI-enhanced mass surveillance probably can't even work very well.
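One of the "serious mathematical reasons" alluded to here is the classic base-rate problem: when genuine targets are extremely rare, even a very accurate classifier buries them under false positives. A quick back-of-the-envelope sketch (all numbers below are illustrative assumptions, not real figures for any actual system):

```python
# Illustrative base-rate sketch: why rare-event mass screening drowns
# in false positives. Every number here is an assumption for the example.
population = 330_000_000   # people screened
prevalence = 1e-5          # fraction who are genuine targets (~3,300 people)
tpr = 0.99                 # classifier's true-positive rate
fpr = 0.01                 # classifier's false-positive rate (99% specificity)

targets = population * prevalence
true_positives = tpr * targets
false_positives = fpr * (population - targets)

# Positive predictive value: the chance a flagged person is a real target.
ppv = true_positives / (true_positives + false_positives)
print(f"flagged: {true_positives + false_positives:,.0f}")
print(f"PPV: {ppv:.2%}")  # comes out around 0.1%
```

Under these assumptions a 99%-accurate system flags over three million people to catch a few thousand, roughly a thousand false alarms per real target, which is why accuracy alone doesn't make mass surveillance workable.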
All of the big players in this space are at near-parity. And any differences between them are more about packaging and marketing than their core technologies.
We are a long way from fulfilling all of the hype and fear surrounding this tech. And the more I use it and the more I learn about it the more convinced I am that it would be foolish to trust AI for anything serious.
On the other hand, the Ukrainians are apparently using AI tools in both battle management and autonomous weapons targeting with notable success. Although the details of how they made it all work are very murky.
Anthropic 'a supply chain risk'?
Posted: Mon Feb 16, 2026 3:44 pm
by Rideback
"The Pentagon is threatening to designate Anthropic a "supply chain risk", a punishment normally reserved for foreign adversaries, after months of failed negotiations over AI safeguards collided with the revelation that Claude was used during the January 3, 2026 military raid that captured Venezuelan President Nicolás Maduro.
This is the most consequential clash yet between Silicon Valley's AI safety commitments and the U.S. military's demand for unrestricted military AI, and it's worth examining to see what's going on and what it could mean.
TIMELINE: HOW WE GOT HERE
Claude's usage policy, as established in June 2024, sets forth several restrictions relevant to mass surveillance. Per the June 2024 guidelines (Anthropic), Claude should not be used to:
- Make determinations on criminal justice applications, including making decisions about or determining eligibility for parole or sentencing
- Target or track a person’s physical location, emotional state, or communication without their consent, including using our products for facial recognition, battlefield management applications or predictive policing
- Utilize models to assign scores or ratings to individuals based on an assessment of their trustworthiness or social behavior without notification or their consent
- Build or support emotional recognition systems or techniques that are used to infer emotions of a natural person, except for medical or safety reasons
- Analyze or identify specific content to censor on behalf of a government organization
- Utilize models as part of any biometric categorization system for categorizing people based on their biometric data to infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation
- Utilize models as part of any law enforcement application that violates or impairs the liberty, civil liberties, or human rights of natural persons
A section is also included on weapons. Claude users are prohibited from using Claude to:
- Produce, modify, design, or illegally acquire weapons, explosives, dangerous materials or other systems designed to cause harm to or loss of human life
- Design or develop weaponization and delivery processes for the deployment of weapons
- Circumvent regulatory controls to acquire weapons or their precursors
- Synthesize, or otherwise develop, high-yield explosives or biological, chemical, radiological, or nuclear weapons or their precursors, including modifications to evade detection or medical countermeasures
It is notable in this context that although Claude's guidelines include a section specifically titled "Do Not Compromise Computer or Network Systems," Claude Code was nonetheless used in a hacking campaign by the People's Republic of China around September 2025 (Anthropic).
November 7, 2024 - Anthropic, Palantir, and AWS announced a three-way partnership to deploy Claude 3 and 3.5 models on Impact Level 6 (IL6) classified networks - the highest security classification for DoD systems (NASDAQ).
July 14, 2025 - Anthropic announced a two-year prototype Other Transaction Agreement (OTA) with a $200 million ceiling, awarded by the DoD's Chief Digital and Artificial Intelligence Office (CDAO), led by Doug Matty (Anthropic). The contract covered prototyping frontier AI capabilities for national security, adversarial AI risk forecasting, and technical exchanges. At the same time, CDAO awarded similar $200 million contracts to OpenAI, Google, and xAI - all four major frontier AI labs (AI.mil).
August 15, 2025 - Anthropic updated its Usage Policy, explicitly banning the use of Claude to aid in the development of chemical, biological, radiological, or nuclear weapons and explicitly forbidding the use of Claude to analyze biometric data to infer characteristics like race or religion, or for emotional analysis in interrogation contexts (Anthropic).
Also in August 2025, the CIA "quietly sent a small unit into Venezuela with the goal of providing 'extraordinary insight' into Maduro's movements, according to a person with knowledge of the matter. Even his pets were known to U.S. intelligence agents," Dan "Raizin" (yes, that is his nickname) Caine, chairman of the Joint Chiefs of Staff, said at a news conference (NBC).
January 3, 2026 - The U.S. launched Operation Absolute Resolve. The military bombed infrastructure across northern Venezuela to suppress air defenses while 200+ special operators from Delta Force and the FBI attacked Maduro's compound at Fort Tiuna in Caracas. Over 150 military aircraft launched from 20+ sites. Delta Force breached the residence; Maduro and his wife Cilia Flores were "taken completely by surprise" and flown to the USS Iwo Jima, then to Stewart Air National Guard Base in New York, then by helicopter to Manhattan (NBC).
Claude was used during the active operation. While Axios could not confirm the precise role Claude played in the capture, the Wall Street Journal reported that Claude was used during the operation itself, not just in preparations for it. Following the raid, an employee at Anthropic asked a counterpart at Palantir how Claude had been used, according to people familiar with the matter (WSJ; Axios).
January 15, 2026 - Secretary of War Pete Hegseth told a crowd at SpaceX headquarters, apparently referring to Claude (Defense Dept):
"We will not employ AI models that won't allow you to fight wars. We will judge AI models on this standard alone; factually accurate, mission relevant, without ideological constraints that limit lawful military applications. Department of War AI will not be woke. It will work for us. We're building war ready weapons and systems, not chatbots for an Ivy League faculty lounge."
February 13, 2026 - The Wall Street Journal broke the story that Claude was used during the Maduro raid (WSJ). This was the first public confirmation of a commercial AI model being used in a classified military combat operation.
February 15, 2026 - Axios published the first exclusive, "Pentagon threatens to cut off Anthropic in AI safeguards dispute." The article detailed months of negotiations in which the Pentagon demanded the right to use Claude for "all lawful purposes" - presumably including lethal operations - while Anthropic insisted on two hard limits:
1. No mass surveillance of Americans
2. No fully autonomous weapons
Pentagon officials described Anthropic's restrictions as "unduly restrictive" with "all sorts of gray areas." A senior War Department official called Anthropic the most "ideological" of the AI labs. The article revealed that OpenAI, Google, and xAI had all agreed to remove their safeguards for unclassified military systems, and one of the three had already agreed to the full "all lawful purposes" standard (Axios).
February 16, 2026 - Axios published the escalated follow-up by Dave Lawler, Maria Curi, and Mike Allen: "Pentagon threatens to label Anthropic's AI a 'supply chain risk.'" The article revealed that Hegseth was "close" to designating Anthropic a supply chain risk - a penalty usually reserved for foreign adversaries - which would require every company doing business with the Pentagon to certify they don't use Claude. A senior Pentagon official stated: "It will be an enormous pain in the arse to disentangle, and we are going to make sure they pay a price for forcing our hand like this." Chief Pentagon spokesman Sean Parnell confirmed: "The Department of War's relationship with Anthropic is being reviewed" (Axios).
WHAT HAPPENS NOW
Before this escalated with Hegseth, Anthropic could realistically have simply walked away.
Anthropic already faces internal pressure from engineers over the Pentagon work, and the $200 million contract is about 1.4% of Anthropic's reported $14 billion in annual revenue. Claude is also the only AI model currently on classified networks, and senior administration officials admit that other models are "just behind" in government applications. All of this points to strong leverage on Anthropic's part.
The supply chain risk designation puts the situation at a new level, however. The potential impact extends far beyond the contract's cancellation - effectively, it would force eight of the ten largest U.S. companies, many of which also do business with the Pentagon, to purge Claude from all their systems (Axios).
We saw a similar drama with law firms early in the Trump administration. Paul, Weiss; Skadden; Kirkland & Ellis; Cadwalader; and Simpson Thacher all caved, committing to a combined $1 billion in pro bono legal work on behalf of the administration.
But Perkins Coie, Jenner & Block, and WilmerHale all defied the Trump administration, sued, and won.
The punitive nature of the supply chain risk designation, Anthropic's flamboyant leadership, and the apparent sentiment among the rank-and-file developers who work on Claude all make it an open question how this will shake out."
Joohn Choe
---