Carlos KiK

Anthropic Said No to the Pentagon. Then OpenAI Said Yes.

There are moments when an entire industry fractures along a fault line nobody saw coming. This was one of them.

Anthropic CEO Dario Amodei publicly refused to allow Claude in mass surveillance or autonomous weapons applications. Clear. Unambiguous. The kind of statement that either builds trust or kills revenue. Possibly both.

Days later, OpenAI signed a deal allowing its models in classified military contexts.

ChatGPT uninstalls spiked 295%. Claude shot to #1 in the App Store.

What just happened

Two companies, one founded by former employees of the other, made diametrically opposed decisions on the same question: should AI be used to kill people?

Anthropic said no. OpenAI said yes (or at least: “let’s talk about it in a classified setting”).

The market responded instantly. Not with think pieces. With uninstalls.

Why this matters more than the usual corporate ethics theater

Most corporate ethics statements are press releases disguised as principles. You can tell because they cost nothing. “We believe in responsible AI” is free. Saying no to Pentagon money is not free. That is the difference.

Amodei’s decision has a price tag. Defense contracts are among the most lucrative in tech. Turning that down is not a PR move. It is a business decision with real revenue consequences.

Whether you agree with his position or not, the fact that it cost something makes it worth paying attention to.

The question nobody is asking

Forget the ethics debate for a moment. Think about it from a purely technical perspective.

These models hallucinate. They make confident mistakes. They can be jailbroken by a determined teenager with a weekend.

And someone wants to put them in weapons systems?

I am not making a moral argument here. I am making an engineering argument. Would you deploy software that sometimes confidently generates wrong answers into a system where “wrong answer” means “wrong target”?

I have been shipping software for 30 years. The first rule I learned: never deploy what you cannot fully control. We are nowhere near full control of these models. We are not even close to understanding why they sometimes fail.

Maybe the Pentagon should talk to an engineer before they talk to a CEO.


Sources: NeuralBuddies, WorldPolicyHub


