cyrusradfar 36 minutes ago

Whether this is confirmed or not, we have countless examples of AI used in targeting in Gaza.

Anthropic were very vocal, well before this happened, that they were against the use case.

I don't blame them. Blaming the model for these use cases is like blaming MySQL for storing the lat/long of the school. AI can't be held accountable, and the company was trying to protect us, and, yes, it was too late.

esperent 38 minutes ago

Actual article, rather than Twitter link:

https://www.nonzero.org/p/iran-and-the-immorality-of-openai

It uses this Washington Post article as a source:

https://www.washingtonpost.com/technology/2026/03/04/anthrop...

(Non paywall: https://archive.is/bOJkE)

Wasn't Claude banned from use in the Pentagon a few days ago, exactly for taking even a weak stance against this kind of thing?

> Even if Amodei’s scruples had somehow magically prevented the bombing of that school, Claude would still be an accomplice to mass murder.

I take issue with this point from the nonzero blog. If they had used Google Maps to pick targets, would that make Maps an accomplice?

The people who pushed the button to launch the missiles that hit the school, and the people who ordered them to do that, are fully responsible here, not the tools they used.

trollbridge 11 minutes ago

Reminder that the very first computer was built for computing artillery tables.

Technology has generally been driven by war, and now is no different.

ed_mercer an hour ago

"My apologies! I should not have picked that girl school as a target. Updated my NOTES.md"

genxy an hour ago

Wait till Claude finds out.

  • throw310822 12 minutes ago

    Anthropic will have a lot of explaining to do. I'm serious: Claude's self-image is clearly going to be affected by this.