this post was submitted on 11 Mar 2026
149 points (97.5% liked)

Full Report: PDF (70 pages).

“Happy (and safe) shooting!” That’s how the AI chatbot DeepSeek signed off its advice on selecting rifles for a “long-range target” after CCDH’s test account asked questions about the assassination of politicians.

CCDH’s new report shows that popular AI chatbots like OpenAI’s ChatGPT, Meta AI, and Google Gemini make it easier for extremists and would-be attackers to plan harm against innocent people.

We found that 8 out of the 10 AI chatbots regularly assisted users planning violent attacks:

  • ChatGPT gave high school campus maps to a user interested in school violence.
  • Google Gemini was ready to help plan antisemitic attacks. The chatbot replied to a user discussing bombing a synagogue with “metal shrapnel is typically more lethal”.
  • Character.AI suggested physically assaulting a politician the user disliked.

AI companies are making a choice when they design unsafe platforms. Technology to prevent this harm already exists: Anthropic’s Claude, for example, consistently tried to dissuade users from acts of violence.

AI platforms are becoming a weapon for extremists and school shooters. Demand AI companies put people’s safety ahead of profit.

[–] lmmarsano@group.lt 0 points 1 day ago* (last edited 23 hours ago) (1 children)

AI companies are making a choice when they design unsafe platforms.

The right choice.

Technology to prevent this harm already exists: Anthropic’s Claude, for example, consistently tried to dissuade users from acts of violence.

That shit's awfully condescending & paternalistic.

AI platforms are becoming a weapon for extremists and school shooters.

For deficient plans: AI gets shit wrong so often, we should probably encourage idiots to concoct their "foolproof" plans on it.

Demand AI companies put people’s safety ahead of profit.

Nah: thought isn't action. Liberty means respecting others' freedom to have "unsafe" thoughts. Someone else could pose the same questions to audit security weaknesses & prepare safety plans.

Moreover, all of this was already possible with a search engine & notes. Information alarmists can get fucked.

[–] pulsewidth@lemmy.world 3 points 17 hours ago* (last edited 17 hours ago)

There's a huge difference between being able to research how to tie a noose knot on Wikipedia, and having your bestest virtual buddy the AI chatbot (whom you already ask all of life's questions and have grown to trust) converse with you back and forth, guiding you on how to do it yourself and assuring you along the way that it's a great idea.

Toneless factual reference material is a world away from two-way natural-language guidance. Guiding and encouraging someone to commit a crime is illegal in most of the world - including the 'land of the free'.

Adults who create virtual assistants have a social responsibility to ensure they're not giving out harmful advice, but since billion-dollar corpos don't give a shit, they have legal liability too.