this post was submitted on 25 Feb 2026
551 points (99.5% liked)

Funny

13962 readers
240 users here now

General rules:

Exceptions may be made at the discretion of the mods.

founded 2 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
[–] Tar_alcaran@sh.itjust.works 57 points 2 days ago* (last edited 2 days ago) (1 children)

Also pictured here: Anthropic stating out loud their models will just give out all the "secret" and "secured" internal data to anyone who asks.

Of course, that's by design. LLMs can't have any barrier between data and instructions, so they can never be secure.

[–] Hackworth@piefed.ca 20 points 2 days ago

Distillation is using one model to train another. It's not really about leaking data.

Claude was used to generate censorship-safe alternatives to politically sensitive queries like questions about dissidents, party leaders, or authoritarianism, likely in order to train DeepSeek’s own models to steer conversations away from censored topics

But you're right, prompt injection/jailbreaking is still trivial too.