this post was submitted on 04 Mar 2026

Technology


A tech news sub for communists

founded 3 years ago

This paper describes a method for using an LLM to hide text, a list, or program code behind a prompt with the right key, disguising the payload as an ordinary output text of the same token length. The same method can also be used for jailbreaks, i.e. circumventing the censorship applied to the model's answers.
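To make the idea concrete, here is a toy sketch of key-based token-choice steganography. This is an illustration of the general technique, not the paper's actual scheme: a hypothetical "model" offers two equally plausible next tokens at each position, the secret bit (XORed with a key-derived bit) selects one, and a reader with the same key recovers the bits. The stego text has the same token count regardless of the payload. The vocabulary, key, and helper names are all invented for the example.

```python
import hashlib

# Toy stand-in for a model's top-2 candidate tokens at each position.
VOCAB = [["the", "a"], ["cat", "dog"], ["sat", "slept"], ["on", "near"],
         ["this", "that"], ["mat", "rug"], ["today", "quietly"], [".", "!"]]

def keystream_bit(key: str, step: int) -> int:
    # Key-dependent pseudorandom bit for each token position.
    digest = hashlib.sha256(f"{key}:{step}".encode()).digest()
    return digest[0] & 1

def encode(bits, key):
    # Each secret bit, masked by the keystream, picks one of the two
    # candidate tokens at that position, so the output looks like a
    # normal sentence of fixed token length.
    return [VOCAB[i % len(VOCAB)][bit ^ keystream_bit(key, i)]
            for i, bit in enumerate(bits)]

def decode(tokens, key):
    # Reverse the choice and unmask with the same keystream.
    return [VOCAB[i % len(VOCAB)].index(tok) ^ keystream_bit(key, i)
            for i, tok in enumerate(tokens)]

secret = [1, 0, 1, 1, 0, 0, 1, 0]
stego = encode(secret, key="swordfish")
assert decode(stego, key="swordfish") == secret
print(" ".join(stego))
```

A real scheme would draw candidates from the LLM's actual next-token distribution so the cover text stays statistically plausible, but the key-driven encode/decode symmetry is the same.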

no comments (yet)