this post was submitted on 31 Mar 2026
452 points (99.8% liked)

Technology

[–] Encephalotrocity@feddit.online 299 points 2 days ago (8 children)

Perhaps the most discussed technical detail is the "Undercover Mode." This feature reveals that Anthropic uses Claude Code for "stealth" contributions to public open-source repositories.

The system prompt discovered in the leak explicitly warns the model: "You are operating UNDERCOVER... Your commit messages... MUST NOT contain ANY Anthropic-internal information. Do not blow your cover."

Laws should have been put in place years ago to make it so that AI usage needs to be explicitly declared.

[–] UnderpantsWeevil@lemmy.world 1 points 18 hours ago

Laws written by whom?

Legislators were gobbled up by tech lobbyists back under Bush and Obama. Nobody was going to pitch legislation that ran afoul of trillion-dollar rampaging corporate behemoths.

[–] merc@sh.itjust.works 122 points 1 day ago (1 children)

The system prompt discovered in the leak explicitly warns the model: "You are operating UNDERCOVER... Your commit messages... MUST NOT contain ANY Anthropic-internal information. Do not blow your cover."

This is so incredibly stupid.

You've tried security.

You've tried security through obscurity.

Now try security through giving instructions to an LLM via a system prompt to not blow its cover.

[–] a4ng3l@lemmy.world 14 points 1 day ago

In Europe we have the AI Act which, as of August, will introduce some form of transparency obligations. Not perfect, obviously, but a start. It probably won't be followed by the rest of the world, though, so like the GDPR it will be eroded by others' interests through lobbying, but at least we try.

[–] pemptago@lemmy.ml 9 points 1 day ago

Haven't read the article and have limited knowledge of AI, but I wonder if they do this for reinforcement learning: OSS PR responses could be used to label different weights and models, using even more free labor to train them.

[–] JohnEdwa@sopuli.xyz 8 points 1 day ago (1 children)

With how massive a computer-science field artificial intelligence is, and how much of it already is or is being added to every piece of software that exists, a label like that would be as useless as California's Prop 65 cancer warnings.

Do you use a mobile keyboard that supports swipe typing and has autocorrect? Remember to mark everything you write as being AI assisted.

[–] mrbutterscotch@feddit.org 4 points 1 day ago

Well yes, if you let autocorrect write a code contribution, I think you should label that contribution as AI.

That doesn't sound like it's saying "don't identify yourself." That it's called Claude isn't internal information, so that instruction isn't doing what you're saying. There must be more instructions.

[–] GhostlyPixel@lemmy.world 1 points 1 day ago

What internal info are they worried about leaking in a commit message? If you don’t want it to add the standard Claude attribution, you can completely disable it in the settings, or just write your own commit messages.
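
For context, the attribution being disabled here is the trailer Claude Code appends to commit messages. A minimal sketch of turning it off, assuming the `includeCoAuthoredBy` key in Claude Code's settings file (check the current docs for the exact key name):

```json
{
  "includeCoAuthoredBy": false
}
```

Placed in `~/.claude/settings.json` (user-wide) or a project's `.claude/settings.json`, this drops the "Co-Authored-By: Claude" trailer from generated commit messages, so nothing Claude-specific ends up in the commit at all.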