this post was submitted on 16 May 2026
238 points (98.0% liked)

Programming

26951 readers
721 users here now

Welcome to the main community in programming.dev! Feel free to post anything relating to programming here!

Cross posting is strongly encouraged in the instance. If you feel your post or another person's post makes sense in another community cross post into it.

Hope you enjoy the instance!

Rules

Rules

  • Follow the programming.dev instance rules
  • Keep content related to programming in some way
  • If you're posting long videos try to add in some form of tldr for those who don't want to watch videos

Wormhole

Follow the wormhole through a path of communities !webdev@programming.dev



founded 2 years ago
MODERATORS
 

In case you missed it, ChatGPT 5.1 had a tendency to talk about "goblins" in its responses. Supposedly this was a result of training a "nerdy" personality, but it bled into the model as a whole. Because the training run for the latest model already had this flaw, they had to add specific instructions to the system prompt for their Codex coding tool to avoid this behaviour.

Here's the full prompt from their GitHub. In fact, they repeated the goblin instructions twice, cos you know that will definitely fix it. It's an interesting read if you consider that each one of these instructions was meant to prevent some undesired behaviour: https://paste.sh/Iev3HtMe#JZ4dw_CkvJcpVmjjoy7WZnSn

More info here: https://news.northeastern.edu/2026/05/06/chatgpt-goblins-problem-ai-behavior/

OpenAI's own blog post casually explaining why they couldn't predict that their state of the art model would obsess about goblins: https://openai.com/index/where-the-goblins-came-from/

[–] theunknownmuncher@lemmy.world 36 points 21 hours ago* (last edited 21 hours ago) (1 children)

> I still can't get over how the only fine tuning you can do for an LLM is yell at it with markdown files.

It isn't.

> We should be able to retrain local models so they can develop an actual experience without prefilling the context.

Great news, you can do exactly that.
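A toy sketch of what that means in practice: fine-tuning is just continuing gradient updates on new data, starting from existing weights. The one-parameter model below is purely an assumption for illustration, not how any real LLM is trained, but the mechanism is the same shape.

```python
import random

# Toy stand-in for fine-tuning: a "pretrained model" is just weights, and
# fine-tuning continues gradient descent on new data from those weights.
# One-parameter linear model -- purely illustrative, not an LLM.

def train(w, data, lr=0.01, steps=200):
    """Least-squares fit of y = w*x via stochastic gradient descent."""
    for _ in range(steps):
        x, y = random.choice(data)
        w -= lr * 2 * (w * x - y) * x  # gradient of (w*x - y)^2
    return w

random.seed(0)
base_data = [(x, 2.0 * x) for x in range(1, 5)]  # "pretraining" target: y = 2x
tune_data = [(x, 3.0 * x) for x in range(1, 5)]  # "fine-tuning" target: y = 3x

w = train(0.0, base_data)  # pretrain from scratch
w = train(w, tune_data)    # fine-tune: same loop, new data, existing weights
print(round(w, 2))         # moves to ~3.0 -- the weights adapted to the new data
```

In practice people do this with parameter-efficient methods like LoRA on top of a local checkpoint, which is exactly the "retrain local models" option being described.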

[–] jdr@lemmy.ml 11 points 19 hours ago (1 children)
[–] theunknownmuncher@lemmy.world 17 points 19 hours ago (3 children)

Yeah. It's proprietary. And you can't modify the Windows 11 source code, either.

[–] cecilkorik@piefed.ca 5 points 10 hours ago (1 children)

But Microsoft can modify the Windows 11 source code. Or at least they used to be able to, before AI.

OpenAI should be able to re-train its poorly trained model. But of course it can't; that would take months, maybe years, of datacenter time.

Now, since OpenAI can't even re-train its own models, it resorts to chastising them in their own system prompts.

This is the problem. If you're trying to imply this is normal and expected, it shouldn't be. It must not be. We cannot accept this as the normal way of doing things going forward. It is awful, and painfully stupid.

[–] theunknownmuncher@lemmy.world 1 points 1 hour ago* (last edited 1 hour ago)

> OpenAI should be able to re-train its poorly trained model. But of course it can't, that would take months, maybe years of datacenter time.

Why speak on subjects that you clearly have no knowledge or experience with?

Training is checkpointed and can be resumed without starting over. Fine-tuning a model that has already been trained is a different process from full training, and it does not take months or years of datacenter time.

> But Microsoft can modify the Windows 11 source code. Or at least they used to be able to, before AI.

Huh? It takes way more time and effort to develop new features and changes for software like Windows.
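The checkpoint point can be made concrete: a checkpoint is just the persisted training state (weights, progress counter, RNG state), and a later run loads it and keeps going. The toy model and state layout below are assumptions for illustration; real frameworks persist the same ingredients.

```python
import pickle
import random

# A "checkpoint" is just persisted training state: weights, a progress
# counter, and RNG state. A later run loads it and continues -- no
# retraining from scratch. Toy single-weight model, illustrative only.

def step(state, data, lr=0.01):
    x, y = random.choice(data)
    state["w"] -= lr * 2 * (state["w"] * x - y) * x
    state["steps"] += 1

random.seed(0)
data = [(x, 2.0 * x) for x in range(1, 5)]  # target: y = 2x
state = {"w": 0.0, "steps": 0}

for _ in range(50):                              # first training run
    step(state, data)

buf = pickle.dumps((state, random.getstate()))   # save a checkpoint

resumed, rng = pickle.loads(buf)                 # later: load the checkpoint
random.setstate(rng)
for _ in range(50):                              # resume exactly where we stopped
    step(resumed, data)

print(resumed["steps"])  # 100 -- total steps accumulated across both runs
```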

[–] kurwa@lemmy.world 7 points 15 hours ago

Not with that attitude!

[–] Ziglin@lemmy.world 5 points 17 hours ago

Windows 11 isn't running in the cloud yet though. Unless it checks to make sure it hasn't been tampered with too much, you should just be able to modify some of its binaries (the source code obviously isn't available). With the cloud-based LLMs that is not possible.

If you have a model on your computer you can retrain it, which is like patching a binary, just far less precise. The option of a source-code equivalent just isn't there, beyond having the same dataset and seeds for the training program.

So I'd say it is worse than your average run-of-the-mill proprietary software.
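The "same dataset and seeds" point above can be sketched: with the data and RNG seed pinned, a toy training run reproduces bit-identical weights, which is about as close to "source" as a trained model gets. Again a hypothetical one-parameter illustration, not a claim about real training pipelines.

```python
import random

# With dataset and seed pinned, training is reproducible: two runs yield
# bit-identical weights. That pair is the closest thing a model has to source.

def train(seed, data, lr=0.01, steps=100):
    rng = random.Random(seed)  # a private, seeded RNG -- the "seeds" part
    w = 0.0
    for _ in range(steps):
        x, y = rng.choice(data)
        w -= lr * 2 * (w * x - y) * x
    return w

data = [(x, 2.0 * x) for x in range(1, 5)]  # the "dataset" part
print(train(42, data) == train(42, data))   # True: identical sampling, identical weights
```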