this post was submitted on 16 May 2026
221 points (97.8% liked)

Programming

In case you missed it, ChatGPT 5.1 had a tendency to talk about "goblins" in its responses. Supposedly this was a result of training a "nerdy" personality, but it bled into the model as a whole. Because the training run for the latest model already had this flaw, they had to add specific instructions to the system prompt for their Codex coding tool to avoid this behaviour.

Here's the full prompt from their GitHub. In fact, they repeated the goblin instructions twice, cos you know that will definitely fix it. It's an interesting read when you consider that each one of these instructions was meant to prevent some undesired behaviour: https://paste.sh/Iev3HtMe#JZ4dw_CkvJcpVmjjoy7WZnSn

More info here: https://news.northeastern.edu/2026/05/06/chatgpt-goblins-problem-ai-behavior/

OpenAI's own blog post casually explaining why they couldn't predict that their state-of-the-art model would obsess about goblins: https://openai.com/index/where-the-goblins-came-from/

[–] sudo@programming.dev 48 points 20 hours ago (4 children)

I still can't get over how the only fine-tuning you can do for an LLM is to yell at it with markdown files. We should be able to retrain local models so they can develop actual experience without prefilling the context.

[–] RamenJunkie@midwest.social 10 points 12 hours ago

How many extra tokens get burned by all this pre-filled context, I wonder.
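
You can get a ballpark yourself with OpenAI's tiktoken tokenizer. A minimal sketch; the prompt string here is just a stand-in for the real Codex system prompt from the paste above:

```python
# Rough estimate of the tokens a big system prompt burns on every request.
# Requires: pip install tiktoken
import tiktoken

# Stand-in text; paste the real Codex system prompt here to measure it.
system_prompt = "Do not mention goblins. " * 200

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer family used by GPT-4-era models
print(len(enc.encode(system_prompt)), "tokens prepended to every single request")
```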

[–] theunknownmuncher@lemmy.world 34 points 20 hours ago* (last edited 20 hours ago) (1 children)

I still can't get over how the only fine-tuning you can do for an LLM is to yell at it with markdown files.

It isn't.

We should be able to retrain local models so they can develop an actual experience without prefilling the context.

Great news, you can do exactly that.
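
A minimal sketch of what that looks like with Hugging Face's peft and trl libraries (the model name and dataset are placeholders, and the exact API shifts between trl versions):

```python
# LoRA fine-tune of an open-weight model; model and dataset are placeholders.
# Requires: pip install transformers peft trl datasets
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("imdb", split="train[:1000]")  # toy "experience" to learn from

peft_config = LoraConfig(  # train small adapter matrices instead of all the weights
    r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM"
)

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",  # any open-weight causal LM small enough for your GPU
    train_dataset=dataset,
    peft_config=peft_config,
    args=SFTConfig(output_dir="lora-out", max_steps=100),
)
trainer.train()
trainer.save_model("lora-out")  # adapter you can load on top of the base model later
```

LoRA only touches a small fraction of the parameters, which is why this fits on consumer hardware; retraining the full weights is the part that needs a datacenter.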

[–] jdr@lemmy.ml 10 points 18 hours ago (1 children)
[–] theunknownmuncher@lemmy.world 15 points 17 hours ago (3 children)

Yeah. It's proprietary. And you can't modify the Windows 11 source code, either.

[–] cecilkorik@piefed.ca 4 points 8 hours ago

But Microsoft can modify the Windows 11 source code. Or at least they used to be able to, before AI.

OpenAI should be able to re-train its poorly trained model. But of course it can't; that would take months, maybe years, of datacenter time.

So, since OpenAI can't even re-train its own models, it resorts to chastising them in the system prompt.

This is the problem. If you're trying to imply this is normal and expected, it shouldn't be. It must not be. We cannot accept this as the normal way of doing things going forward. It is awful, and painfully stupid.

[–] kurwa@lemmy.world 7 points 13 hours ago

Not with that attitude!

[–] Ziglin@lemmy.world 4 points 16 hours ago

Windows 11 isn't running in the cloud yet, though. Unless it checks that it hasn't been tampered with too much, you should just be able to modify some of its binaries (the source code obviously isn't available). With the cloud-based LLMs, that is not possible.

If you have a model on your computer you can retrain it, which is like changing a binary, just far less precise. The option of having a source-code equivalent just isn't there, beyond having the same dataset and seeds for the training program.

So I'd say it is worse than your average run-of-the-mill proprietary software.

[–] corbindallas@fedinsfw.app 7 points 19 hours ago

You can. Just not frontier models. Check out Unsloth.
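
Their quickstart boils down to something like this (a sketch based on their docs; the model name is just one of their pre-quantized examples, and the API may have drifted):

```python
# Sketch of a local LoRA fine-tune with Unsloth, per their docs.
# Requires: pip install unsloth
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # pre-quantized open-weight model
    max_seq_length=2048,
    load_in_4bit=True,  # 4-bit quantization so it fits on one consumer GPU
)

model = FastLanguageModel.get_peft_model(
    model,
    r=16,  # LoRA rank: bigger = more capacity, more VRAM
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_alpha=16,
)
# From here it's a standard trl SFTTrainer run on your own dataset.
```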

[–] eager_eagle@lemmy.world -5 points 18 hours ago (1 children)

lol how do you think LLMs are trained in the first place?

[–] thingsiplay@lemmy.ml 1 points 15 hours ago (1 children)

I think he (or she) is talking about the user of the LLM, not the creator.

[–] eager_eagle@lemmy.world 2 points 10 hours ago* (last edited 10 hours ago) (1 children)

But you can, as long as it's open-weight. Fine-tuning and training are pretty much the same process.
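
You can see that in a toy sketch with GPT-2: the "fine-tuning" below is the exact same gradient-descent loop pretraining uses, just starting from pretrained weights (the model and text are arbitrary stand-ins):

```python
# Fine-tuning is just more training: same loss, same optimizer step,
# only the starting weights differ (pretrained instead of random).
# Requires: pip install torch transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")           # small open-weight model
model = AutoModelForCausalLM.from_pretrained("gpt2")  # start from pretrained weights
opt = torch.optim.AdamW(model.parameters(), lr=5e-5)

batch = tok("No goblins were mentioned today.", return_tensors="pt")
for _ in range(3):  # the same loop, continued on new data
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    opt.step()
    opt.zero_grad()
```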

[–] thingsiplay@lemmy.ml 2 points 10 hours ago (1 children)

That still falls into the "creator" category to me, if you need to rebuild. I was drawing the distinction with an end user, comparable to applications that you download, use, and configure, instead of rebuilding the source code with your modifications.

Am I misunderstanding something here? Or is this a communication issue caused by different interpretations?

[–] howrar@lemmy.ca 1 points 36 minutes ago

If you define "user" to be a set that excludes anyone capable of modifying the weights, then by definition, no user can modify the weights.

Any criticism about users being unable to modify weights becomes vacuous, so it's not an interpretation that makes sense.