In case you missed it, ChatGPT 5.1 had a tendency to talk about "goblins" in its responses. Supposedly this was a result of training a "nerdy" personality, but it bled into the model as a whole. Because the training run for the latest model already had this flaw, they had to add specific instructions to the system prompt for their Codex coding tool to avoid this behaviour.

Here's the full prompt from their GitHub. In fact, they repeated the goblin instructions twice, cos you know that will definitely fix it. It's an interesting read if you consider that each one of these instructions was meant to prevent some undesired behaviour: https://paste.sh/Iev3HtMe#JZ4dw_CkvJcpVmjjoy7WZnSn
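For anyone unfamiliar with the mechanism, here's a rough sketch of how that kind of patch works. This is not how Codex is wired internally (the real prompt is in the paste above); the model name and instruction text below are placeholders, and the call is just the standard OpenAI Python SDK showing a system message riding along with every request instead of retraining the weights:

```python
# Minimal sketch: steering behaviour with a system message instead of retraining.
# Model name and instruction text are placeholders, not the actual Codex prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        # The "patch": an instruction prepended to every request.
        {"role": "system", "content": "Do not mention goblins."},
        {"role": "user", "content": "Write a function that sorts a list."},
    ],
)
print(response.choices[0].message.content)
```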

More info here: https://news.northeastern.edu/2026/05/06/chatgpt-goblins-problem-ai-behavior/

OpenAI's own blog post casually explaining why they couldn't predict that their state of the art model would obsess about goblins: https://openai.com/index/where-the-goblins-came-from/

eager_eagle@lemmy.world 2 points 11 hours ago (last edited 11 hours ago)

But you can, as long as it's open weight. Fine-tuning and training are pretty much the same process.
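To make that concrete, here's a minimal sketch (placeholder model and data, not anyone's actual training setup): fine-tuning an open-weight model is the same forward/backward/update loop as training, just starting from the released checkpoint instead of random weights.

```python
# Minimal sketch of fine-tuning an open-weight causal LM with Hugging Face
# Transformers. "gpt2" and the toy batch are placeholders; the point is that
# the loop below is the same gradient descent used to train the model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder: any open-weight causal LM works
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)  # start from released weights
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# Toy "dataset": in practice this would be your own domain text.
batch = tok(["example text to adapt the model to"], return_tensors="pt")

model.train()
for _ in range(3):  # a few gradient steps, same loop as pretraining
    out = model(**batch, labels=batch["input_ids"])  # next-token loss
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```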

thingsiplay@lemmy.ml 2 points 11 hours ago

That still falls into the "creator" category for me, if you need to rebuild. I was drawing a distinction from an end user, comparable to applications that you download, configure, and use, rather than rebuilding the source code with your own modifications.

Am I misunderstanding something here? Or is this a communication issue caused by different interpretations?

howrar@lemmy.ca 1 point 2 hours ago

If you define "user" to be a set that excludes anyone capable of modifying the weights, then by definition, no user can modify the weights.

Any criticism about users being unable to modify weights becomes vacuous, so it's not an interpretation that makes sense.

thingsiplay@lemmy.ml 1 point 53 minutes ago

I wasn't criticizing at all, just trying to define what I mean by creator and user. You were talking about "how do you think LLMs are trained" and I told you that the user was probably not thinking about who trains the LLMs, or fine-tunes them, as you said. And yes, fine-tuning the open weights falls into the creation process, as the weights are rebuilt. That is not the same as an end user who downloads the final usable product. And yes, it makes sense.