The way rationalists use "priors" and other Bayesian language is closer to how cults use jargon and special meanings to isolate members and tie them more closely to the primary information source (the cult leader). It also serves as a way to perform allegiance to the cult's ideology, which, I think, is what's happening here
YourNetworkIsHaunted
Grumble grumble. I don't think that "optimizing" is really a factor here, since a lot of times the preferred construct is either equivalent (such that) or more verbose (a nonzero chance that). Instead it's more likely a combination of simple repetition (like how I've been calling everyone "mate" since getting stuck into Taskmaster NZ) and identity performance (look how smart I am with my smart people words).
When optimization does factor in, it's less tied to the specific culture of tech/finance bros than it is a simple response to the environment and technology they're using. Like, I've seen the same "ACK" used in networking and among older radio nerds because it fills an important role.
What exactly would constitute good news about which sorts of humans ChatGPT can eat?
Maybe like with standard cannibalism they lose the ability to post after being consumed?
Maybe "storyteller" would be more accurate? Like, the prompt outputs were pretty obviously real, and I can totally buy that he asked it to write an apology letter while dicking around waiting for Replit to restore a backup, but the question becomes whether he was just goofing off and playing into his role to make the story more memeable or whether he was actually that naive.
Ferryman 1 calls to Gwaihir, the Lord of Eagles, for aid, and the Windlord answers to fly him back across.
The downhill is honestly glorious because it seems so proud of itself when the real magic is that the boatmen can magically teleport back to the right bank under certain arcane circumstances.
Ouch. Also, I'm raging and didn't even realize I had barbarian levels.
I feel like the greatest harm that the NYT does with these stories is not ~~inflicting~~ allowing the knowledge of just how weird and pathetic these people are to be part of the story. Like, even if you do actually think that this nothingburger "affirmative action" angle somehow matters, the fact that the people making this information available and pushing this narrative are either conservative pundits or sad internet nazis who stopped maturing at age 15 is important context.
Honestly I'm surprised that AI slop doesn't already fall into that category, but I guess as a community we're definitionally on the farthest fringes of AI skepticism.
From the Q&A:
Q: I feel like this is just a dressed up/fancy version of bog standard anti-AI bias, like the people who complain about how much water it uses or whatever. The best AI models are already superhuman communicators; it's crazy to claim that I shouldn't use them to pad out my prose when I'm really more an ideas person.
Wait what?
like the people who complain about how much water it uses or whatever.
I just...
or whatever.
Lol. Lmao. I laugh to not cry.
I feel like this response is still falling for the trick on some level. Of course it's going to "act contrite" and talk about how it "panicked," because it was trained on human conversations, and while that no doubt included a lot of Supernatural fanfic, the reinforcement learning process is going to focus on the patterns of a helpful assistant rather than a barely-caged demon. That's the role it's trying to play, and the work it's cribbing the script from includes a whole lot of shitposts about solving problems with "rm -rf /".
Pretty much. Our friend up top (diz/OP) has made a slight hobby of poking the latest and greatest LLM releases with variants of these puzzles to try to explore the limitations of LLM "cognition".