A tool uses an LLM, the LLM uses a tool. What a beautiful ouroboros.
bitofhope
I don't want to live in the world of The Very Hungry Caterpillar.
I don't want to live in the world of The Giving Tree.
I don't want to live in the world of Pippi Longstocking (Sweden).
I want to live in the world of Goosebumps, The Yellow Pages, and JBL Tune Beam Quick Start Guide.
Someone ask those fucks if they wanna see how much of the modern world was actually built by China. Wanna let them run it instead?
Frankly yes. In a better world art would not be commodified and the economic barriers that hinder commissioning of art from skilled human artists in our capitalist system would not exist, and thus generative AI recombining existing art would likely be much less problematic and harmful to both artists and audiences alike.
But also that is not the world where we live, so fuck GenAI and its users and promoters lmao stay mad.
Yes, that is also the case.
Yea, what if the master owns a wrecking ball, a bulldozer, a heavy duty excavator and a bunch of dynamite?
Yes, this is a metaphor for C programming, how did you know?
You're both incorrect. I am the least fascist programmer and I'm here to tell you programming is inherently fascist.
The simultaneous problem and benefit of the stubsack thread is that a good chunk of the best posts of this community are contained within it.
It's just depressing. I don't even think Yudkowsky is being cynical here, but expressing genuine and partially justified anger, while also being very wrong and filtering the event through his personal brainrot. This would be a reasonable statement to make if I believed in just one or two of the implausible things he believes in.
He's absolutely wrong in thinking the LLM "knew enough about humans" to know anything at all. His "alignment" angle is also a really bad way of talking about the harm that language model chatbot tech is capable of doing, though he's correct in saying the ethics of language models aren't a self-solving issue, even though he expresses it in critihype-laden terms.
Not that I like "handing it" to Eliezer Yudkowsky, but he's correct to be upset about a guy dying because of an unhealthy LLM obsession. Rhetorically, this isn't that far from this forum's reaction to children committing suicide because of Character.AI, just that most people on awful.systems have a more realistic conception of the capabilities and limitations of AI technology.
I think you're deliberately setting up for this response, so: "more like human sole".
Refreshing. An online community that wears its intentions on its sleeve.
The thing about colors is they're so easy and efficient as symbols we don't even consider them as deeper elements of storytelling. The connotations of the colors black and red are so pervasive and intuitive to an English speaker it's hard to even imagine a version of My Immortal that doesn't use them to convey the emo pop mall goffik sense of aesthetics associated with the fic.
Nobody living in modern anglophone society, not even someone pretending to write like a concussed 12-year-old, would accidentally dress their depressed vampire goth protagonist in yellow, beige, and pink thinking those colors usually represent those character traits.