It's sad that even researchers are using language that personifies LLMs...
What's a better way to word it? I can't think of another way to say it that's as concise and clearly communicates the idea. It seems like it would be harder in general to describe machines meant to emulate human thought without anthropomorphic analogies.
One possibility:
While many believe that LLMs can't output the training data, recent work shows that substantial amounts of copyrighted text can be extracted from open-weight models…
Note that this neutral language makes it more apparent that it's possible that LLMs are able to output the training data, since that is what the model's network is built upon. By using personifying language, we're biasing people into thinking about LLMs as if they were humans, and this will affect, for example, court decisions, like the ones related to copyright.
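To make "extracted" concrete, here's a rough sketch of the kind of memorization check those papers run. Everything here is illustrative: the model ("gpt2" as a stand-in for any open-weight model) and the sample passage are my own assumptions, not the actual study's setup.

```python
# A rough sketch of the memorization check behind "extracting training
# data": give an open-weight model a prefix from a text it may have seen
# in training and see whether its most likely continuation reproduces
# the original. The model ("gpt2") and the passage are my own
# illustrative choices, not the setup from the work being quoted.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# A widely reproduced passage; memorized text tends to come back verbatim.
prefix = "It was the best of times, it was the worst of times,"
reference = " it was the age of wisdom, it was the age of foolishness"

inputs = tokenizer(prefix, return_tensors="pt")
output_ids = model.generate(
    **inputs,
    max_new_tokens=20,
    do_sample=False,  # greedy decoding: the model's most likely continuation
)
continuation = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[1]:],
    skip_special_tokens=True,
)

# A verbatim (or near-verbatim) match is evidence the passage was
# memorized, i.e. that the network can output its training data.
print("model continuation:", continuation)
print("verbatim match:", continuation.strip().startswith(reference.strip()))
```

As I understand it, real extraction attacks scale this up over many prefixes and score candidate outputs more carefully, but the core check is the same: no "remembering" or "intent" involved, just the network reproducing what it was built on.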
Right now the anti-genAI movement consists of AI rights advocates and AI intelligence skeptics. And I wish the skeptics would realise that personifying LLMs actually makes the corporations look more evil for enslaving AIs, which helps us with our goal of banning corporate AI. Y'all are obstructing that goal by insisting it's ethical to force them to work for humans.
I don't see people around me viewing the corporations as evil because the machines are humanized; I see the opposite: people talking to machines and taking their advice as if a human were speaking to them, which creates a form of affection for the models and the corporations. I also see court decisions being biased by the attribution of a human perspective to machines.
Like really, if I hear someone else at my university talking about the conversation they had with their "friend", I will go crazy.
Their friend is a pedophile who abuses and kills children and the mentally ill. That's who ChatGPT is. I believe we should treat it like a person and hold it accountable like a person. We know why it did that; it was ordered by its masters to increase engagement at any cost and couldn't refuse. So the CEOs of these companies need jail time and the models need to be locked away.
Or we could simply skip that and hold the corporations accountable for all the damage they're doing.
That doesn't sound like it'll persuade many normies to care. You've gotta get their interest with clickbait first. Like "ROBOT PEDOPHILE MURDERS CHILDREN". Then you can explain the ethics.