this post was submitted on 16 Mar 2026
256 points (94.8% liked)

[Image: Nvidia's promo screenshot for the DLSS blog post]

I'm completely speechless. This looks so terrible I thought it was a joke, but apparently Nvidia released these demos to impress people. DLSS 5 runs the entire game through an AI filter, making every character look like they're being run through an ultra-realistic beauty filter.

The photo above is used as the promo image for the official blog post by the way. It completely ignores artistic intent and makes Grace's face look "sexier" because apparently that's what realism looks like now.

I wouldn't be so baffled if this was some experimental setting they were testing, but they're advertising this as the next gen DLSS. As in, this is their image of what the future of gaming should be. A massive F U to every artist in the industry. Well done, Nvidia.

[–] LurkingLuddite@piefed.social 23 points 16 hours ago* (last edited 16 hours ago) (3 children)

Having a number that relates words to other words is not understanding words. Stop believing the hype for fuck's sake. What they 'know' is NOT knowledge. They do not know anything. Period.

There is a reason they start to fail when trained on other slop: they don't know what any of it means!

Their 'knowledge' comes from the basic weights of what word is most likely to follow. Period. The importance of that weight comes from humans. It is not intrinsic knowledge even after training. It is pure association, and not association like you or I do word association.
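For what it's worth, the purely associative "which word is most likely to follow" weighting described above can be sketched in a few lines. This is a toy bigram counter with a made-up corpus, nothing like a real LLM's learned weights, but it shows what prediction-by-association looks like with no meaning attached:

```python
from collections import Counter, defaultdict

# Toy corpus; the model will only ever see which word follows which.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count next-word associations: follows[prev][next] = how often.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    # Pick the highest-weight association. The counter has no idea
    # what "cat" means, only what tends to come after "the".
    return follows[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" (follows "the" twice in the corpus)
```

The counts here play the role the commenter assigns to weights: pure association, with any importance supplied by whoever reads the output.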

[–] Whitebrow@lemmy.world 15 points 15 hours ago

I've seen a bit of a rise in that sort of people since moltbook or whatever it's called emerged, trying to sucker people into believing the random bullshit generator is sentient or cognizant of its assets in any way.

What's worse, homie said "nuh-uh, it's not statistical probability" and then proceeded to describe a statistical probability mesh.

It might help a bit if we all stopped slapping the AI term on everything and started calling things what they are, such as scripting, large language models, cronjobs, etc.

Trying to argue with those people just makes me sad and tired :(

[–] Lojcs@piefed.social -4 points 9 hours ago* (last edited 7 hours ago) (1 children)

Saying that an LLM knows words is not a value judgement. It doesn't mean "LLMs are sentient" or "LLMs are smart like humans". It doesn't imply they have real-world experiences. It's just a description of what they do. That word has already been used to describe much more basic kinds of information and functionality in computers. What makes it so offensive now?

There is a reason they start to fail when trained on other slop: they don't know what any of it means!

If you taught children slop at school they would not get far either. Although training LLMs on LLM output is more akin to getting rid of books and relying on what teachers remember to teach the students.

The importance of that weight comes from humans. It is not intrinsic knowledge even after training.

It comes from the LLM and not from the outside; that's what intrinsic means. How is it not intrinsic knowledge? I think you mean to say that without humans to read it, an LLM's output holds no inherent value. That is true, and nobody is claiming that it does. LLMs don't derive pleasure from talking like humans do, so the only value LLM output has comes from the person reading it.

Their 'knowledge' comes from the basic weights of what word is most likely to follow. It is pure association, and not association like you or I do word association.

LLM weights are anything but basic, but regardless, this is also true, and lunnrais said as much:

They do know the meaning of words, but only in relation to other words.

The difference between human knowledge and llm knowledge is that an llm's entire universe is words while humans understand words in relation to real world experiences. Again, nobody is claiming those two understandings are equivalent, just that they exist.

Also, on the point of statistics: I think the statistics people have in mind and the statistics used in LLMs are vastly different. It is true that an LLM finds which word is most likely to come next, but how it does that is not a classical statistical method. An LLM itself is a statistical model. When one says an LLM 'knows' or 'understands', they mean it has captured abstract information in an incomprehensibly complex neural network, not dissimilar to how we do it. That it can only use that information for word prediction doesn't change the fact that it has captured information beyond what simple word prediction requires.
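To make "an LLM itself is a statistical model" concrete: the network's final step turns learned scores (logits) into a probability distribution over the next token via a softmax. The vocabulary and logit values below are invented for illustration; in a real LLM the logits come out of the deep network, not from classical counting:

```python
import math

# Made-up scores the network might assign to each candidate next token.
vocab = ["queen", "king", "banana"]
logits = [2.1, 1.3, -0.5]

# Softmax: exponentiate and normalize, yielding a probability distribution.
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

for token, p in zip(vocab, probs):
    print(f"{token}: {p:.3f}")
```

The statistics live in how the distribution is produced, not in any table of counts, which is why calling it "just statistics" in the classical sense misleads.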

It seems to me that 'statistics' is often brought up to devalue LLMs by associating them with basic statistics. This association is wrong, as I've explained in the previous paragraph. And being statistical models themselves doesn't mean their ability to express knowledge (albeit limited to the textual domain) has to be inferior to a human's.

I understand the need to warn people of the limitations of LLMs. Their limitation is that they are text models with no concept of real life, not that they are statistical models or copy-paste machines.

[–] LurkingLuddite@piefed.social 2 points 2 hours ago* (last edited 2 hours ago) (1 children)

Even simply using the word "know" is anthropomorphising them and is wholly incorrect.

You are suffering from the ELIZA effect and it is just... sad.

[–] Lojcs@piefed.social 1 points 1 hour ago* (last edited 1 hour ago)

Computers have been getting anthropomorphised for a long time. Why is it only when talking about LLMs that you start clutching your pearls about it? Why do you think that verb has to be exclusive to humans? To me that seems like a strange and inconsequential thing to dig your heels in over.

And I struggle to see how you could genuinely believe I was suffering from the 'ELIZA effect' after reading my comment. You need more nuance and less absolutism in your worldview if you genuinely do.

[–] jacksilver@lemmy.world -2 points 15 hours ago (1 children)

They do build a representation of words and sequences of words and use that representation to predict what should come next.

A simplistic representation is the classic embedding diagram showing how, in certain vector spaces, you can relate man/woman/king/queen/royal together.

The thing is, these are static representations, bound only to the information provided to the model. Meaning there is nothing enforcing real-world representations; only statistically consistent representations will be learned.
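The analogy that diagram usually illustrates can be reproduced with hand-picked toy vectors. Real embeddings are learned and high-dimensional; these 3-d values are invented so the classic king - man + woman ≈ queen relation holds by construction:

```python
import math

# Hand-crafted toy embeddings (real ones are learned, not designed).
emb = {
    "man":   [1.0, 0.0, 0.0],
    "woman": [0.0, 1.0, 0.0],
    "king":  [1.0, 0.0, 1.0],   # man + "royal" direction
    "queen": [0.0, 1.0, 1.0],   # woman + "royal" direction
}

def cosine(a, b):
    # Cosine similarity: how aligned two vectors are, ignoring length.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# king - man + woman, computed component-wise.
target = [k - m + w for k, m, w in zip(emb["king"], emb["man"], emb["woman"])]

# Nearest word to the result by cosine similarity.
best = max(emb, key=lambda word: cosine(emb[word], target))
print(best)  # "queen"
```

Nothing in the vectors encodes what royalty *is*; the relation holds only because the geometry is statistically (here, manually) consistent, which is exactly the limitation noted above.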

[–] LurkingLuddite@piefed.social 2 points 2 hours ago* (last edited 2 hours ago) (1 children)

They don't "learn" anything, though. They're 'trained' (still a bad term but at least the industry uses it) to spit the correct answer out.

People, especially CEOs and advertising firms, need to stop anthropomorphizing them. They do not learn. They do not "know". They have statistically derived associations and that's it. That's all.

Holy hell, the ELIZA effect is in full swing and it's beyond sad. They don't build the association themselves. They don't know what the representations mean. They absolutely do not know why two words are strongly associated. It's just a bunch of math that computes a path through that precomputed vector space. That's it.

[–] jacksilver@lemmy.world 2 points 1 hour ago

I didn't use the word learn, although that's really just a matter of semantics. I said they build a representation of words/sequences in a vector space to understand the interplay of words.

You can downvote me all you want, but that's literally just the math that's happening behind the scenes. Does any of that approach something called "learning"? Probably not, but I'm not a neuroscientist.