Nvidia Announces DLSS 5, and it adds... An AI slop filter over your game
(www.digitalfoundry.net)
Saying that an LLM knows words is not a value judgement. It doesn't mean "LLMs are sentient" or "LLMs are smart like humans". It doesn't imply they have real-world experiences. It's just a description of what they do. That word has long been used to describe far more basic kinds of information and functionality in computers. What makes it so offensive now?
If you taught children slop at school, they would not get far either. That said, training LLMs on LLM output is more akin to getting rid of the books and relying on whatever the teachers remember to teach the students.
It comes from the LLM and not from the outside; that's what intrinsic means. How is that not intrinsic knowledge? I think you mean to say that without humans to read it, an LLM's output holds no inherent value. That is true, and nobody is claiming that it does. LLMs don't derive pleasure from talking the way humans do, so the only value LLM output has comes from the person reading it.
LLM weights are anything but basic, but regardless, this is also true, and lunnrais said as much:
The difference between human knowledge and LLM knowledge is that an LLM's entire universe is words, while humans understand words in relation to real-world experiences. Again, nobody is claiming those two kinds of understanding are equivalent, just that they both exist.
Also, on the point of statistics: I think the statistics people have in mind and the statistics at work in LLMs are vastly different. It is true that an LLM finds which word is most likely to come next, but how it does that is not a classical statistical method; the LLM itself is the statistical model. When someone says an LLM 'knows' or 'understands', they mean it has captured abstract information in an incomprehensibly complex neural network, not dissimilar to how we do it. The fact that it can only express that information through word prediction doesn't change the fact that it has captured information beyond what is present in any single prediction.
It seems to me that 'statistics' is often brought up to devalue LLMs by associating them with basic statistics. That association is wrong, as explained above. And being a statistical model doesn't mean an LLM's ability to express knowledge (though limited to the textual domain) has to be inferior to a human's.
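To make that contrast concrete, here is a minimal Python sketch (a toy illustration, not any real model): a bigram model is 'classical' statistics in the usual sense, a direct lookup over counts, while a neural LM only surfaces a probability distribution at the very end of a computation through learned weights.

```python
from collections import Counter
import math

corpus = "the cat sat on the mat the dog sat on the rug".split()

# Classical statistical method: estimate P(next | word) by counting pairs directly.
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus[:-1])

def bigram_next_word_probs(word):
    """Pure count lookup -- no abstraction, no learned representation."""
    return {w2: c / unigrams[word] for (w1, w2), c in bigrams.items() if w1 == word}

print(bigram_next_word_probs("the"))
# {'cat': 0.25, 'mat': 0.25, 'dog': 0.25, 'rug': 0.25}

# A neural LM ends in the same place -- a distribution over next tokens --
# but the distribution comes from billions of learned weights. The only
# "classical statistics" visible at the output is a softmax over logits:
def softmax(logits):
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits a trained network might emit for three candidate tokens.
print(softmax([2.1, 0.3, -1.0]))
# ~[0.83, 0.14, 0.04] -- probabilities derived from weights, not from counts
```

Both produce "which word is most likely next", but only the first is the kind of basic statistics people picture; in the second, everything interesting happens inside the weights.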
I understand the need to warn people about the limitations of LLMs. But their limitation is that they are text models with no concept of real life, not that they are statistical models or copy-paste machines.
Even simply using the word "know" is anthropomorphising them and is wholly incorrect.
You are suffering from the ELIZA effect and it is just... sad.
Computers have been getting anthropomorphised for a long time. Why is it only when talking about LLMs that you start clutching your pearls about it? Why do you think that verb has to be exclusive to humans? To me that seems like a strange and inconsequential thing to dig your heels in over.
And I struggle to see how you could genuinely believe I was suffering from the 'ELIZA effect' after reading my comment. If you genuinely do, you need more nuance and less absolutism in your worldview.
Your eagerness to fool yourself is beyond sad.