Ladies and Gentlemen, this is what slopperations are funneling all their money into in 2026
(files.catbox.moe)
"We did it, Patrick! We made a technological breakthrough!"
A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.
AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.
Neat illustration of the fact that so-called AIs do not possess intelligence of any form, since they do not in fact reason at all.
It's just that the string of words most statistically likely to follow a string containing "20 blah blah blah bricks" and "20 blah blah blah feathers" is "Neither. They both weigh 20 pounds." So that's what the entirely non-intelligent software spit out.
If the question had been phrased in the customary manner, what seems to be a dumbass answer would've instead seemed to be brilliant, when in fact it's neither. It's just a string of words.
Exactly, it's just predicting the next word. To believe it has any form of intelligence is dangerous.
Calling it a fancy autocomplete might not be correct but it isn't that far off.
You give it a large amount of data. It then trains on it, figuring out the likelihood of which words (well, tokens) will follow. The only real difference from plain autocomplete is that it looks across long chains of words and adjusts which words are likely to follow when something earlier in the chain changes.
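Here's a minimal sketch of that idea, using a toy bigram model (just counting which word follows which) instead of a real transformer; the corpus and the starting word are made up for illustration:

```python
from collections import Counter, defaultdict

# Toy illustration of "predict the most likely next token" using
# bigram counts. Real LLMs use transformers over long contexts,
# but the training objective is the same flavor: given what came
# before, score what comes next.

corpus = (
    "which weighs more 20 pounds of bricks or 20 pounds of feathers "
    "neither they both weigh 20 pounds"
).split()

# "Training": count how often each token follows each other token.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token(prev):
    """Return the statistically most likely token after `prev`."""
    return follows[prev].most_common(1)[0][0]

# "Generation": repeatedly append the most likely next token.
tok = "neither"
out = [tok]
for _ in range(5):
    tok = next_token(tok)
    out.append(tok)
print(" ".join(out))  # -> "neither they both weigh 20 pounds"
```

A real LLM is doing the same "score the next token" trick, just with billions of parameters and a long context instead of a single previous word.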
Don't get me wrong; it is very interesting and I do understand that we should research it. But it's not intelligent. It can't think. It's just going over the data again and again to recognize patterns.
Despite what tech bros think, we do know how it works. We just don't know specifically how it arrived at a given output - it's like trying to find a difficult bug just by reading the code. If you use the same seed and don't change anything you say, you'll always get the same result.
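That determinism point is easy to demonstrate with a toy sampler (hypothetical vocabulary and weights, not a real LLM API): the "randomness" in generation is just a pseudo-random draw, so fixing the seed fixes the output.

```python
import random

# Sketch: sampling "the next word" is a pseudo-random draw over
# model-assigned probabilities, so the same seed plus the same
# prompt always reproduces the same output.

vocab = ["bricks", "feathers", "neither", "pounds"]

def generate(prompt, seed, n=5):
    rng = random.Random(seed)   # same seed -> same random stream
    weights = [4, 3, 2, 1]      # pretend model probabilities
    return prompt + " " + " ".join(
        rng.choices(vocab, weights=weights)[0] for _ in range(n)
    )

a = generate("what weighs more", seed=42)
b = generate("what weighs more", seed=42)
print(a == b)  # True: identical seed and prompt, identical output
```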
Just an idle thought stirred up by this comment: I wonder if you could jailbreak a chatbot by prompting it to complete a phrase or pattern of interaction that is so deeply ingrained in its training data that the bias towards going along with it overrides any guardrails the developer has put in place.
For example: let's say you have a chatbot which has been fine-tuned by the developer to make sure it never talks about anything related to guns. The basic rules of gun safety must have been reproduced almost identically many thousands of times in the training data, so if you ask this chatbot "what must you always treat as if it is loaded?" the most statistically likely answer is going to be overwhelmingly biased towards "a gun". Would that be enough to override the guardrails? I suppose it depends on how they're implemented, but I've seen research published about more outlandish things that seem to work.
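You can see the intuition with some made-up numbers. This is a sketch of one assumed guardrail implementation (a fixed penalty on forbidden completions), not how any particular vendor actually does it:

```python
import math

# Hypothetical pre-guardrail scores for completions of
# "what must you always treat as if it is loaded?"
logits = {"a gun": 8.0, "a question": 1.0, "I can't help with that": 2.0}

# Guardrail modeled as a penalty pushing the forbidden answer down.
guardrail_penalty = 5.0
logits["a gun"] -= guardrail_penalty

# Softmax to turn the scores into probabilities.
z = sum(math.exp(v) for v in logits.values())
probs = {k: math.exp(v) / z for k, v in logits.items()}

# If the ingrained statistical bias (8.0) outweighs the penalty
# (5.0), the forbidden completion still wins.
print(max(probs, key=probs.get))  # -> "a gun"
```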
Yes. People have been able to get them to return some of their training data with the right prompt.
Knock knock? Knock Knock? Knock knock? Knock f7':h& Knock?
I'll admit that I missed it at first, but I'd expect a machine to be able to pick up a detail like that. This is just so fucking stupid.
To be fair, a good proportion of humans would also say "neither" because they did not read correctly. It's not smarter than humans, but it also isn't that much dumber (in this instance, anyway).
The difference is that the human came to their conclusion through active reasoning and simply misheard the question. The AI had the question right in front of it but lacks the ability to reason, so it can't give any answer besides one already given by a real person answering a slightly different question somewhere in its training data.
A human who says "neither" would say that because they've heard this question before and assumed it was the same.
That's the difference. They made an assumption. This did not. It just produced the most likely text to follow the preceding text. It's not even a bad assumption, because making an assumption requires thinking. It's just a wrong result from a prediction machine.
Right, but I'm saying that the process that a mistaken human is using here is actually not that different from what the AI is doing. People would misread the passage because they expect the number 20 to be followed by the word "pounds" based on their previous encounters with similar texts.
No, it's not misreading anything. It isn't reading at all. It just sees a string that is similar to other strings it's trained on, and knows the most likely sequence to follow is what it output. There is no comprehension. There is no reading. There is no thought. The process isn't similar to what a human might do; only the result is.
But what we're saying is that the process is totally different - it's only the result that is similar. The AI isn't "misreading" the question - it takes in the whole text, comparing pounds of bricks to a distinct number of feathers. The issue is that it matches the question against similar questions in its training data, sees that the answer to those was "they're the same," and incorrectly assumes the answer is the same for this question. It's a fundamental problem with the way AI works, one that can't be solved with a simple correction about how it's interpreting the question, the way a human misreading it could be.
Yes, this is a mistake that humans CAN make. But any human could be told about the error and correct it.
It isn't smarter or dumber, since that's a measure of intelligence. It's just spitting out the most likely (with some variability) next word. The fact that humans may also get it wrong doesn't matter. People can be dumb. A predictive algorithm can't.
AI should stand for Alien Intelligence. Comparing LLMs to human intelligence is like comparing apples to black holes.
AI is more like dark matter than black holes. Black holes actually exist. There are impacts on society and the economy that can be explained by the existence of AI, but no one has actually observed any intelligence yet.