No shit. This isn't new.
Employers who are foaming at the mouth at the thought of replacing their workers with cheap AI:
This sort of thing has been published a lot for a while now, but why is it assumed that this isn't what human reasoning consists of? Isn't all our reasoning ultimately a form of pattern memorization? It sure feels like it to me. So to me, all these studies proving that they're "just" memorizing patterns don't prove anything beyond that, unless they're coupled with research on the human brain showing that we do something different.
Agreed. We don't seem to have a very cohesive idea of what human consciousness is or how it works.
Thank you, Captain Obvious! Only those who think LLMs are like "little people in the computer" didn't know this already.
I think it's important to note (I'm not an LLM, I know that phrase triggers you to assume I am) that they haven't proven this is an inherent architectural issue, which I think would be the next step for the assertion.
Do we know that they can't reason at all, or do we just know that for these particular problems they jump to memorized solutions? Is it possible to create an arrangement of weights that can genuinely reason, even if the current models don't? That's the big question that needs answering. It's still possible that we just haven't properly incentivized reasoning over memorization during training.
If someone can objectively answer "no" to that, the bubble collapses.
I'd like a link to the original research paper, instead of a link to a screenshot of a screenshot.
It's all "one instruction at a time" regardless of high processor speeds and words like "intelligent" being bandied about. "Reason" discussions should fall into the same query bucket as "sentience".
XD So, like a regular school/university student who just wants to get passing grades?
You assume humans do the opposite? We literally institutionalize humans who don't follow set patterns.
We also reward people who can memorize and regurgitate even if they don't understand what they are doing.
Of course. That's obvious to anyone with basic knowledge of neural networks, no?