Finally got to sign up. Last time I tried, it complained about "abnormalities in the browser"; perhaps it wasn't available for Brazilian IP addresses back when I first found out about it.
I found the way it tries to do "reasoning" interesting. Of course LLMs can't "reason", but DeepSeek seems to build a "chain of thought", which brings interesting insights into the conversation.
I've been playing with it a bit too, and it's pretty impressive. Incidentally, I saw a couple of promising approaches to help with the reasoning aspect of LLMs.
The first method, called the consensus game, addresses the issue of models giving different answers to the same question depending on how it's phrased. The trick is to align the generator, which answers open-ended questions, with the discriminator, which evaluates multiple-choice questions. By incentivizing the two to agree on answers through a scoring system, the game improves the model's consistency and accuracy without requiring retraining. https://www.wired.com/story/game-theory-can-make-ai-more-correct-and-efficient/
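To make that concrete, here's a toy Python sketch of the agreement mechanic. Everything in it is invented for illustration: the candidate answers, the probabilities, and the update rule (a smoothed best response anchored to each player's initial beliefs) are stand-ins for the regularized no-regret updates that, as I understand it, the actual method runs over real LLM outputs.

```python
# Toy consensus-game loop. All names and numbers are invented; a real
# implementation would pull these distributions from an actual LLM.
import math

def normalize(p):
    """Rescale a dict of scores into a probability distribution."""
    total = sum(p.values())
    return {k: v / total for k, v in p.items()}

# Initial beliefs over candidate answers to one question.
init_gen  = {"Paris": 0.50, "Lyon": 0.30, "Marseille": 0.20}  # generator
init_disc = {"Paris": 0.70, "Lyon": 0.10, "Marseille": 0.20}  # discriminator

ETA = 4.0  # how strongly each player chases agreement with the other

gen, disc = dict(init_gen), dict(init_disc)
for _ in range(100):
    # Smoothed best response: boost answers the other player currently
    # believes in, anchored to this player's own initial policy so the
    # consensus can't drift to an answer neither believed to begin with.
    gen = normalize({a: init_gen[a] * math.exp(ETA * disc[a]) for a in gen})
    disc = normalize({a: init_disc[a] * math.exp(ETA * gen[a]) for a in disc})

print(gen)   # both players end up concentrated on "Paris"
print(disc)
```

After enough rounds both players concentrate on the same answer, so you get one consistent reply regardless of how the question is posed.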
The second method is to use neurosymbolic systems, which combine deep learning's ability to identify patterns in data with reasoning over knowledge using symbolic logic. This has the potential to outperform systems relying solely on either neural networks or symbolic logic, while providing clear explanations for decisions. It involves encoding symbolic knowledge into a format compatible with neural networks, and mapping data from neural patterns back to symbolic representations.
https://arxiv.org/abs/2305.00813
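Here's an equally toy sketch of what that pipeline can look like, with the "neural" half faked by a hand-written scorer. The predicates, rules, and confidence threshold are all made up for illustration and aren't from the linked paper; the point is just the shape of the approach: perception produces symbols, logic chains over them, and the trace doubles as an explanation.

```python
# Toy neurosymbolic pipeline: a fake "neural" perception stage emits
# symbolic predicates, and a forward-chaining rule engine reasons over
# them while recording an explanation trace. Predicates, rules, and the
# 0.5 threshold are all invented for illustration.

def perceive(features):
    """Stand-in for a trained network: map raw features to confident symbols."""
    scores = {
        "has_feathers": features.get("plumage", 0.0),
        "can_fly": features.get("wing_motion", 0.0),
    }
    # Keep only predicates the "network" is reasonably confident about.
    return {sym for sym, conf in scores.items() if conf > 0.5}

# Symbolic knowledge: (premises, conclusion) pairs.
RULES = [
    ({"has_feathers"}, "is_bird"),
    ({"is_bird", "can_fly"}, "can_migrate"),
]

def reason(facts):
    """Forward-chain the rules to a fixed point, logging each inference."""
    facts, trace = set(facts), []
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                trace.append(f"{' & '.join(sorted(premises))} -> {conclusion}")
                changed = True
    return facts, trace

facts, trace = reason(perceive({"plumage": 0.9, "wing_motion": 0.8}))
print(facts)  # symbolic conclusions, including "is_bird" and "can_migrate"
print(trace)  # human-readable record of each reasoning step
```

The nice property is that last bit: every conclusion comes with the chain of rule firings that produced it, which is exactly the kind of clear explanation pure neural systems struggle to provide.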
The neurosymbolic approach in particular looks like a very promising way to get actual reasoning to start happening in these systems. It's gonna be interesting to see where this all goes in a few years.
Although humans can reason and therefore reply in a more coherent manner (according to one's own cosmos, which contains personality traits, knowledge, mood, etc.), this phenomenon also happens with humans to an extent. Depending on how multifaceted the question/statement is, a slightly different phrasing can "induce" an answer. In fact, it's a fundamental principle behind mesmerism, gaslighting, and social engineering: inducing someone toward a certain reply/action/behavior/thought, sometimes relying on repetition, sometimes on complexity.
Artificial automatons are particularly sensitive to this because their underlying principles are purely algorithmic. We aren't exactly algorithmic, although we do have physically "deterministic" components (e.g. muscles contracting when in contact with electricity, the body always seeking homeostasis, etc.).
That said, I understood what you meant by it. It'd be akin to a human trying to think twice or thrice when faced with complex and potentially mischievous/misleading questions or statements: "thinking" before "acting" through the consensus game.
Yeah, I see great potential in it, too. "Signs and symbols rule the world, not words or laws" (unfortunately this quote, commonly attributed to Confucius, is often misused, but it captures the essence of how symbols are a fundamental piece of the cosmos).
For sure, and I think it's really important to keep in mind that our own logic is far from infallible. Humans easily fall for all kinds of logical fallacies, and we find formal reasoning very difficult. It takes scientists years of training to develop this mindset, and even then they can't fully eliminate biases and other fallacies. This is why we rely on mechanisms like peer review to mitigate these problems.
An artificial reasoning system should be held to a similar standard as our own reasoning, rather than to some ideal of rational thought. I think the key aspects to focus on are consistency, the ability to explain the steps taken, and the ability to integrate feedback to correct mistakes. If we can get that going, then we'd have systems that can improve themselves over time and that can be taught the way we teach humans.