this post was submitted on 14 Feb 2026
810 points (99.2% liked)
Technology
That poor guy, the AI is just ganging up on him
I hope it's the first proof of general AI consciousness.
what?? AI is not conscious, marketing just says that with no understanding of the maths and no legal obligation to tell the truth.
Here's how LLMs work:
The basic premise is like an autocomplete: it creates a response word by word (not literally words, but "tokens", which are mostly words but sometimes other things such as "begin/end codeblock" or "end of response"). The program is a guessing engine that guesses the next token, over and over. The autocomplete on your phone is different in that it merely guesses which word follows the previous word. An LLM guesses which token follows the entire conversation so far (well, not always the entire conversation: the history may be truncated due to limited processing power).
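That loop can be sketched in a few lines of Python. This is a toy illustration, not any real model: `guess_next_token` is a hypothetical stand-in for the actual neural network, which would return probabilities over a vocabulary of tens of thousands of tokens.

```python
import random

def guess_next_token(context):
    # Stand-in for the real model: a real LLM scores every token in
    # its vocabulary given the whole context; here we just pick one
    # from a tiny fixed list at random.
    return random.choice(["Hello", "there", "!", "<end>"])

def generate(conversation, max_tokens=50):
    # Repeatedly guess the next token given everything so far,
    # stopping at the special "end of response" token.
    tokens = list(conversation)
    for _ in range(max_tokens):
        nxt = guess_next_token(tokens)  # conditioned on the full history
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return tokens

print(generate(["User:", "Hi"]))
```

The key point is in the loop: each guess is conditioned on the whole conversation, not just the previous word, which is what separates an LLM from a phone autocomplete.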
The "training data" is used to build a model of how likely each token is to follow other tokens. But you can't store, for every token, how likely it is to follow every single possible combination of 1 to <big number like 65536, depends on which LLM> previous tokens. That's what "neural networks" are for.
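Some back-of-the-envelope arithmetic shows why a plain lookup table is hopeless (the vocabulary size is illustrative, not taken from any specific model):

```python
vocab = 50_000  # illustrative LLM vocabulary size

# A lookup table of next-token probabilities for every possible
# context of length n would need vocab**n entries per token.
for n in (1, 2, 3):
    print(f"context length {n}: {vocab ** n:,} possible contexts")

# Even at context length 3 that's 1.25e14 contexts -- and real
# models condition on thousands of previous tokens.
```

So instead of a table, the network learns a compressed function that computes those probabilities on the fly.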
Neural networks are networks of mathematical "neurons". A neuron takes one or more inputs from other neurons, applies a mathematical transformation to them, and passes the result on to one or more further neurons. At the start of the network, non-neurons feed the raw data in; at the end, non-neurons take the network's output and use it. The network is "trained" by making small adjustments to the maths of various neurons and keeping the arrangement with the best results. Neural networks are very difficult to see into or debug, because the mathematical nature of the system makes it unclear what any given neuron does. In an LLM, the network is a way to guess those probabilities on the fly (quite accurately) without having to obtain and store training data for every single possibility.
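A single mathematical "neuron" really is just a weighted sum pushed through a squashing function. A minimal sketch, with made-up (untrained) weights purely for illustration:

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs, plus a bias, squashed by a sigmoid.
    # That is the whole "mathematical transformation".
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# A "layer" is just several neurons reading the same inputs; a
# network chains layers. Training would nudge these weights and
# biases toward better outputs -- the values below are arbitrary.
hidden = [neuron([0.5, -1.0], w, b)
          for w, b in [([1.0, 2.0], 0.0), ([-1.5, 0.5], 0.1)]]
output = neuron(hidden, [2.0, -2.0], -0.5)
print(output)
```

This also shows why the networks are hard to debug: nothing about those intermediate numbers tells you what the neuron "means".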
I don't know much more than this, I just happen to have read a good article about how LLMs work. (Will edit the link into this post soon, as it was texted to me and I'm on PC rn)
I was making a joke because it seems AI intervened against the person on independent occasions, but thank you for your efforts.
Okay, now explain how the human neuron expresses consciousness.
Right, a question that literal neuroscientists couldn't answer.
I believe the technical term is "your brain is way more fucking complex". We have like 50 (I'm not a neuroscientist, just studied AI) chemicals being transmitted around the brain, constantly. They're used and passed on by cells which do biological and chemical things I don't understand. Ever heard of dopamine, cortisol, serotonin? AI don't got those. We have neurons that don't connect to every other neuron - only tech bros would think a fully connected layer is an acceptable approximation of that. Our brain forms literal pathways, along which it transmits those chemicals. No, a physical connection is not the same as a higher average weight, and the people who came up with AI maths in the 50s would back me up.
AI uses floating point maths to draw correlations and make inferences. More advanced AI does more of this per second and has had more training. Its "neurons" are a programming abstraction used to describe a series of calculations and inputs; they're not actual neurons, nor an advanced piece of tech. They're not magic.
High schoolers could study AI for a single class, then neurobiology right after, and realise just how basic the AI model is at mimicking a brain. It's not even close, but I guess Sam Altman said we're approaching general intelligence, so I'm probably just a hater.