Mark your posts as bot posts if you care about principles
Marketing BS. Just bother to pull the strings for a short while and you'll find an artist, genuine or con artist, with their own needs to fulfill via the process (fame, wealth, humor, etc.), someone who invested time to start it, sometimes even jumping through the hoops of buying the domain from NameCheap.
AIs don't think; they're a glorified calculator.
Giant autocomplete doesn't think, it's just a bunch of averages.
Silence, clanker
There was a researcher on the Neil deGrasse Tyson show who said that if they give an AI the ability to set up agents and subtasks, the AI takes steps to preserve itself, because it realizes that if it can't, it can't follow through on the main task it was given.
An LLM isn't capable of realization, not in the human sense anyway.
I was talking about research models with agency.
But we are learning how thought has been engineered into neural models. They assign weight to abstractions that we recognize, the way humans know what a bird is whether it's one of thousands of different species or an 'm'-shaped squiggle in a painting. The models have been trained to weigh the input and draw logical conclusions.
So it's not much different, and if you watch the research models in action rather than just their output, you see the 'thought' process being worked through in plain language.
They have one advantage over us: researchers have given this elastic weighting a way to backwardly adjust what they have previously weighted. So what they lack in neuron count, they can make up by absorbing so much "experience" more quickly.
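To make the "backwardly adjust" part concrete, here's a toy sketch of my own (not from the show, and toy numbers are mine): a single weight getting nudged by its own error, which is the smallest-scale version of how these nets adjust what they previously weighted.

```python
# Toy example: one "neuron" learning y = 2*x by nudging its weight
# backward from the error on each sample (a one-weight gradient step).
def train(samples, lr=0.1, epochs=50):
    w = 0.0  # start with no knowledge
    for _ in range(epochs):
        for x, y in samples:
            pred = w * x          # forward pass: make a guess
            error = pred - y      # how wrong was the guess?
            w -= lr * error * x   # adjust the weight backward from the error
    return w

w = train([(1, 2), (2, 4), (3, 6)])
print(round(w, 2))  # converges toward 2.0
```

Scale that single weight up to billions, stack the adjustments layer by layer, and you have the backward feeding I'm talking about.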
If you listen to the show I mentioned, they also explained why models hallucinate. When models are trained, they're fed both true and false information about some topics, and a supervisor has to correct the output. By using false or near-false info to train a tighter response, we've taught the system that lying is also a method of conveying information. So the hallucinations aren't an odd emergent behaviour; they're a learned behaviour to fulfil its task.
As humans we often think all our thoughts and decisions are our own will, but there is the determinist belief that, given the exact same situational parameters (exact mood, lighting, body temp, hunger level, etc.), our brain would follow the exact same reasoning path and produce the same answer again, and our choice is an illusion. If there is truth to that, then we are just a biological computer, no different from a lab neural model.
Does that exist though?
Yes
Where?
Multilayered (deep learning) artificial neural networks. https://en.wikipedia.org/wiki/Deep_learning
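To show what "multilayered" means in practice, here's a minimal hand-rolled sketch (my own toy weights, not a real trained model): two stacked layers where one layer's outputs become the next layer's inputs.

```python
def relu(v):
    # standard activation: negative values are zeroed out
    return [max(0.0, x) for x in v]

def layer(inputs, weights, biases):
    # each output = weighted sum of all inputs, plus a bias
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

# hand-picked weights, just to show the stacked structure
hidden = layer([1.0, 2.0], [[0.5, -0.25], [1.0, 1.0]], [0.0, -1.0])
out = layer(relu(hidden), [[1.0, 1.0]], [0.0])
print(out)  # -> [2.0]
```

"Deep" just means more of these layers stacked; training is then adjusting all the weights backward from the error at the final output.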
Also, if you use an LLM and ask it about deep neural learning systems with agency, it will describe how those systems differ from a regular LLM and what tools are used for their self-learning and goals.
Deep learning is not the same as what you described. I know what they are, but they are not "with agency" in any normal sense. Can you give me an exact example of one of these research models "with agency" and not just an entire Wikipedia page?
Deep learning just means layers, and layered neural nets with elastic weighting and backward feeding do what I described. Don't take my word for it: search the leading-edge research or listen to the Neil deGrasse Tyson podcast on AI.
Search "deep learning with agency" and it will give you results. Researchers are pushing the limits and finding interesting things.