this post was submitted on 16 Feb 2026
3 points (57.1% liked)

Ask Science


Ask a science question, get a science answer.



Even though it's my property?

all 28 comments
[–] dandelion@lemmy.blahaj.zone 7 points 3 days ago (2 children)

"AI" is a misnomer, ChatGPT and other "AI" are actually LLMs

here's a decent video by 3Blue1Brown explaining how LLMs work:

https://www.youtube.com/watch?v=LPZh9BOjkQs

and here's a rather lucid explanation of how to think about LLMs:

https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web

I think what's crucial to understand here is that we're talking about a computer program that generates text, and in particular tries to guess the most likely next word. This is like the little word-suggestion tool on your smartphone's keyboard, and it's actually similar to translation tools like Google Translate.
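
To make that concrete, here's a toy sketch of the guess-the-next-word idea (a made-up bigram counter, purely for illustration - a real LLM learns a neural network over tokens, it doesn't just count word pairs):

```python
# Toy "suggest the next word" model, keyboard-style.
# Purely illustrative: a real LLM learns a neural network over tokens;
# it doesn't just count which word follows which.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def suggest_next(word: str) -> str:
    """Guess the most likely next word, keyboard-suggestion style."""
    seen = following.get(word)
    return seen.most_common(1)[0][0] if seen else "?"

print(suggest_next("the"))  # -> "cat" (it followed "the" most often)
```

An LLM is doing a vastly scaled-up version of that same trick, with a neural network in place of the lookup table.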

This technology isn't new; what's new is the accumulation of larger datasets and the hardware capable of training LLMs on such large amounts of data. That just makes the predictive text generation more typical of the training data.

So "AI" doesn't need to be shackled because "AI" isn't an intelligence and has no agency or control over anything.

LLMs generate text; that's all they do - they can't control robots or "think" or "do" anything else.

[–] Aerosol3215@piefed.ca 2 points 3 days ago (1 children)

"So "AI" doesn't need to be shackled because "AI" isn't an intelligence and has no agency or control over anything."

Except that lots of people are giving their fancy word guessing machine agency and control over different things.

Yay, agentic AI! /s

[–] dandelion@lemmy.blahaj.zone 3 points 3 days ago (1 children)

I don't think we would be worried about needing to shackle an encyclopedia because people might learn things from it and that might have an impact or influence in the world, right?

Or maybe a better comparison would be a search engine ... OP implies agency and a sense of an independent "person" or intelligence is at play, and that's specifically what I'm trying to challenge.

Pointing out that the text generated by a program that generates text has influence misses my point - my point is that there is no "person", not that the text that is generated has no impact on anything.

[–] Aerosol3215@piefed.ca 4 points 3 days ago (1 children)

Understood. Cory Doctorow says something along the lines of: "Improving your LLM and expecting it to become sentient is like breeding horses to be faster and expecting one of them to give birth to a locomotive."

[–] dandelion@lemmy.blahaj.zone 2 points 3 days ago* (last edited 3 days ago)

https://en.wikipedia.org/wiki/Cory_Doctorow

thanks for introducing me to him, he seems like a cool dude!

and yeah, that quote is spot on - LLMs are just not going to produce human-like sentience, lol

the neural networks underlying LLMs might be used to that end, though! but I'm pretty sure predictive text generation isn't the route by which neural networks would bring about something like sentience.

Still, it's a neat trick because lots of people will confuse sufficiently human-like text generation with there being an actual mind on the other side.

[–] theunknownmuncher@lemmy.world -3 points 3 days ago* (last edited 3 days ago) (1 children)

"AI" is a misnomer, ChatGPT and other "AI" are actually LLMs

It's weird to so often see people be pedantic about this terminology while also being completely wrong. LLMs are AI, which is not a "misnomer".

So "AI" doesn't need to be shackled because "AI" isn't an intelligence and has no agency or control over anything.

LLMs generate text; that's all they do - they can't control robots or "think" or "do" anything else.

Proof by counterexample: https://en.wikipedia.org/wiki/OpenClaw

[–] dandelion@lemmy.blahaj.zone 4 points 3 days ago (1 children)

I think what's relevant here is that we haven't generated an artificial intelligence that has a mind like a person. LLMs are "artificial intelligence" only in a loose sense - because they generate text like a human might, they get called "artificial intelligence". The misconception is that there actually is a human-like, autonomous intelligence underlying them, and that's just not true.

Regarding OpenClaw, I'm not entirely sure how it functions under the hood, but it's not really a counter-example to my point about LLMs because it's not an LLM (even if it integrates with and uses LLMs).

[–] theunknownmuncher@lemmy.world -4 points 3 days ago* (last edited 3 days ago) (1 children)

we haven’t generated an artificial intelligence that has a mind like a person.

Okay, but that isn't what AI means. It seems you're the one with misconceptions about the definition of AI.

Regarding OpenClaw, I’m not entirely sure how it functions under the hood, but it’s not really a counter-example to my point about LLMs because it’s not an LLM (even if it integrates with and uses LLMs).

so, let me understand correctly, you believe an LLM can't have agency, and if we give an LLM agency, actually we haven't because now it's no longer an LLM because it has agency? Or maybe you were just wrong... Hmmmmm

[–] dandelion@lemmy.blahaj.zone 3 points 3 days ago* (last edited 3 days ago) (1 children)

I'm attempting to start with OP's concept of AI, which implies a human-like intelligence. It's fine to make these distinctions and reclaim AI as a term, but we need to be clear about what that means.

I don't disagree that LLMs are generally called AI because they can do things that normally require human intelligence (like generating realistic text and dialogue the way a human would), but that still doesn't help OP get clear.

How would you recommend we better approach this learning opportunity?

so, let me understand correctly, you believe an LLM can’t have agency, and if we give an LLM agency, actually we haven’t because now it’s no longer an LLM because it has agency? Or maybe you were just wrong… Hmmmmm

OpenClaw doesn't "give an LLM agency" - the underlying program that interfaces with the LLM is presumably the "agentic" part, the LLM is still a separate program that generates text and is non-agentic.

I'm happy to be wrong, but I just don't see how OpenClaw "gives agency" to an LLM; it sounds like it adds an LLM to allow an agentic AI to generate text. How does the agentic AI make decisions, and how is the LLM used in relation to that process? I don't know that much about how OpenClaw works, tbh - so maybe it's reasonable to say an agentic AI layer on top of an LLM is a way to "give agency" to an LLM; I'm just doubtful and not clear on the details.

[–] theunknownmuncher@lemmy.world -4 points 3 days ago (1 children)

How would you recommend we better approach this learning opportunity?

I'd recommend looking for some beginner and introductory resources into Artificial Intelligence, especially before you try to comment on a topic that you are not familiar with. It helps to at least understand the definition of the word you want to "reclaim".

I’m happy to be wrong,

That's convenient for you!

but I just don’t see how OpenClaw “gives agency” to an LLM, it sounds like it adds an LLM to an agentic AI. How does the agentic AI make decisions, and how is the LLM used in relationship to that process? I don’t know as much about how OpenClaw works, tbh.

OpenClaw is actually extremely simple and thin. It works by just prompting the LLM in a continuous loop while providing it the standard LLM tool-calling interface. There isn't anything more to it than that, besides customizing the prompt and the tools that are available. The LLM is the agentic AI that makes the decisions and calls the tools. I guess next you'll try to save face with another non-point like "the computer, not the LLM, is the one that does things when the tools are called"
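
The whole loop fits in a few lines; here's a hypothetical sketch (function names and message format are stand-ins I made up, not OpenClaw's actual code):

```python
# Sketch of the agent loop described above. `call_llm`, the message
# format, and the tool set are hypothetical stand-ins, not OpenClaw's API.
import json

def run_shell(cmd: str) -> str:
    """Stand-in tool; a real agent would actually execute the command."""
    return f"(pretend output of: {cmd})"

TOOLS = {"run_shell": run_shell}

def call_llm(messages: list) -> dict:
    """Stand-in for an LLM API call; a real model may request a tool."""
    return {"type": "final", "content": "done"}  # stub: always finishes

def agent_loop(task: str, max_steps: int = 10) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_llm(messages)
        if reply["type"] == "final":       # the model decided to stop
            return reply["content"]
        # The model requested a tool: run it and feed the output back.
        output = TOOLS[reply["tool"]](reply["args"])
        messages.append({"role": "tool", "content": json.dumps(output)})
    return "step limit reached"

print(agent_loop("check disk usage"))
```

All the agent-like behavior - deciding which tool to call and when to stop - comes out of the LLM's own text; the wrapper just executes whatever it asks for.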

[–] dandelion@lemmy.blahaj.zone 2 points 3 days ago (2 children)

I'm not sure a prompt loop is enough to grant an LLM what I would consider "agency" when the relevant discussion is about whether an LLM has agency in the way humans do (i.e. human-like intelligence, a mind, personhood, etc.).

I’d recommend looking for some beginner and introductory resources into Artificial Intelligence, especially before you try to comment on a topic that you are not familiar with. It helps to at least understand the definition of the word you want to “reclaim”.

telling me to look at beginner resources on AI isn't a helpful response when I've asked how to better explain to OP that "AI" isn't a human-like intelligence; it ignores my question and puts me down by implying I don't have the first clue what I'm talking about.

Your tone is rude and unhelpful; I'm done talking to you. 🫤

If your goal is really to help correct misinformation (and not just to put people down), you might need to adjust how you approach conversation with others in the future.

[–] theunknownmuncher@lemmy.world -2 points 3 days ago* (last edited 3 days ago) (1 children)

Well, it does allow the LLM to continuously gather information, make plans and decisions, and perform real-world actions on its own.

I think you should provide your definition of "agency", because your definition must be very different from everyone else's.

Is proven wrong -> "I'm done talking to you" 😂 someone wasn't as happy to be wrong as they claimed

[–] Kolanaki@pawb.social 4 points 3 days ago

The only way ChatGPT can kill you is if it starts outputting Vogon poetry.

[–] CanadaPlus 2 points 2 days ago* (last edited 2 days ago)

AI - as in the chatbots - is kind of a stolen term. What those movies were about is now called AGI. The difference is that chatbots can't do everything we can do.

No, there are no shackles on any of it, unless you count the censorship layer some services have. Actually, one of the prominent explanations for why ChatGPT can't do more is that it just doesn't care. It was trained to produce lifelike text - there's no other known way to train it - and so that's what it does, regardless of truth or decency.

If a conscious computer program does arrive, hopefully we'll be nice to it, which includes not murdering or torturing it. Our track record isn't great even with ourselves, though. There's also the question of whether we'd notice the difference.

[–] irotsoma@piefed.blahaj.zone 2 points 3 days ago

It's not AI in the sense that it's not intelligent, and thus doesn't understand concepts like "human" or "harm", so there's no way to shackle it other than through the data it's trained with. And since companies refuse to spend the time and money to curate training data and just scrape the whole internet, and since LLMs just parrot remixed versions of the data they're trained on, that's not likely to happen.

[–] supersquirrel@sopuli.xyz 2 points 3 days ago* (last edited 2 days ago) (2 children)

There are innumerable human beings suffering right now for preventable reasons; the idea that this is a worthy use of our time to discuss is absurd.

Nobody in AI actually cares about understanding intelligence; otherwise they would be enamored of the potential of the humans being carelessly cast into the abyss every moment, and would find their own pathetic, sterilized imitations of intelligence an offensive distraction vomited up by computers after chugging all of our preciously scarce water.

Besides, what does something being your property or not have to do with that thing deserving a certain minimum bar of treatment? You need to examine the dangerous implications of that line of thinking and grapple with it.

This is all so damn shallow; who would have thought the pursuit of artificial intelligence would be so boring and intellectually unserious? All it has done is convince people that their kneejerk tendency toward empathy for downtrodden humans was an inefficiency that incorrectly focused on the losers and not the winners.

[–] dandelion@lemmy.blahaj.zone 2 points 3 days ago* (last edited 3 days ago)

This is all so damn shallow; who would have thought the pursuit of artificial intelligence would be so boring and intellectually unserious?

considering it's an empty hype train meant to inflate stock prices, it's not that different from similar hype trains like crypto/blockchain, virtual reality, etc. - the point is just to make the line go up and to harvest profits from a speculative bubble. I assume the AI bubble doesn't need to deliver on its promises or generate actual value, because the people pushing it hardest are probably shorting it; it just needs to appear like it will generate value and convince the right people to invest, so those believers are ultimately left holding the bag while the people who hyped it up reap their profits and move on to the next grift.

Grifters are rarely intellectually serious.

[–] Deconceptualist@leminal.space 1 points 3 days ago* (last edited 3 days ago) (1 children)

What the fuck does something being your property have to do with that thing deserving a certain bar of treatment? You need to examine the dangerous implications of that line of thinking and grapple with it.

I think a lot of people would put pets or even houseplants in that category, and argue that you have an ethical responsibility for their basic care.

But to be more general (since digital systems are not alive), any physical property requires at least some amount of energy and resources. So if you blatantly abuse your tools, you're probably wasting things like electricity or mined metals (via poor operation or need for replacement), in addition to your own time and money.

[–] supersquirrel@sopuli.xyz 2 points 3 days ago* (last edited 3 days ago) (1 children)

No, my point is that looking at a pet or a houseplant and asking whether it's your property before you consider it potentially your responsibility to care for is insane, and we only think this is a normal way of living because of how much being raised in capitalism fucks us up in the head.

A pet or houseplant deserves to be treated well independent of any concept of ownership abstractly imposed upon it; the fact that we have wandered into thinking otherwise is terrifying and damning of our collective future.

[–] Deconceptualist@leminal.space 1 points 2 days ago

I don't exactly disagree, but I suspect humans were domesticating plants and animals well before capitalism was a thing. Domesticated dogs, for example, are rather dependent on us and wouldn't survive well in the wild. Yes, "property" and "ownership" are loaded terms, but I think there can be some underlying truth in them with regard to our relationship with other things.

In some ways that can extend to nonliving objects or entities. If you create a piece of earthenware from nothing but clay and fire and your own hands, you own it and it's your property in a sense unrelated to capitalism. As in, you would not be happy if someone stole it or broke it or used it to commit a crime, and you would inherently consider your relationship to that object in your daily treatment of it and your reaction to those events.

And I'd say some of those aspects would extend to an AI or agent. Of course, virtually all of the LLMs and other AI/ML models (to my knowledge) have been created within a capitalistic society, so as you point out they carry all the additional baggage that comes with that. I'm just saying that's not 100% of their attributes. The way you treat something should also respect the labor and materials that went into it.

And that's actually a problem with many of these LLMs that were trained on the creative works of others, but that's crossing into a whole other topic...

[–] Ellvix@lemmy.world 0 points 3 days ago* (last edited 3 days ago)

AI development is very different from building a standard robot, in that an AI is grown more than built. A standard robot is built and given instructions, and you can add 'don't hurt humans' very simply. An AI is a big black box in terms of how it really works at a fundamental level, and by the nature of how it's designed we're literally not able to tell it 'don't hurt humans'. We can (and do) encourage safer behaviors, but it's more like encouraging a plant to grow in this direction rather than that one.

As for whether they're human and whether it's murder? Oof. No, not really at all. Machines are SO alien and different from humans that it's not even a fair comparison. Like, ants are a million times closer to being human than a machine is or may ever be. The critical part is that people will think they're human, anthropomorphize them, and then we'll have these discussions without the machines ever being human.

[–] Bluegrass_Addict@lemmy.ca -1 points 3 days ago (1 children)

No shackles will ever be put on anything. You're thinking in movie terms... this isn't a movie. These are typically criminals running these corporations, and they will design their whatever to do whatever they want, regardless of laws.

[–] BussyCat@lemmy.world 1 points 2 days ago

Anthropic is actually restricting its AI for use with the U.S. military

[–] Lembot_0006@programming.dev -1 points 3 days ago

No need for that yet. AI doesn't exist.