Ask Science
Ask a science question, get a science answer.
Community Rules
Rule 1: Be respectful and inclusive.
Treat others with respect, and maintain a positive atmosphere.
Rule 2: No harassment, hate speech, bigotry, or trolling.
Avoid any form of harassment, hate speech, bigotry, or offensive behavior.
Rule 3: Engage in constructive discussions.
Contribute to meaningful and constructive discussions that enhance scientific understanding.
Rule 4: No AI-generated answers.
The use of AI-generated answers is strictly prohibited. Posting answers generated by AI systems may result in a ban.
Rule 5: Follow guidelines and moderators' instructions.
Adhere to community guidelines and comply with instructions given by moderators.
Rule 6: Use appropriate language and tone.
Communicate using suitable language and maintain a professional and respectful tone.
Rule 7: Report violations.
Report any violations of the community rules to the moderators for appropriate action.
Rule 8: Foster a continuous learning environment.
Encourage a continuous learning environment where members can share knowledge and engage in scientific discussions.
Rule 9: Source required for answers.
Provide credible sources for answers. Failure to include a source may result in the removal of the answer to ensure information reliability.
By adhering to these rules, we create a welcoming and informative environment where science-related questions receive accurate and credible answers. Thank you for your cooperation in making the Ask Science community a valuable resource for scientific knowledge.
We retain the discretion to modify the rules as we deem necessary.
"AI" is a misnomer, ChatGPT and other "AI" are actually LLMs
here's a decent video by 3Blue1Brown explaining how LLMs work:
https://www.youtube.com/watch?v=LPZh9BOjkQs
and here's a rather lucid explanation of how to think about LLMs:
https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web
I think what is crucial to understand here is that we're talking about a computer program that attempts to generate text, in particular by guessing what the best next word is. It's like the word-suggestion tool on your smartphone's keyboard, or similar to translation tools like Google Translate.
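here's a toy sketch of that "guess the next word" idea - to be clear, real LLMs use neural networks over subword tokens, this is just a word-frequency counter I made up to illustrate the concept:

```python
# Toy next-word predictor: guess the most likely next word, append it, repeat.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the dog slept on the rug".split()

# Count how often each word follows each other word in the "training data".
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word):
    """Return the most frequent word seen after `word`, if any."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

# Generate text by repeatedly guessing the next word.
word, output = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    if word is None:
        break
    output.append(word)

print(" ".join(output))  # -> "the cat sat on the cat sat" - repetitive, like a tiny model would be
```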
This technology isn't new; what's new is the accumulation of larger datasets, plus the hardware and ability to train LLMs on such large amounts of data. That just makes the predictive text generation more typical of the training data.
So "AI" doesn't need to be shackled because "AI" isn't an intelligence and has no agency or control over anything.
LLMs generate text - that's all they do. They can't control robots or "think" or "do" anything else.
"So "AI" doesn't need to be shackled because "AI" isn't an intelligence and has no agency or control over anything."
Except that lots of people are giving their fancy word guessing machine agency and control over different things.
Yay, agentic AI! /s
I don't think we would be worried about needing to shackle an encyclopedia because people might learn things from it and that might have an impact or influence in the world, right?
Or maybe a better comparison would be a search engine ... OP implies agency and a sense of an independent "person" or intelligence is at play, and that's specifically what I'm trying to challenge.
Pointing out that the text generated by a text-generating program has influence misses my point - my point is that there is no "person" there, not that the generated text has no impact on anything.
Understood. Cory Doctorow says something along the lines of "improving your LLM and expecting it to become sentient is like breeding horses to be faster and expecting one to give birth to a locomotive."
https://en.wikipedia.org/wiki/Cory_Doctorow
thanks for introducing me to him, he seems like a cool dude!
and yeah, that quote is spot on - LLMs are just not going to produce human-like sentience, lol
the neural networks underlying LLMs might be used to that end, though! but I'm pretty sure predictive text generation isn't the route by which neural networks would bring about something like sentience.
Still, it's a neat trick because lots of people will confuse sufficiently human-like text generation with there being an actual mind on the other side.
It's weird to so often see people be pedantic about this terminology while also being completely wrong. LLMs are AI, which is not a "misnomer".
Proof by counterexample: https://en.wikipedia.org/wiki/OpenClaw
I think what's relevant here is that we haven't created an artificial intelligence that has a mind like a person's. LLMs are "artificial intelligence" only in a loose sense: because they generate text like a human might, they get called "artificial intelligence". The misconception is that there is actually a human-like, autonomous intelligence underlying them, and that's just not true.
Regarding OpenClaw, I'm not entirely sure how it functions under the hood, but it's not really a counter-example to my point about LLMs because it's not an LLM (even if it integrates with and uses LLMs).
Okay, but that isn't what AI means. It seems you're the one with misconceptions about the definition of AI.
so, let me understand correctly: you believe an LLM can't have agency, and if we give an LLM agency, actually we haven't, because now it's no longer an LLM since it has agency? Or maybe you were just wrong... Hmmmmm
I'm attempting to start with OP's concept of AI, which implies a human-like intelligence. It's fine to make these distinctions and reclaim AI as a term, but we need to be clear about what that means.
I don't disagree that LLMs are generally called AI because they can do something that normally requires human intelligence (like generating realistic text and dialogue the way a human would), but that still doesn't help OP get clear.
How would you recommend we better approach this learning opportunity?
OpenClaw doesn't "give an LLM agency" - the underlying program that interfaces with the LLM is presumably the "agentic" part; the LLM is still a separate program that generates text and is non-agentic.
I'm happy to be wrong, but I just don't see how OpenClaw "gives agency" to an LLM; it sounds like it adds an LLM so that an agentic AI can generate text. How does the agentic AI make decisions, and how is the LLM used in that process? I don't know that much about how OpenClaw works, tbh - so maybe it's reasonable to say an agentic AI layer on top of an LLM is a way to "give agency" to an LLM. I'm just doubtful and not clear on the details.
I'd recommend looking for some beginner and introductory resources into Artificial Intelligence, especially before you try to comment on a topic that you are not familiar with. It helps to at least understand the definition of the word you want to "reclaim".
That's convenient for you!
OpenClaw is actually extremely simple and thin. It works by just prompting the LLM in a continuous loop while providing it tool calling, which is standard for LLMs. There isn't anything more to it than that, besides customizing the prompt and the tools that are available. The LLM is the agentic AI that makes the decisions and calls the tools. I guess next you'll try to save face with another non-point like "the computer, not the LLM, is the one that does things when the tools are called"
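Here's roughly the whole shape of it, sketched in Python - `fake_llm` is a made-up stand-in for a real chat-completion API, this is the general pattern of such a loop, not OpenClaw's actual code:

```python
# Minimal agent loop: prompt the model, let it request tool calls,
# feed the results back, repeat until it answers in plain text.
from datetime import datetime

def get_time():
    """A tool the model can ask to call."""
    return datetime.now().isoformat()

TOOLS = {"get_time": get_time}

def fake_llm(messages, tools):
    """Pretend model: requests a tool once, then answers in plain text."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "get_time", "args": {}}
    return {"content": f"The current time is {messages[-1]['content']}."}

messages = [{"role": "user", "content": "What time is it?"}]

while True:
    reply = fake_llm(messages, tools=list(TOOLS))
    if "tool" in reply:  # the model decided to call a tool
        result = TOOLS[reply["tool"]](**reply["args"])
        messages.append({"role": "tool", "content": result})
        continue  # hand the result back and let the model decide again
    print(reply["content"])  # final text answer
    break
```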
I'm not sure a prompt loop grants an LLM what I would consider "agency" when the relevant discussion is about whether an LLM has agency in the way humans do (i.e. human-like intelligence, a mind, personhood, etc.).
telling me to look at beginner resources on AI isn't a helpful response when I asked how to better explain to OP that "AI" isn't a human-like intelligence; it ignores my question and puts me down by implying I don't have the first clue what I'm talking about.
Your tone is rude and unhelpful, I'm done talking to you. 🫤
If your goal is really to help correct misinformation (and not just to put people down), you might need to adjust how you approach conversation with others in the future.
Huh? I don't use OpenClaw.
Well, it does allow the LLM to continuously gather information, make plans and decisions, and perform real-world actions, on its own.
I think you should provide your definition of "agency", because your definition must be very different from everyone else's.
Is proven wrong -> "I'm done talking to you" 😂 someone wasn't as happy to be wrong as they claimed
Well you tried, but actually I'm still correct.
Yeah, if your argument is really "OpenClaw is not agentic AI", here you go: 🤡
And the funniest part is that yes, OpenClaw of course begins to act on its own when you spin it up, without any user prompting... you thought you had me there 😂
Perhaps you should have a basic understanding about what you are commenting on, before you comment? Especially if you're going to make personal attacks over it. Those really backfired on you now lol