I've started solely referring to them as LLMs.
Fuck AI
"We did it, Patrick! We made a technological breakthrough!"
A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.
AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.
The term “Artificial Intelligence” has historically been used by computer scientists to refer to any “decision-making” program of any complexity, even something extremely simple, like solving a maze by following the left wall.
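That left-wall trick is a nice illustration of how trivial a "classic AI" can be. A minimal sketch (the maze, coordinates, and preference order here are my own illustration, not from any particular textbook):

```python
# Left-hand-rule maze solver: the textbook example of a trivial
# rule-based "AI" agent. It keeps its left hand on the wall, which
# works for simply connected mazes.

DIRS = [(-1, 0), (0, 1), (1, 0), (0, -1)]  # N, E, S, W in clockwise order

def solve(maze, start, goal, max_steps=10_000):
    """Walk the maze with the left-hand rule; return the visited cells, or None."""
    r, c = start
    d = 1  # facing East initially
    path = [start]
    for _ in range(max_steps):
        if (r, c) == goal:
            return path
        # Preference order: turn left, go straight, turn right, turn around.
        for turn in (-1, 0, 1, 2):
            nd = (d + turn) % 4
            nr, nc = r + DIRS[nd][0], c + DIRS[nd][1]
            if 0 <= nr < len(maze) and 0 <= nc < len(maze[0]) and maze[nr][nc] != '#':
                r, c, d = nr, nc, nd
                path.append((r, c))
                break
    return None

maze = [
    "#####",
    "#...#",
    "#.#.#",
    "#...#",
    "#####",
]
print(solve(maze, (1, 1), (3, 3)))  # a short path from corner to corner
```

A dozen lines of fixed rules, no learning anywhere, and it still counts as "AI" under the historical definition.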
I like LLMbeciles myself.
I like calling them "Bullshit Generators", because that's what they actually are.
I like calling them regurgitative idiots, or artificial idiots, though really anything that makes fun of them works
No
Exhibit A: people are beginning to describe empty, hollow, mass-produced corporate slop as "AI". It has become an adjective for worthless trash, and I love it.
Yes, we say Fuck AI, but when we see it in the wild we call it slop, bot, clanker, vibe coded, etc.
And splitting hairs about naming is very geeky, but it doesn't help, as 90% of people have very little concept of what AI or LLMs are in the first place.
90% of people have very little concept of what AI or LLMs are in the first place.
Yeah, I agree. I think that's exactly why there needs to be a term that describes them.
AI has a very broad definition. Their products are AI.
But even so, surely you don't believe that generative AI programs and HAL 9000 are functionally identical? I just think it would be helpful to have a word that doesn't lump those things together.
Well, according to the broad definition, a Google search or recommendation systems like those on Netflix or Instagram would also be considered AI. And we don't call them that, but rather by their proper name.
And language shouldn't be underestimated. It has a profound impact on our thinking, feeling, and actions. Many people associate AI with intelligence and "human thinking". That alone is enough to mislead many, because the usefulness of the technology in a given application is no longer questioned; after all, it's "intelligent". When "LLM" is used instead, far fewer people would grant it intelligence, and one might be more inclined to ask whether a language model, for example in Excel, is truly useful. After all, that's exactly what it is: a model of our language. Nothing more, nothing less.
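The "model of our language" point can be made concrete with a toy example. This is only a sketch of the core idea (the corpus and function names are mine); real LLMs are neural networks over subword tokens, not word-bigram counts, but the principle of modelling the statistics of language is the same:

```python
# A toy bigram "language model": it records which word follows which
# in a training text, then emits the most frequent continuation.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# For each word, count which words follow it in the corpus.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the most frequent continuation seen in the training text."""
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # -> 'cat' ("the cat" appears twice, "the mat" once)
```

It "knows" nothing about cats or mats; it just reproduces the statistics of its training text, which is the sense in which a language model is a model of language and nothing else.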
And we don't call them that
I have seen people do that.
a Google search or recommendation systems like those on Netflix or Instagram would also be considered AI
Yes, correct.
I disagree with this post and with Stallman.
LLMs are AI. What people are actually confused about is what AI is and what the difference between AI and AGI is.
There is no universal definition of AI, but multiple definitions which are mostly very similar: AI is the ability of a software system to perform tasks that would typically involve human intelligence, like learning, problem solving, and decision making. Since the basic idea is that artificial intelligence imitates human intelligence, we would need a universal definition of human intelligence, which we don't have.
Since this definition is rather broad, there is an additional classification. ANI (artificial narrow intelligence), or weak AI, is an intelligence inferior to human intelligence, which operates purely rule-based and for specific, narrow use cases. This is the category that LLMs, self-driving cars, and assistants like Siri or Alexa fall into.

AGI (artificial general intelligence), or strong AI, is an intelligence equal or comparable to human intelligence, which operates autonomously, based on its perception and knowledge. It can transfer past knowledge to new situations and learn. It's a theoretical construct that we have not achieved yet, and no one knows when or if we ever will; unfortunately, it's also one of the first things people think of when AI is mentioned.

ASI (artificial super intelligence) is basically an AGI whose intelligence is superior to a human's in all aspects. It's the apex predator of all AI: better, smarter, and faster at anything than a human could ever be. Even more theoretical.
Saying LLMs are not AI is plain wrong, and if our goal is a realistic, proper way of working with AI, we shouldn't be doing the same as the tech bros.
If I'm reading correctly it sounds like you do agree with Stallman's main point that a casual distinction is needed, you just disagree on the word itself ("ANI" vs "generator").
No, I think the distinction is already made and there are words for that. Adding additional terms like "generators" or "pretend intelligence" does not help in creating clarity. In my opinion, the current definitions/classifications are enough. I get Stallman's point, and his definition of intelligence seems to be different from how I would define intelligence, which is probably the main disagreement.
I definitely would call an LLM intelligent. Even though it doesn't understand context the way a human can, it is intelligent enough to create an answer that is correct, and doing this by basically pure stochastics is pretty intelligent in my book. My car's driving assistant, even if it's not fully self-driving, is pretty damn intelligent and understands the situation I'm in: adapting speed, reading signs, reacting to what other drivers do. I definitely would call that intelligent. Is it human-like intelligence? Absolutely not. But for this specific, narrow use case it works pretty damn well.
His main point seems to be breaking the hype, but I do not think that can be achieved this way. It will not convince the tech bros or investors, and people who are simply uninformed will not understand an even more abstract concept.
In my opinion, we should educate people more on where the hype is actually coming from: NVIDIA. Personally, I hate Jensen Huang, but he's been doing a terrific job as NVIDIA's CEO, unfortunately. They've positioned themselves as the hardware supplier and infrastructure layer for the core components of AI, and are investing in and partnering with AI providers, hyperscalers, and other component suppliers in a circle of cash flow. Any investment they make, they get back multiplied, which also boosts all the related entities. The only thing that went "10x" as promised by AI is NVIDIA stock. They are currently taking capex to a whole new level.
And that's what we should be discussing more, instead of clinging to words. Every claim any company makes about AI should automatically be assumed to be a lie, especially any AI claim from a hyperscaler, AI provider, or hardware supplier, and especially-especially from NVIDIA. Every single claim they make relates directly to revenue. Every positive claim is revenue; every negative word is loss. In this circle of money they are running, we're talking about trillions of US dollars. People have done far worse for far less money.
someone said to call it "computer rendered anonymized plagiarism" so i have that in my clipboard.
This is why I call chatbots "LLMs" and refer to image and video generators as "slop generators". It isn't AI; software can't be intelligent.
It's just a word. It's more important to let people know what this is about and any terms that may be more "accurate" won't do that.
I never liked the "just a word" defense. If any word can be made to mean anything else just because a government or corporation says so, what does that say for our shared reality?
Sure, I don't disagree. But what use is any of that if people don't know what exactly you are protesting? At what point do you abandon idealism in favour of pragmatism?
I support Stallman's take. I think just saying "Fuck AI" is going to have almost zero effect on the world. I think we need to add nuance, reasoning, be accurate... Tell people WHY that is, so we can educate them. Or convince them to do something... Understand how these things work and why that's good or bad to form an opinion... "Fuck AI" alone isn't going to do any of that.
"Slop Constructors" is what I call them. It's good to remember that calling them "AI" helps with the fake hype.
Standard disclaimer: I do not want to grow up to be like Stallman.
That said, every time I have thought that Stallman was too pedantic about terminology and the risks involved, I have been wrong, so far.
I think it’s AI. The artificial part is key. There’s no real intelligence there, just like there’s no real grass in an artificial lawn.
The term "AI" has been used for decades to refer to a broad spectrum of things, oftentimes including algorithms that had nothing to do with machine learning or inference.
Technically, what most of us have a problem with isn't "AI" as a whole, but just LLMs and how companies are trying to replace people with them. I agree that people should be specific, as there's a lot of practical application for machine learning and AI that has nothing to do with LLMs.
But you're not going to get anywhere by trying to change the words people use for these things. We saw a similar thing happen with "smart" home automation devices, and before that it was people complaining about "smartphones" not being actually "smart". But both of those terms are still in common use.
I don't think you'll convince anyone by trying to police the terminology for technical accuracy. The focus should be on the specific problems and harmful effects of how the technology is being used.
Gartner refers to it as GenAI
What's the problem? Make people realise how stupid this artificial intelligence is.
Fuck LLMs
Spicy autocorrect
I like orphan crushing machine best.
It's the popular term. In the end, the meaning doesn't really matter as long as everybody agrees on what we're talking about.
Don't get too attached to the scientific meaning of things.
"AI" was around way before LLMs, and they were used for good stuff, like discovering new proteins and amino acids among many other specialized uses.
I would say there are different categories of AI and I disagree with the statement that LLMs are not AI.
All LLMs are AI, but not all AI are LLMs.
LLMs are trash.