You're using AI to mean AGI and LLMs to mean AI. That's on you though, everyone else knows what we're talking about.
Words have meanings. Marketing morons are not linguists.
https://www.merriam-webster.com/dictionary/artificial%20intelligence
As someone who still says a kilobyte is 1024 bytes, I agree with your sentiment.
Amen. Kibibytes my ass ;)
Words might have meanings, but "AI" has been used by researchers to refer to toy neural networks for longer than most people on Lemmy have been alive.
This insistence that AI must refer to human-type intelligence is also such a weird distortion of language. Intelligence has never been a binary, human-level indicator. When people say that a dog is intelligent, or that an ant hive shows signs of intelligence, they don't mean it can do what a human can. Why should AI be any different?
You honestly don't seem to understand. This is not about the extent of intelligence. This is about actual understanding: being able to classify a logical problem or a thought into concepts, and to process it based on the properties of those concepts and their relations to other concepts. Deep learning, as impressive as the results may appear, is not that. You just throw training data at a few billion "switches" and flip switches until you get close enough to a desired result, without being able to predict what the outcome will be if a tiny change happens in the input data.
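To make the "switch flipping" picture concrete, here is a minimal sketch in Python - a toy random hill climb on a single weight. Real training uses gradients over billions of parameters, but the spirit is the same: the process fits outputs to data without forming any explicit concepts along the way.

```python
import random

# Toy "switch flipping": nudge a weight at random and keep any change
# that reduces the error on the training data.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs x, targets y = 2x

def error(w: float) -> float:
    return sum((w * x - y) ** 2 for x, y in data)

w = random.uniform(-1.0, 1.0)
for _ in range(10_000):
    candidate = w + random.uniform(-0.1, 0.1)
    if error(candidate) < error(w):
        w = candidate  # keep the flip that got us closer to the target

print(f"learned w = {w:.3f}")  # ends up near 2.0, with no "understanding" of why
```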
I mean that's a problem, but it's distinct from the word "intelligence".
An intelligent dog can't classify a logic problem either, but we're still happy to call them intelligent.
With regards to the dog & my description of intelligence, you are wrong: Based on all that we know and observe, a dog (any animal, really) understands concepts and causal relations to varying degrees. That's true intelligence.
As for artificial intelligence: even the most basic software can have some kind of limited understanding that actually fits this attempt at a definition - it's just that the functionality will be so limited that it appears pretty much useless.
Think of it this way:

- Deterministic algorithm -> has concepts and causal relations (but no consciousness, obviously); results are predictable (deterministic) and can be explained.
- Deep learning / neural networks -> do not implicitly have concepts or causal relations; results are statistical (based on previous result observations) and cannot be explained - there's actually a whole sector of science looking into how to model such systems' way to a solution.

Addition: the inputs / outputs of pattern recognition systems are typically fed through quasi-deterministic filter algorithms to "smoothen" the results (make output more grammatically correct, filter words, translate languages).
If you took enough deterministic algorithms, typically tailored to very specific problems & their solutions, and were able to use those as building blocks for a larger system that is able to understand a larger part of the environment, then you would get something resembling AI. Such a system could be tested (verified) on sample data, but it should not require training on data.
Example: You could program image recognition using math to find certain shapes, which in turn - together with colour ranges and/or contrasts - could be used to associate object types, for which causal relations can be defined, upon which other parts of an AI could then base decision processes. This process has potential for error, but in a similar way to how humans can mischaracterize the things we see - we also sometimes do not recognize an object correctly.
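A minimal sketch of what such a deterministic, explainable building block could look like - the features, thresholds, and labels below are invented for illustration, not a real vision pipeline:

```python
# Deterministic classifier in the spirit described above: explicit
# concepts (corner count, dominant hue) and explicit causal rules,
# so every decision can be traced. All categories and thresholds
# here are hypothetical.
def classify(num_corners: int, dominant_hue: float) -> str:
    """Map geometric and colour features to an object type via fixed rules."""
    if num_corners == 0 and dominant_hue < 0.1:          # round, red-ish
        return "ball"
    if num_corners == 3 and 0.1 <= dominant_hue < 0.2:   # triangular, orange-ish
        return "warning sign"
    if num_corners == 4 and 0.3 <= dominant_hue < 0.5:   # rectangular, green-ish
        return "road sign"
    return "unknown"  # misclassification is possible, much as with human vision

# The output is fully explainable: "it was called a ball because it had
# no corners and a red hue" - something a deep network cannot offer.
print(classify(0, 0.05))  # -> "ball"
```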
I've given up trying to enforce the traditional definitions of "moot", "to beg the question", "nonplussed", and "literally", and it's helped my mental health. A little. I suggest you do the same; it's a losing battle, and the only person who gets hurt is you.
OP is an idiot though, hope we can agree on that one.
Telling everyone else how they should use language is just an ultimately moronic move. After all, we're not French; we don't have a central authority for how language works.
There's a difference between objecting to misuse of language and "telling everyone how they should use language" - you may not have intended it, but you used a straw man argument there.
What we all should be acutely aware of (but unfortunately many are not) is how language is used to harm humans, animals or our planet.
Fascists use language to create "outgroups", which they then proceed to dehumanize and eventually violate or murder. Capitalists speak about investor risks to justify return on investment, and proceed to lobby for de-regulation of markets, which causes human and animal suffering through price gouging and factory farming of livestock. Tech corporations speak about "Artificial Intelligence" and proceed to persuade regulators that - because these are "intelligent" systems - this software may be used in autonomous systems, which then cause injury and death when they malfunction.
Yes, all such harm can be caused by individuals in daily life - individuals can murder or extort people over something they really need, and a drunk driver can cause an accident that kills people. However, language that normalizes or facilitates such atrocities or dangers on a large scale is dangerous, and therefore I will continue calling out those who want to label the shitty penny-market LLMs and other deep learning systems as "AI".
Nobody has yet met this challenge:
Anyone who claims LLMs aren’t AGI should present a text processing task an AGI could accomplish that an LLM cannot.
Or if you disagree with my
Oops accidentally submitted. If someone disagrees with this as a fair challenge, let me know why.
I've been presenting this challenge repeatedly, and in my experience it very quickly leads to the realization that nobody - especially not the experts - has a precise definition of AGI.
https://arxiv.org/abs/2303.12712 has a good take on this question
While they are amazingly effective at many problems we throw at them, I'm not convinced that they're generally intelligent. What I do know is that in their current form, they are not tractable systems for anything but relatively small problems since compute and memory costs increase quadratically with the number of steps.
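A back-of-the-envelope sketch of that quadratic growth: standard transformer self-attention compares every token with every other token, so the score matrix alone has n × n entries per head per layer. The token counts below are illustrative.

```python
# Vanilla self-attention cost grows with the square of the context size:
# every token attends to every other token.
def attention_scores(n_tokens: int) -> int:
    return n_tokens * n_tokens  # entries in one attention score matrix

for n in (1_000, 10_000, 100_000):
    print(f"{n:>7} tokens -> {attention_scores(n):>16,} scores")
# 10x more tokens -> 100x more scores (and similar growth in FLOPs and
# memory), which is why long multi-step problems get expensive quickly.
```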
"Write an essay on the rise of ai and fact check it."
"Write a verifiable proof of the four colour problem"
"If p=np write a python program demonstrating this, else give me a high-level explanation why it is not true."