The dream (lemmy.world)
[-] BoastfulDaedra@lemmynsfw.com 140 points 6 months ago

We really need to stop calling things "AI" like it's an algorithm. There's image recognition, collective intelligence, neural networks, path finding, and pattern recognition, sure, and they've all been called AI, but functionally they have almost nothing to do with each other.

For computer scientists this year has been a sonofabitch to communicate through.

[-] CeeBee@lemmy.world 58 points 6 months ago* (last edited 6 months ago)

But "AI" is the umbrella term for all of them. What you said is the equivalent of saying:

we really need to stop calling things "vehicles". There's cars, trucks, airplanes, submarines, and space shuttles and they've all been called vehicles, but functionally they have almost nothing to do with each other

All of the things you've mentioned are correctly referred to as AI, and since most people don't understand the nuances of neural networks vs. hard-coded algorithms (and everything in between), AI is an acceptable term for something that demonstrates results that come about from a computer "thinking" and making informed decisions.

Btw, just about every image recognition system out there is a neural network itself or has a neural network in the processing chain.
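The "neural network in the processing chain" claim is easier to picture with the smallest possible building block. Here's a minimal sketch of a single artificial neuron (the weights, bias, and function names are illustrative, not from any library):

```python
# A single artificial neuron: weighted sum of inputs plus a bias,
# passed through a step activation. Networks used in image
# recognition are many layers of these, with learned weights.

def neuron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# Hand-tuned weights that make this neuron act as an AND gate
and_weights = [1.0, 1.0]
and_bias = -1.5

print(neuron([1, 1], and_weights, and_bias))  # 1
print(neuron([1, 0], and_weights, and_bias))  # 0
```

Real systems don't hand-tune the weights, of course; training adjusts them from labeled examples, which is where the "learning" in machine learning comes in.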

[-] benignintervention@lemmy.world -1 points 6 months ago

While this is true, I think of AI in the sci fi sense of a programmed machine intelligence rivaling human problem solving, communication, and opinion forming. Everything else to me is ML.

But like Turing thought, how can we really tell the difference

[-] Deuces@lemmy.world 2 points 6 months ago

As far as taking scifi terms for real things, at least this one is somewhat close. I'm still pissed about hover boards. And Androids are right out!

[-] MotoAsh@lemmy.world 2 points 6 months ago* (last edited 6 months ago)

Turing's question wasn't a philosophical one. It was a literal one, that he tried to answer.

What the person said is NOT true. Nobody like Turing would EVER call those things AI, because they are very specifically NOT any form of "intelligence". Fooling a layman into mislabeling something is not the same as developing the actual thing that'd pass a Turing test.

[-] MotoAsh@lemmy.world -3 points 6 months ago* (last edited 6 months ago)

No. No AI is NOT the umbrella term for all of them.

No computer scientist will ever genuinely call basic algorithmic tasks "AI". Stop saying things you literally do not know.

We are not talking about what the word means to normies colloquially. We're talking about what it actually means. The entire point is that it's a separate term from those other things.

Engineers would REALLY appreciate it if marketing morons would stop misapplying terminology just to make something sound cooler... NONE of those things are "AI". That's the fucking point. Marketing gimmicks should not get to choose our terms. (as much as they still do)

If I pull up to your house on a bicycle and tell you, "quickly, get in my vehicle so I can drive us to the store," you SHOULD look at me weirdly: I'm treating a bicycle like it's a car capable of getting on the freeway with passengers.

[-] ObviouslyNotBanana@lemmy.world 7 points 6 months ago* (last edited 6 months ago)

What I've learned as a huge nerd is that people will take a term and use it as an umbrella term for shit and they're always incorrect but there's never any point in correcting the use because that's the way the collective has decided words work and it's how they will work.

Now the collective has decided that AI is an umbrella term for executing "more complex tasks" which we cannot understand the technical workings of but need to get done.

[-] MotoAsh@lemmy.world 4 points 6 months ago

Sometimes, but there are many cases where the nerds win. Like with technology. How many times do we hear old people misuse terms because they don't care about the difference just for some young person to laugh and make fun of their lack of perspective?

I've seen it quite a lot, and I have full confidence it will happen here as soon as an actual generalized intelligence comes along to show everyone the HUGE difference every nerd talks about.

[-] sukhmel@programming.dev 1 points 6 months ago

But it will be called something different so almost nobody will notice that they now should see the difference

[-] Ookami38@sh.itjust.works 1 points 6 months ago

This is in fact how common language works, and also how jargon develops. No one in this thread outside of the specific people pointing out the problem cares what it is beyond the colloquial use, keep jargon to the in group, or you'll just alienate the out-group and your entire point will be missed.

[-] TheBlackLounge@lemmy.world 6 points 6 months ago

To be fair, AI was coined to mean programs written in LISP and it changes every time new techniques are developed. It's definitely just a marketing term, but for grant money.

[-] yokonzo@lemmy.world 3 points 6 months ago

Calm down, language is fluid. You may not like it, but if enough people start using it as an umbrella term, that is what it's colloquially, and eventually officially, going to be. You can't expect to have such hard-set rules this early on in the technology; it's foolish.

[-] schmidtster@lemmy.world -4 points 6 months ago

I like how you stole my comment and I’m downvoted.

[-] Boozilla@discuss.online 2 points 6 months ago

I think you're getting downvoted because in this thread you're coming off as an angry gatekeeper type, and internet forums tend to hate that. I'm not saying you're an actual angry gatekeeper; however, that's the vibe.

There are a lot of things in language use that annoy the crap out of me, too. I could write a long boring list of eggcorns, and words that people commonly misspell or mispronounce that really trigger me. But if I did, some dude with a PhD in linguistics would tut-tut me and tell me not to be so prescriptivist (or whatever).

Anyway, my point is, I think you're right about the "old people" misuse of AI as an umbrella term. But, also be open to the common opinion that people who police language are often seen as cranky old cranks, too.

[-] lolcatnip@reddthat.com 35 points 6 months ago

I think you're fighting a losing battle.

[-] Sterile_Technique@lemmy.world 14 points 6 months ago

You're right, but so is the previous poster. Actual AI doesn't exist yet, and when/if it does it's going to confuse the hell out of people who don't get the hype over something we've had for years.

But calling things like machine learning algorithms "AI" definitely isn't going away... we'll probably just end up making a new term for it when it actually becomes a thing... "Digital Intelligence" or something. /shrug.

[-] tegs_terry@feddit.uk 10 points 6 months ago

It isn't human-level, but you could argue it's still intelligence of a sort, just ersatz.

[-] OpenStars@kbin.social 4 points 6 months ago

I dunno... I've heard that argument, but when something gives you >1000 answers, among which the correct answer might be buried somewhere, and a human is paid to dig through it and return something that looks vaguely presentable, is that really "intelligence", of any sort?

Aka, 1 + 1 = 13, which is a real kind of result that AI can, and almost certainly has, recently offered.

People are right to be excited about the potential that generative AI offers in the future, but we are far from that atm. Also it is vulnerable to misinformation presented in the training data - though some say that that process might even affect humans too (I know, you are shocked, right? well, hopefully not that shocked:-P).

Oh wait, nevermind, I take it all back: I forgot that Steven Huffman / Elon Musk / etc. exist, and if that is considered intelligence, then AI has definitely passed that level of Turing equivalence, so you're absolutely right, ersatz it is, apparently!?

[-] tegs_terry@feddit.uk 1 points 6 months ago

What's the human digging through answers thing? I haven't heard anything about that.

[-] OpenStars@kbin.social 1 points 6 months ago

ChatGPT was caught, and I think later admitted to, not actually using fully automated processes to determine those answers, iirc. Instead, a real human would curate the answers first before they went out. That human might reject answers to a question like "Computer: what is 1+1?" ten times before finally accepting one of the given answers ("you're mother", hehe with improper apostrophe intact:-P). So really, when you were asking for an "AI answer", what you were getting was another human on the other end of that conversation!!!

Then again, I think that was a feature for an earlier version of the program, that might no longer be necessary? On the other hand, if they SAY that they aren't using human curation, but that is also what they said earlier before they admitted that they had lied, do we really believe it? Watch any video of these "tech Bros" and it's obvious in less than a minute - these people are slimy.

And to some extent it doesn't matter bc you can download some open source AI programs and run them yourself, but in general from what I understand, when people say things nowadays like "this was made from an AI", it seems like it is always a hand-picked item from among the set of answers returned. So like, "oooh" and "aaaahhhhh" and all that, that such a thing could come from AI, but it's not quite the same thing as simply asking a computer for an answer and it returning the correct answer right away! "1+1=?" giving the correct answer of 13 is MUCH less impressive when you find that out of a thousand attempts at asking, it was only returned a couple times. And the situation gets even worse(-r) when you find out that ChatGPT has been getting stupider(-est?) for awhile now - https://www.defenseone.com/technology/2023/07/ai-supposed-become-smarter-over-time-chatgpt-can-become-dumber/388826/.

[-] Ookami38@sh.itjust.works 1 points 6 months ago

So reading through your post and the article, I think you're a bit confused about the "curated response" thing. I believe what they're referring to is the user's ability to give answers a "good answer" or "bad answer" flag that would then later be used for retraining. This could also explain the AI's drop in quality, if enough people are upvoting bad answers or downvoting good ones.

The article also describes "commanders" reviewing responses and having the code team be responsive to changing the algorithm. Again, this isn't picking responses for the AI. Instead, it's reviewing responses it's given and deciding if they're good or bad, and making changes to the algorithm to get more accurate answers in the future.

I have not heard anything like what you're describing, with real people generating the responses real time for gpt users. I'm open to being wrong, though, if you have another article.

[-] OpenStars@kbin.social 1 points 6 months ago

I might be guilty of misinformation here - perhaps it was a forerunner to ChatGPT, or even a different (competing) chatbot entirely, where they would read an answer from the machine before deciding whether to send it on to the end user, whereas the novelty of ChatGPT was in throwing off such shackles present in an older incarnation? I do recall a story along the lines that I mentioned, but I cannot find it now so that lends some credence to that thought. In any case it would have been multiple generations behind the modern ones, so you are correct that it is not so relevant anymore.

[-] tegs_terry@feddit.uk 1 points 6 months ago

There's no way that's the case now, the answers are generated way too quickly for a human to formulate. I can certainly believe it did happen at one point.

[-] OpenStars@kbin.social 1 points 6 months ago

Yes, and the fact that the quality suddenly declined awhile back - e.g. that article I linked to explained more - tracks along with those lines as well: when humans were curating the answers it took longer, whereas now the algorithm is unchained, hence able to move faster, and yet with far less accuracy than before.

[-] sukhmel@programming.dev 4 points 6 months ago

This problem was kinda solved by adding the term AGI, meaning "AI, but not what is now called AI; what we imagined AI to be."

Not going to say that this helps with the confusion much 😅 and to be fair, stuff like autocomplete in office software was called AI a long time ago, but it was far from the LLMs of today.

[-] Klear@sh.itjust.works 1 points 6 months ago

Enemies in Doom have AI. We've been calling simple algorithms in a handful of lines of code AI for a long time; the trend has nothing to do with language models etc.
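For illustration, the kind of "AI" a 90s shooter enemy runs is a tiny hard-coded state machine; no learning involved. The states and thresholds below are made up, not from Doom's actual source:

```python
# A toy enemy "AI": a few lines of if/else deciding the next state
# from what the enemy can currently see.

def enemy_step(state, distance_to_player, can_see_player):
    if state == "idle":
        return "chase" if can_see_player else "idle"
    if state == "chase":
        if distance_to_player < 2:
            return "attack"
        return "chase" if can_see_player else "idle"
    if state == "attack":
        return "attack" if distance_to_player < 2 else "chase"
    return "idle"

# The enemy spots the player, closes distance, then attacks
state = "idle"
for dist, seen in [(10, True), (5, True), (1, True)]:
    state = enemy_step(state, dist, seen)
print(state)  # attack
```

That this has always been called "enemy AI" is a good example of how broad the term's colloquial use has been for decades.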

[-] BoastfulDaedra@lemmynsfw.com -2 points 6 months ago

I'm not fighting, I'm just disgusted. As someone's wise grandma once said, "[BoastfulDaedra], you are not the fuckface whisperer."

[-] OpenStars@kbin.social 22 points 6 months ago

AI = "magic", or like "synergy" and other buzzwords that will soon become bereft of all meaning as a result of people abusing it.

[-] d20bard@ttrpg.network 5 points 6 months ago* (last edited 6 months ago)

Computer vision is AI. If they literally want a robot eye to scan their cluttered pantry and figure out what is there, that'll require some hefty neural net.

Edit: seeing these downvotes and surprised at the tech illiteracy on lemmy. I thought this was a better informed community. Look for computer vision papers in CVPR, IJCNN, and AAAI and try to tell me that being able to understand the 3D world isn't AI.

[-] BoastfulDaedra@lemmynsfw.com 4 points 6 months ago

You're very wrong.

Computer vision is scanning the differentials of an image and determining the statistical likelihood of two three-dimensional objects being the same base mesh from a different angle, then making a boolean decision on it. It requires a database, not a neural net, though sometimes they are used.

A neural net is a tool used to compare an input sequence to previous reinforced sequences and determine a likely ideal output sequence based on its training. It can be applied, carefully, to computer vision. It usually actually isn't to any significant extent; we were identifying faces from camera footage back in the 90s with no such element in sight. Computer vision is about differential geometry.
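The classic, non-neural approach being described can be sketched with the simplest computer-vision primitive there is: template matching by sum of squared differences (SSD). The tiny single-channel "images" below are made up for illustration:

```python
# Slide a template over an image and return the offset where the
# pixel-wise sum of squared differences is smallest. No learning,
# no neural net: just arithmetic over pixel intensities.

def ssd(patch, template):
    return sum((p - t) ** 2
               for row_p, row_t in zip(patch, template)
               for p, t in zip(row_p, row_t))

def best_match(image, template):
    th, tw = len(template), len(template[0])
    best = None
    for y in range(len(image) - th + 1):
        for x in range(len(image[0]) - tw + 1):
            patch = [row[x:x + tw] for row in image[y:y + th]]
            score = ssd(patch, template)
            if best is None or score < best[0]:
                best = (score, (y, x))
    return best[1]

image = [
    [0, 0, 0, 0],
    [0, 9, 9, 0],
    [0, 9, 9, 0],
    [0, 0, 0, 0],
]
template = [[9, 9], [9, 9]]
print(best_match(image, template))  # (1, 1)
```

Whether this kind of purely geometric/statistical pipeline counts as "AI" is exactly the disagreement in this thread; modern systems mostly replace it with learned convolutional features.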

[-] danielbln@lemmy.world 5 points 6 months ago

Computer vision deals with how computers can gain high-level understanding of images and videos. It involves much more than just object reconstruction. And more importantly, neural networks are a core component in just about any computer vision application since deep learning took off in the 2010s. Most computer vision is powered by some convolutional neural network or another.

Your comment contains several misconceptions and overlooks the critical role of neural networks, particularly CNNs, which are fundamental to most contemporary computer vision applications.

[-] d20bard@ttrpg.network 3 points 6 months ago

Thanks, you saved me the trouble of writing out a rant. I wonder if the other guy is actually a computer scientist or just a programmer who got a CS degree. Imagine attending a CV track at AAAI or the whole of CVPR and then saying CV isn't a sub field of AI.

[-] CobblerScholar@lemmy.world 3 points 6 months ago

There's whole countries that refer to the entire internet itself as Facebook, once something takes root it ain't going anywhere

[-] danielbln@lemmy.world 1 points 6 months ago

Language is fluid, and there is plenty of terminology that is dumb or imprecise to someone in the field, but A-ok to the wider populace. "Cloud" is also not actually a formation of water droplets, but someone else's datacenter, yet to some people the cloud is everything from Gmail to AWS.

If I say AI today and most people associate the same thing with it (these days that usually means generative AI, i.e. mostly diffusion or transformer models), then that's fine by me. Call it Plumbus for all I care.

[-] DarkNightoftheSoul@mander.xyz 1 points 6 months ago

Those are all very specific intelligences. The goal is to unite them all under a so-called general intelligence. You're right, that's the dream, but there are many steps along the way that are fairly called intelligence.

[-] DudeBro@lemm.ee 1 points 6 months ago

I imagine it's because all of these technologies combine to make a sci-fi-esque computer assistant that talks to you, and most pop culture depictions of AI are just computer assistants that talk to you. The language already existed before the technology, it already took root before we got the chance to call it anything else.

[-] schmidtster@lemmy.world -1 points 6 months ago

Shouldn’t there be a catch all term to explain the broader scope of the specifics?

Science is a broad term for multiple different studies, vehicle is a broad term for cars and trucks.

[-] can@sh.itjust.works 3 points 6 months ago
[-] schmidtster@lemmy.world 1 points 6 months ago

Is that not a type of AI already?

[-] TheGreenGolem@lemmy.dbzer0.com 0 points 6 months ago

Glorified chatbots. Tops. But definitely not something with any kind of intelligence.

[-] ParetoOptimalDev@lemmy.today 2 points 6 months ago* (last edited 6 months ago)

Yesterday I prompted GPT-4 to convert a PowerShell script to Haskell. It did it in one shot. This happens more and more frequently for me.

I don't want to oversell llms, but you are definitely underselling them.

[-] MotoAsh@lemmy.world 0 points 6 months ago
[-] schmidtster@lemmy.world 1 points 6 months ago

So people think of programming instead?

this post was submitted on 25 Dec 2023
1903 points (97.9% liked)

People Twitter
