It’s fun to say that artificial intelligence is fake and sucks — but evidence is mounting that it’s real and dangerous

[-] conciselyverbose@sh.itjust.works 32 points 2 weeks ago

In aggregate, though, and on average, they’re usually right. It’s not impossible that the tech industry’s planned quarter-trillion dollars of spending on infrastructure to support AI next year will never pay off. But it is a signal that they have already seen something real.

The market is incredibly irrational and massive bubbles happen all the time.

Pointing to the number of users when all the search engines are forcibly injecting it into every search (and hemorrhaging money to do it)? Just as dumb.

[-] macattack@lemmy.world 1 points 2 weeks ago

Any thoughts on the paragraph following your excerpt:

The most persuasive way you can demonstrate the reality of AI, though, is to describe how it is already being used today. Not in speculative sci-fi scenarios, but in everyday offices and laboratories and schoolrooms. And not in the ways that you already know — cheating on homework, drawing bad art, polluting the web — but in ones that feel surprising and new.

With that in mind, here are some things that AI has done in 2024.

[-] WhyJiffie@sh.itjust.works 5 points 2 weeks ago
[-] macattack@lemmy.world 2 points 2 weeks ago

The author of the article did. It's a bit of a stretch, as are the last 2-3 items on the list 🤷🏾‍♂️. The first few are still pretty big.

[-] conciselyverbose@sh.itjust.works 5 points 2 weeks ago

Mostly hyping up very simple things?

LLMs don't add anything over actively scanning for a handful of basic rules plus link scanning. Flagging anything that references a bank but doesn't link to a whitelist of legitimate bank domains for a given country would likely be more effective.
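For what it's worth, the kind of rule-based check I mean fits in a few lines of Python. A rough sketch, with a made-up keyword list and a hypothetical domain whitelist purely for illustration:

```python
import re

# Hypothetical whitelist of legitimate bank domains for one country
# (illustrative only, not a real list).
LEGIT_BANK_DOMAINS = {"chase.com", "bankofamerica.com", "wellsfargo.com"}

# Example banking-related keywords; a real filter would tune these.
BANK_KEYWORDS = re.compile(r"\b(bank|account|wire transfer|routing number)\b", re.I)
LINK_PATTERN = re.compile(r"https?://([^/\s]+)", re.I)


def looks_like_bank_scam(text: str) -> bool:
    """Flag messages that use banking language but link to a domain
    that isn't on the whitelist."""
    if not BANK_KEYWORDS.search(text):
        return False
    for domain in LINK_PATTERN.findall(text):
        domain = domain.lower().removeprefix("www.")
        if not any(domain == d or domain.endswith("." + d) for d in LEGIT_BANK_DOMAINS):
            return True  # banking language + unrecognized link -> flag it
    return False


print(looks_like_bank_scam("Your bank account is locked, verify at http://chase-secure.example.top"))  # True
print(looks_like_bank_scam("Your bank statement is ready at https://www.chase.com/login"))  # False
```

No model, no per-query inference cost, and it's trivially auditable.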

The language stuff is the only parts they're actually good at.

Chatbots are genuine dogshit, PDF to podcast is genuine dogshit, poetry is genuine dogshit.

[-] macattack@lemmy.world 1 points 2 weeks ago

Respectfully, none of the aforementioned examples are simple, or else humans wouldn't have needed to leverage AI to make such substantial progress in less than 2 years.

[-] nyan@lemmy.cafe 5 points 2 weeks ago

They are simple, but they are not easy. Sorting M&Ms according to colour is also a simple task for any human with normal colour vision, but doing it with an Olympic-sized swimming pool full of M&Ms is not easy.

Computers are very good at examining data for patterns, and doing so in exhaustive detail. LLMs can detect patterns of types not visible to previous algorithms (and sometimes screw up royally and detect patterns that aren't there, or that we want to get rid of even if they exist). That doesn't make LLMs intelligent, it just makes them good tools for certain purposes. Nearly all of your examples are just applying a pattern that the algorithm has discerned—in bank records, in natural language, in sound samples, or whatever.

As for people being fooled by chatbots, that's been happening for more than fifty years. The 'bot can be exceedingly primitive, and some people will still believe it's a person because they want to believe. The fewer obvious mistakes the 'bot makes, the more lonely and vulnerable people will be willing to suspend their disbelief.

[-] macattack@lemmy.world 2 points 2 weeks ago

Do you have an example of human intelligence that doesn't rely on pattern recognition through previous experience?

[-] conciselyverbose@sh.itjust.works 4 points 2 weeks ago* (last edited 2 weeks ago)

None of the ones that actually work resemble intelligence. They're basic language skills from a tool that has no path to anything resembling intelligence. There's plenty you can do algorithmically if you're willing to lose a lot of money on every individual use.

And again, several of them are egregious lies about shit that is actually worse than nothing.

[-] macattack@lemmy.world -1 points 2 weeks ago

At what point do you think that your opinion on AI trumps the papers and studies of researchers in those fields?

[-] conciselyverbose@sh.itjust.works 4 points 2 weeks ago* (last edited 2 weeks ago)

Actual researchers aren't the ones lying about LLMs. It's exclusively corporate people, and people who have left research for corporate paychecks, playing make-believe that LLMs resemble intelligence.

That said, the academic research space is also a giant mess and you should also take even peer reviewed papers with a grain of salt, because many can't be replicated and there is a good deal of actual fraud.

[-] vk6flab@lemmy.radio 17 points 2 weeks ago

I don't believe that this is the path to actual AI, but not for any of the reasons stated in the article.

The level of energy consumption alone is eye-watering and unsustainable. A human can eat a banana and function for a while; in contrast, current AI offerings now require dedicated power plants.

[-] conciselyverbose@sh.itjust.works 8 points 2 weeks ago

lol the entire hope is basically "infinite scaling" despite being way past diminishing returns multiple orders of magnitude ago.

[-] sentient_loom@sh.itjust.works 12 points 2 weeks ago

It's real and it's dangerous, but it's also fake and it sucks.

[-] cheese_greater@lemmy.world 9 points 2 weeks ago* (last edited 2 weeks ago)

I honestly doubt I would ever pay for this shit. I'll use it, fine, but I've noticed actual serious, problematic "hallucinations" that shocked the hell out of me, to the point I think it has a hopeless signal/noise problem and could never be consistently accurate and trusted.

[-] sentient_loom@sh.itjust.works 4 points 2 weeks ago

I've had two useful applications of "AI".

One is using it to explain programming frameworks, libraries, and language features. In these cases it's sometimes wrong or outdated, but it's easy to test and check whether it's right. Extremely valuable in this case! It basically just sums up what everybody has already said, so it's easier and more on-point than doing a Google search.

The other is writing prompts and getting it to make insane videos. In this case all I want is the hallucinations! It makes some stupid insane stuff. But the novelty wears off quick and I just don't care any more.

[-] cheese_greater@lemmy.world 2 points 2 weeks ago* (last edited 2 weeks ago)

I will say the coding shit is good stuff, ironically. But I would still have to run the code and make sure it's sound. In terms of anything citation-wise, though, it's completely sus af.

It has straight-up made up damn citations, the kind I could have come up with to escape interrogation during a panned 4th-grade presentation to a skeptical audience.

[-] sentient_loom@sh.itjust.works 2 points 2 weeks ago

But I would still have to run the code and make sure it's sound.

Oh I don't get it to write code for me. I just get it to explain stuff.

[-] macattack@lemmy.world 2 points 2 weeks ago

I've been using AI to troubleshoot/learn after switching from Windows -> Linux 1.5 years ago. It has given me very poor advice occasionally, but it has taught me a lot more valuable info. This is not dissimilar to my experience following tutorials on the internet...

I honestly doubt I would ever pay for this shit.

I understand your perspective. Personally, I think there's a chicken/egg situation where the free AI versions are a subpar representation that makes skeptics view AI as a whole as over-hyped. OTOH, the people who use the better models experience the benefits first hand, but are seen as AI zealots who are having the wool pulled over their eyes.

[-] hendrik@palaver.p3x.de 6 points 2 weeks ago* (last edited 2 weeks ago)

At the moment, no one knows for sure whether the large language models that are now under development will achieve superintelligence and transform the world.

I think that's pretty much settled by now. Yes, it will transform the world. And no, the current LLMs won't ever achieve superintelligence. They have some severe limitations by design. And even worse, we're already putting more and more data and compute into training for less and less gain. It seems we could approach a limit soon. I'd say it's ruled out that the current approach will extend to human-level or even superintelligence territory.

[-] macattack@lemmy.world 3 points 2 weeks ago

Is super-intelligence smarter than all humans? I think where we stand now, LLMs are already smarter than the average human while lagging behind experts w/ specialized knowledge, no?

Source: https://trackingai.org/IQ

[-] echodot@feddit.uk 2 points 2 weeks ago

Isn't superintelligence more the ability to think so far beyond human limitations that it might as well be magic? The classic example being inventing a faster-than-light drive.

Simply being very intelligent makes it more of an expert system than a superintelligence.

[-] hendrik@palaver.p3x.de 1 points 2 weeks ago* (last edited 2 weeks ago)

I think superintelligence means smarter than the (single) most intelligent human.

I've read these claims, but I'm not convinced. I tested all the ChatGPTs etc., let them write emails for me, summarize, program some software... It's way faster at generating text/images than me, but I'm sure I'm 40 IQ points more intelligent. Plus, what it can do at all is pretty narrow. ChatGPT can't even make me a sandwich or bring coffee. Et cetera. So any comparison with a human has to be on a very small set of tasks anyway, for AI to compete at all.

[-] echodot@feddit.uk 2 points 2 weeks ago

ChatGPT can't even make me a sandwich or bring coffee

Well it doesn't have physical access to reality

[-] hendrik@palaver.p3x.de 1 points 2 weeks ago* (last edited 2 weeks ago)

it doesn't have physical access to reality

Which is a severe limitation, isn't it? First of all, it can't do 99% of what I can do. But I'd also attribute things like being handy to intelligence. And it can't be handy, since it has no hands. Same for sports/athletics, or driving a race car, which is at least a learned skill. And it has no sense of time passing, or of which hand movements are part of a process it has read about (operating a coffee machine). So I'd argue it's some kind of "book-smart", but not smart in the same way as someone who has actually experienced something.

It's a bit philosophical. But I'm not sure about distinguishing intelligence from being skillful. If it's enough to have theoretical knowledge, without the ability to apply it... wouldn't an encyclopedia or Wikipedia also be superintelligent? I mean, they sure store a lot of knowledge, they just can't do anything with it, since they're a book or a website... So I'd say intelligence has something to do with applying things, which ChatGPT can't do in a lot of ways.

Ultimately I think this all goes together. But I think it's currently debated whether you need a body to become intelligent or sentient or anything. I just think intelligence isn't a very useful concept if you don't need to be able to apply it to tasks. But I'm sure we'll get to see the merge of robotics and AI in the next years/decades. And that'll make this intelligence less narrow.

[-] droopy4096@lemmy.ca 4 points 2 weeks ago

The most dangerous assumption either camp is making is that AI is an end-solution, when in fact it's just a tool. Like the steam engines we invented, it can do a lot more than humans can, but it's only ever useful as a tool that humans use. Same here: AI can have value as a tool to digest large chunks of data and produce some form of analysis, providing humans with "another datapoint", but it's ultimately up to humans to make the decision based on the available data.

[-] just_another_person@lemmy.world 1 points 2 weeks ago

It's the latest product that everyone will refuse to pay real money for once they figure out how useless and stupid it really is. Same bullshit bubble, new cycle.
