this post was submitted on 22 Jan 2025
442 points (98.5% liked)

Technology


The rapid spread of artificial intelligence has people wondering: who’s most likely to embrace AI in their daily lives? Many assume it’s the tech-savvy – those who understand how AI works – who are most eager to adopt it.

Surprisingly, our new research (published in the Journal of Marketing) finds the opposite. People with less knowledge about AI are actually more open to using the technology. We call this difference in adoption propensity the “lower literacy-higher receptivity” link.

top 50 comments
[–] venusaur@lemmy.world 72 points 1 week ago* (last edited 1 week ago) (2 children)

I think this is true for a lot of things. iPhones, Nike, Spam

[–] ogmios@sh.itjust.works 24 points 1 week ago (2 children)

The more I've learned about technology, the more hardline I've become against having it in my life.

The world is not a blank slate to paint on. Every new thing that you add to your life takes away something which used to be there in previous generations, and the consequences of such can be far reaching and unpredictable. Society as it was, was not built overnight through deliberate intention, but was hard won by millennia of blood, sweat and tears. Changing everything now on the whims of fully grown toddlers who are so wealthy that they've never even been aware of the existence of the real world is the peak of insanity.

[–] taladar@sh.itjust.works 30 points 1 week ago (1 children)

Neither the position to keep all the old solutions because they are old nor to adopt all the new solutions because they are new is sensible.

Some old solutions worked in the past and don't work anymore because the world around us actually changed (the bits outside our control: some resources that were plentiful in the past are scarcer now, human populations are larger, the world is more interconnected, ...).

Some old solutions appeared to work in the past because we didn't have the knowledge about their flaws yet but now that we do we need new ones.

Some new solutions are genuine improvements, others are merely sold by marketing and hype.

Some new solutions have studies, data or even logic and math backing them up while others are adopted on a whim or even contrary to evidence or logic.

We cannot escape the fact that the world is complex and requires evaluation on a case-by-case basis; simplistic positions like "keep everything old" or "replace everything old" do not work.

[–] ogmios@sh.itjust.works 6 points 1 week ago

Neither the position to keep all the old solutions because they are old nor to adopt all the new solutions because they are new is sensible.

That's what really bothers me about it. I actually got an education in STEM and was really hyped to contribute to building new technologies, until I came to understand that the people leading the charge appear to be hardliners driving as forcefully as they can to implement a completely artificial world right here and now.

[–] dragonfucker@lemmy.nz 6 points 1 week ago (1 children)

The more I've learned about technology, the more hardline I've become against having it in my life.

Eventually you'll decide pottery, clothing, and agriculture need to go

[–] ogmios@sh.itjust.works 1 points 1 week ago (6 children)

They're already attacking agriculture for the existential threat of cow farts.

[–] jaybone@lemmy.world 45 points 1 week ago (2 children)

“Surprisingly”? This should be a surprise to no one who is paying any kind of attention to any online communities where techy people post.

[–] Randelung@lemmy.world 13 points 1 week ago

Hey, buy my new CoinCoin! No, don't research what it is, just buy it!

[–] samus12345@lemm.ee 1 points 1 week ago

Most people do not pay attention to them.

[–] daniskarma@lemmy.dbzer0.com 38 points 1 week ago (4 children)

I'm tech savvy and I use AI daily.

Probably not the AI you're thinking of, as it's not LLMs or image generation.

But I have a self-hosted security system using Frigate, which uses AI models for image recognition.
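For anyone curious, a setup like that is mostly configuration rather than code. A minimal Frigate config sketch might look roughly like this (the camera name, RTSP URL, and detector choice are placeholders; check the Frigate docs for your version):

```yaml
mqtt:
  enabled: false

detectors:
  # A Coral edge TPU keeps inference off the CPU; Frigate also supports CPU detectors
  coral:
    type: edgetpu
    device: usb

cameras:
  front_door:   # placeholder camera name
    ffmpeg:
      inputs:
        - path: rtsp://user:pass@192.0.2.10:554/stream   # placeholder RTSP URL
          roles:
            - detect
    detect:
      width: 1280
      height: 720
      fps: 5
    objects:
      track:
        - person
        - car
```

The point is that the AI model only flags tracked objects; it never talks to you.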

[–] thisbenzingring 15 points 1 week ago* (last edited 1 week ago) (1 children)

I am a system admin and one of our appliances is an HPE Alletra. The AI in it is awesome and it never tries to interact with me. This is what I want. Just do your fucking job, AI; I don't want you to pretend to be a person.

[–] Feathercrown@lemmy.world 2 points 1 week ago

They will come for you first /s

[–] Jax@sh.itjust.works 12 points 1 week ago (1 children)

So you're tech savvy and you use AI as it should be - like a tool. Not a magic genie that will spit out code for you.

[–] Hackworth@lemmy.world 7 points 1 week ago

As a djinn, I don't appreciate this anti-genie rhetoric.

[–] Naia@lemmy.blahaj.zone 8 points 1 week ago

Even using LLMs isn't an issue; it's just another tool. I've been messing around with local stuff, and while you certainly have to use it knowing its limitations, it can help with certain things, even if it's just parsing data or rephrasing text.
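As a sketch of the kind of local plumbing I mean (the model name and endpoint assume an Ollama-style local runner; they're placeholders, not a specific setup):

```python
import json

def build_prompt(task: str, text: str) -> str:
    """Wrap the input in a narrow, explicit instruction; small local
    models behave much better with a tightly constrained task."""
    return (
        "You are a plain text utility.\n"
        f"Task: {task}\n"
        "Respond with only the result, no commentary.\n"
        "---\n"
        f"{text}"
    )

# Request payload for an Ollama-style local runner (model name is a
# placeholder); you would POST this to http://localhost:11434/api/generate
payload = {
    "model": "llama3.1:8b",
    "prompt": build_prompt("rephrase this more formally",
                           "gotta move the meeting, sorry"),
    "stream": False,
}

print(json.dumps(payload, indent=2))
```

Nothing leaves your machine, and the narrow prompt is what keeps a small model useful.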

The issue with neural nets is that while it theoretically can do "anything", it can't actually do everything.

And it's the same with a lot of tools like this: people don't understand the limitations or flaws, and corporations want to use it to replace workers.

There's also the tech bros who feel that creative works can be generated completely by AI because like AI they don't understand art or storytelling.

But we also have others who don't understand what AI is and how broad it is, thinking it's only LLMs and other neural nets that are just used to produce garbage.

[–] Feathercrown@lemmy.world 3 points 1 week ago

Image recognition has gotten crazy good

[–] cupcakezealot@lemmy.blahaj.zone 32 points 1 week ago

People susceptible to marketing gimmicks more likely to want marketing gimmick.

[–] M33 27 points 1 week ago (1 children)

"Ignorance is bliss"

  • Cypher
[–] CosmoNova@lemmy.world 24 points 1 week ago (1 children)

How exactly is this a surprise to anyone when the same applied to crypto and NFTs already? AI and blockchain technologies are useful to experts in tiny niches so far but that’s not the usual tech savvy user. For the end user it’s just a toy with little use cases.

[–] Feathercrown@lemmy.world 3 points 1 week ago

AI is much more broadly applicable than Blockchain could ever be, although somehow it's still being pushed more than it should be.

[–] badbytes@lemmy.world 22 points 1 week ago (1 children)

Same is true about hotdogs.

[–] Kolanaki@yiffit.net 13 points 1 week ago

I specifically go out of my way to eat more hotdogs knowing they are 60% pig anus.

[–] affiliate@lemmy.world 16 points 1 week ago (3 children)

i think we give silicon valley too much linguistic power. there should really be more pushback on them rebranding LLMs as AI. it’s just a bunch of marketing nonsense that we’re letting them get away with.

(i know that LLMs are studied in the field of computer science that’s known as artificial intelligence, but i really don’t think that subtlety is properly communicated to the general public.)

[–] btaf45@lemmy.world 5 points 1 week ago* (last edited 1 week ago)

there should really be more pushback on them rebranding LLMs as AI.

Those would be AI though wouldn't they?

The pushback I would like to see is the rush of companies to rebrand ordinary computer programs as "AI".

[–] UnderpantsWeevil@lemmy.world 4 points 1 week ago* (last edited 1 week ago)

there should really be more pushback on them rebranding LLMs as AI.

That's because the target of the language is the know-nothing speculative investor class. The distinction doesn't matter to us because we're not being sold a service, we're being packaged as a product.

The increasingly-impossible-to-opt-out-of nature of LLMs/AI illustrates as much. We're getting force-fed a "free" service that's fundamentally worse than what came before it, because it's an extractive service.

[–] Feathercrown@lemmy.world 4 points 1 week ago* (last edited 1 week ago) (1 children)

I actually think in this case it's the opposite-- your expectations of the term "AI" aren't accurate to the actual research and industry usage. Now, if we want to talk about what people have been trying to pass off as "AGI"...

[–] affiliate@lemmy.world 2 points 1 week ago (1 children)

i think that’s a fair point. language does work both ways, and i am certainly not in the majority with this opinion. but what bothers me is that it feels like they’re changing the definition of the word and piggybacking off of its old meaning. i know this kind of thing isn’t all that uncommon, but it still rubs me the wrong way.

[–] Feathercrown@lemmy.world 2 points 1 week ago* (last edited 1 week ago)

I mean, we've been calling pathfinding + aimbot "AI" in games for years. The terminology certainly does feel different nowadays though...

[–] rumba@lemmy.zip 14 points 1 week ago (1 children)

I suspect it's truly more of a Dunning-Kruger situation. When you know nothing, you're down to use it for everything. When you start to understand the problems, limits and morality of it, you start to back off some. And as you approach the ability to host it yourself and do actual work with it, you fully welcome the useful bits into your workflow.

[–] ifItWasUpToMe@lemmy.ca 2 points 1 week ago

This is honestly my exact experience. I’m far from an expert, but it’s great with document templates and code snippets.

[–] MonkderVierte@lemmy.ml 13 points 1 week ago

What form of AI are we talking about? Because most of the ones exposed to the public are glorified toys with shady business models, while tools like AlphaFold are pretty useful.

[–] Petter1@lemm.ee 8 points 1 week ago (1 children)

At the state of AI today, it helps noobs get to an average level, but it doesn’t help average users become pros.

[–] wondrous_strange@lemmy.world 3 points 1 week ago (2 children)

The real question, in my opinion, is how a pro truly benefits from it, other than as a different type of search engine.

[–] Petter1@lemm.ee 2 points 1 week ago (4 children)

Yeah, if you are a pro in something, most of the time it only tells you what you already know. (I sometimes use it as a sort of sanity check, by writing prompts where I think I already know what the output will be.)

[–] FinishingDutch@lemmy.world 7 points 1 week ago* (last edited 1 week ago) (1 children)

That tracks for sure. The most enthusiastic guys at work also happen to be the ones who put in the least actual work. Sure, it has some uses… but the things it gets wrong are significant enough that no sane individual should rely on anything that AI is involved with making/running. The intelligence part just isn’t there yet. People are effectively getting wowed by a glorified ELIZA chat bot.

[–] UnderpantsWeevil@lemmy.world 8 points 1 week ago* (last edited 1 week ago)

the things it gets wrong are significant enough that no sane individual should rely on anything that AI is involved with making/running

The fundamental use-cases for AI are almost never customer oriented, either. You don't see these tools deployed to reduce wait times or improve authentication or approve access, because the people who deploy them don't actually trust them for positive client interactions. What you see them doing is robo-calls, front-line customer service, claims denials, and (in the bleakest use cases) military targeting operations. Instances where efficiencies of scale accrue to the operator and errors rebound to the target of the service rather than the vendor.

People are effectively getting wowed by a glorified ELIZA chat bot.

An ELIZA chatbot that double-processes your credit card and then keeps denying you a refund when you manually catch and report it.

[–] werefreeatlast@lemmy.world 6 points 1 week ago

It's like mice and traps. The stupid mice get the most bendy necks as the trap slams shut on them at high speed.

[–] emptiestplace@lemmy.ml 6 points 1 week ago

I am skeptical that the people they put in the "understands AI" bucket have even a bit of a clue.
