864 points · submitted 2 months ago by grid11@lemy.nl to c/technology@lemmy.world
[-] iAvicenna@lemmy.world 48 points 2 months ago* (last edited 2 months ago)

FOMO is the best explanation of this psychosis, and then of course denial by people who became heavily invested in it. Stuff like LLMs or ConvNets (and the like) can already be used to do some pretty amazing stuff that we could not do a decade ago; there is really no need to shit rainbows and puke glitter all over it. I am also not against exploring and pushing the boundaries, but when you explore a boundary while pretending you have already crossed it, that is how you get bubbles. And this all boils down to appeasing some cancerous billionaire shareholders so they funnel some money into your pockets.

[-] AngryCommieKender@lemmy.world 7 points 2 months ago

there is really no need to shit rainbows and puke glitter all over it

I'm now picturing the unicorn from the Squatty Potty commercial, with violent diarrhea and vomiting.

[-] utopiah@lemmy.world 5 points 2 months ago

Stuff like LLMs or ConvNets (and the likes) can already be used to do some pretty amazing stuff that we could not do a decade ago, there is really no need to shit rainbows and puke glitter all over it.

I'm shitting rainbows and puking glitter on a daily basis, BUT it's not against AI as a field and it's not against AI research; rather, it's against:

  • catastrophism and fear, even eschatology, used as a marketing tactic
  • open systems and research that become closed
  • trying to lock a market with legislation
  • people who use a model, especially a model they don't even have (e.g. using a proprietary API), and claim they are an AI startup
  • C-level decisions that anything must now include AI
  • claims that this or that skill will soon be replaced by AI, with no actual proof of it
  • meaningless test results with grand claims like "passing the bar exam" used as marketing tactics
  • claims that it scales, that it "just needs more data", not for a 0.1% improvement but for radical change, e.g. emergent learning
  • for-profit companies (different from public research) scraping datasets without paying anything back to the actual creators
  • ignoring or lying about non-renewable resource consumption for both training and inference
  • relying on "free" or loss-leader strategies to dominate a market
  • claiming to be doing the work for the good of humanity, then signing an exclusive partnership with a corporation already fined for monopolistic practices

I'm sure I'm forgetting a few, but basically none of those criticisms are technical. None of them is about the current progress made. Rather, they are about business practices.
