Other platforms too, but I'm on Lemmy. I'm mainly talking about LLMs in this post.

First, let me acknowledge that AI is not perfect; it has limitations, e.g.:

  • tendency to hallucinate responses instead of refusing or saying it doesn't know
  • different models/model sizes with varying capabilities
  • lack of knowledge of recent topics unless it explicitly searches for them
  • tendency to be formulaic/repetitive
  • inability to hold on to too much context at a time (see the sketch after this list), etc.
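
On that last point, the practical issue is that everything has to fit inside a fixed token window, so older conversation turns get dropped or summarized. A minimal illustrative sketch; the token budget and the words-to-tokens ratio here are made-up round numbers, not any particular model's real limits:

```python
# Rough illustration of the fixed-context problem: chat history has to fit inside a
# token budget, so older turns get dropped (or summarized) once the window fills up.
# ASSUMPTIONS: the 8000-token budget and the 1.3 tokens-per-word ratio are made up.

def trim_history(messages: list[str], budget_tokens: int = 8000) -> list[str]:
    """Keep only the most recent messages that fit within the token budget."""
    kept, used = [], 0
    for msg in reversed(messages):                  # walk from newest to oldest
        est_tokens = int(len(msg.split()) * 1.3)    # crude token estimate
        if used + est_tokens > budget_tokens:
            break                                   # everything older is dropped
        kept.append(msg)
        used += est_tokens
    return list(reversed(kept))                     # back to chronological order

history = [f"turn {i}: " + "lorem ipsum " * 500 for i in range(20)]
print(f"{len(trim_history(history))} of {len(history)} turns fit in the window")
```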

The following are also true:

  • People often overhype LLMs without understanding their limitations
  • Many of those people are the ones with the money
  • The term "AI" has been used to label everything under the sun that contains an algorithm of some sort
  • Banana poopy banana (just to make sure ppl are reading this)
  • There have been a number of companies that overpromised on AI and often used humans as a "temporary" solution until they figured out the AI, which they never did (hence the gag that "AI" stands for "An Indian")

But I really don't think they're nearly as bad as most Lemmy users make them out to be. I was going to respond to all the takes, but there are so many that I'll just make some general points:

  • SOTA (state-of-the-art) models match or beat most humans, short of domain experts, in most fields that are measurable
  • I personally find AI is better than me in most fields except the ones I know well. So maybe it's only 80-90% of the way there, but it's there in essentially every field, whereas I am in maybe 1-2
  • LLMs can also do all this in something like 100 languages. You and I can do it in... 1, with limited performance in a couple of others
  • Companies often use smaller/cheaper models in various products (e.g. Google Search), which are understandably much worse. People then try these and conclude that all AI sucks
  • LLMs aren't just memorizing their training data. They can reason, as recent reasoning models show more clearly. Also, we now have near-frontier models that are around 32B parameters, or about 21 GB in size. You cannot fit the entire internet in 21 GB; there is clearly higher-level synthesizing going on (back-of-envelope sketch after this list)
  • People often seize on superficial questions like the strawberry question to claim LLMs are dumb, but counting letters probes the tokenizer (the model sees subword tokens, not individual letters), so it's essentially an LLM blind spot rather than evidence about reasoning (see the tokenizer sketch after this list)
  • In the past few years, researchers have had to come up with countless newer, harder benchmarks because LLMs kept blowing through the previous ones (partial list here: https://r0bk.github.io/killedbyllm/)
  • People and AI are often not compared fairly. With code, for instance, people usually compare a human who gets compiler feedback, works iteratively, and debugs for hours against an LLM doing it in one go with no feedback, beyond maybe a couple of back-and-forths in a chat
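
To put the size argument in numbers, here is a back-of-envelope sketch. The bits-per-parameter figure is a typical quantization level, and the corpus size is a loudly hypothetical round number on the order of a web-scale plain-text dump:

```python
# Back-of-envelope: what actually fits in a ~21 GB model file?
# ASSUMPTIONS: ~5.3 bits/parameter is an assumed quantization level; the corpus size
# below is a rough, hypothetical figure, not a measured number.
params = 32e9                      # ~32B-parameter model
bits_per_param = 5.3               # assumed quantization
model_gb = params * bits_per_param / 8 / 1e9
print(f"model file: ~{model_gb:.0f} GB")                    # ~21 GB

corpus_tb = 50                     # hypothetical web-scale text corpus (tens of TB)
ratio = (corpus_tb * 1000) / model_gb
print(f"ratio if it were raw memorization: ~{ratio:.0f}x")  # well over 1000x
```

Whatever exact figures you plug in, the implied ratio is far beyond what ordinary lossless text compression achieves, which is the point of the bullet above.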

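And on the strawberry question, the blind spot is easy to see directly. A minimal sketch assuming the tiktoken library and a GPT-4-era tokenizer; the exact token split varies by model:

```python
# The model operates on subword tokens, not letters, so "count the r's in strawberry"
# probes the tokenizer rather than reasoning.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")        # a GPT-4-era tokenizer
ids = enc.encode("strawberry")
print(ids)                                        # a few integer token IDs
print([enc.decode([i]) for i in ids])             # subword pieces; the split varies by tokenizer
```
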
Also, I did say willfully ignorant. This is because you can go and try most models for yourself right now. There are also endless benchmarks constantly being published showing how well they're doing. Benchmarks aren't perfect and are increasingly being gamed, but they're still decent.

regrub@lemmy.world 8 points 2 days ago

And intellectual property theft