this post was submitted on 14 May 2025
943 points (96.5% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 1 year ago

Source (Bluesky)

you are viewing a single comment's thread
[–] jjjalljs@ttrpg.network 48 points 2 weeks ago (22 children)

I don't think AI is actually that good at summarizing. It doesn't understand the text and is prone to hallucinate. I wouldn't trust an AI summary for anything important.

Also, search just seems like overkill. If I type in "population of London", I just want to be taken to a reputable site like Wikipedia. I don't want a guessing machine to tell me.

Other use cases maybe. But there are so many poor uses of AI, it's hard to take any of it seriously.

[–] ArchRecord@lemm.ee -4 points 2 weeks ago* (last edited 2 weeks ago) (16 children)

I don’t think AI is actually that good at summarizing.

It really depends on the type and size of text you want it to summarize.

For instance, it'll only give you a very, very simplistic overview of a large research paper that uses technical terms, but if you want it to compress a bullet-point list, or turn one paragraph into some bullet points, it'll usually do that without any issues.

Edit: I truly don't understand why I'm getting downvoted for this. LLMs are actually relatively good at condensing small, low-context pieces of text into bullet points. They're quite literally models that predict the likelihood of text based on an input, so giving one a small amount of text to rewrite or recontextualize plays to its biggest strength. That's why the technology was originally deployed mostly as a tool to reword small, isolated sections of articles, emails, and papers, before it was improved.

It's when they get to larger pieces of information, like meetings, books, Wikipedia articles, etc., that they begin to break down, due to the nature of the technology itself: limited context windows, and a lack of the external resources that humans can integrate into their writing but LLMs can't fully incorporate on the same level.
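That context-window limitation can be sketched in a few lines. This is a toy illustration (plain whitespace splitting stands in for a real tokenizer, and the window size is made up): a document longer than the window has to be cut into chunks, and no single pass sees the whole thing.

```python
# Toy sketch of a context-window limit. A model with a fixed window
# can only attend to so many tokens at once; longer input must be
# split, and each chunk is processed without seeing the others.

def chunk_by_words(text: str, window: int) -> list[str]:
    """Naive stand-in for tokenization: split on whitespace and
    group the words into fixed-size chunks."""
    words = text.split()
    return [" ".join(words[i:i + window]) for i in range(0, len(words), window)]

doc = "one two three four five six seven"
chunks = chunk_by_words(doc, window=3)
# A 7-word document with a 3-word "window" needs three separate
# passes, so any summary of chunk 1 is written blind to chunk 3.
print(chunks)
```

Real systems use actual tokenizers and much larger windows, but the shape of the problem is the same: cross-chunk context gets lost.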

[–] jjjalljs@ttrpg.network 7 points 2 weeks ago (1 children)

But if the text you're working on is small, you could just do it yourself. You don't need an expensive guessing machine.

Like, if I built a Rube Goldberg machine out of twenty rubber ducks, a diesel engine, and a blender to tie my shoes, and it got it right most of the time, that would be impressive. But also kind of a stupid waste, because I could've just tied them with my hands.

[–] ArchRecord@lemm.ee 1 points 2 weeks ago

you could just do it yourself.

Personally, I think that wholly depends on the context.

For example, if someone's having part of their email rewritten because they feel the tone was a bit off, it's usually because their own attempts to fix it weren't working, and they wanted a second... not exactly opinion, since it's obviously a machine, but at least an attempt from outside whatever their brain is currently locked into trying to do.

I know I've gotten stuck for way too long wondering why my writing felt so off, only to have someone give me a quick suggestion that cleared it all up, so I can see how this would be helpful, while also not always being something they can easily or quickly do themselves.

Also, there are legitimately many use cases where an application can use an LLM to parse small pieces of data better than simple regular expressions can, for instance.

For example, Linkwarden, a popular open-source link-management tool, can (on an opt-in basis) use LLMs to automatically tag your links based on the contents of the page. When I'm importing thousands of bookmarks for the first time, each individual task is quick on its own: just look at the link and assign the proper tags, nothing that takes significant mental effort. But I don't want to do that thousands of times when the LLM will get it done much faster, with accuracy that's good enough for my use case.
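The shape of that task can be sketched quickly. This is not Linkwarden's actual code; the keyword matcher below is a hypothetical stand-in for the LLM call (the tag vocabulary is made up too). The point is that it's many small, low-stakes classifications, exactly the kind of thing that's tedious by hand at bookmark-import scale.

```python
# Hypothetical sketch of Linkwarden-style auto-tagging. The keyword
# lookup stands in for an LLM call; a real deployment would send the
# page contents to a model and ask for tags instead.

TAG_KEYWORDS = {          # assumed tag vocabulary, purely illustrative
    "python": "programming",
    "recipe": "cooking",
    "invoice": "finance",
}

def auto_tag(page_text: str) -> list[str]:
    """Return sorted tags whose trigger keyword appears in the text."""
    text = page_text.lower()
    return sorted({tag for kw, tag in TAG_KEYWORDS.items() if kw in text})

bookmarks = [
    "A beginner's guide to Python decorators",
    "Grandma's lasagna recipe",
]
for text in bookmarks:
    print(text, "->", auto_tag(text))
```

Each call is trivial, but run it across thousands of imported bookmarks and the automation argument makes itself, as long as "good enough" accuracy is acceptable.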

I can definitely agree with you in a broader sense, though. At this point I've seen people use AI to write two-sentence emails and short comments, with prompts even longer than the output, and that I can 100% agree is entirely pointless.
