this post was submitted on 17 Feb 2026
277 points (100.0% liked)

Fuck AI

[–] gravitas_deficiency@sh.itjust.works 35 points 1 week ago (1 children)

Or how about only allowing human-verified accounts to open PRs? And making submission of AI slop from a human-verified account grounds for a permaban?

[–] Sanctus@anarchist.nexus 16 points 1 week ago (1 children)

Is that possible on GitHub? Wouldn't this rely on the bots identifying themselves as such? Banning human slop submissions makes sense; I think a little harshness is required for the time being, and maybe the human bans can be lifted later.

[–] gravitas_deficiency@sh.itjust.works 12 points 1 week ago (2 children)

You can absolutely control who is allowed to make PRs on your repos. And it'd be easy to set up a process to confirm contributors are actually human.
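
The gating described above could be sketched as follows. This is a minimal illustration, not GitHub's actual API: the allowlist, the `pr_action` helper, and the verification process itself are all hypothetical, and a real setup would run as a webhook or CI check that closes or labels offending PRs via the GitHub REST API.

```python
# Hypothetical sketch: gate incoming PRs against an allowlist of
# human-verified contributors. Names and the verification mechanism
# are assumptions for illustration only.

VERIFIED_HUMANS = {"alice", "bob"}  # hypothetical verified contributors

def pr_action(author: str, verified: set[str] = VERIFIED_HUMANS) -> str:
    """Return the moderation action for a new PR from `author`."""
    if author in verified:
        return "allow"
    return "close"  # unverified accounts may not open PRs

print(pr_action("alice"))     # allow
print(pr_action("bot-9000"))  # close
```

The hard part, of course, is the verification step that populates the allowlist; the decision logic itself is trivial.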

[–] Sanctus@anarchist.nexus 4 points 1 week ago (1 children)

My question is: if this is easy and possible, why haven't they done it? It seems like a massive oversight. Maybe hit them up.

They probably weren’t inundated that badly until recently. There’s no point in automating a low-effort, low-frequency process. It’s just that the frequency changed, and the noise factor exploded.

[–] I_Jedi@lemmy.today 1 point 1 week ago (2 children)

Insufficient. I know actual humans who use AI to write code.

[–] gravitas_deficiency@sh.itjust.works 5 points 1 week ago (1 children)

What I mean is that you can change the code of conduct to say “vibe-coded submissions will get you a permaban.”

[–] I_Jedi@lemmy.today 3 points 1 week ago (1 children)

How do you prove that something is vibe-coded?

[–] gravitas_deficiency@sh.itjust.works 4 points 1 week ago (1 children)

Smaller changesets are not difficult to check directly.

Massive, sweeping changes should generally not be proposed without significant discussion, and should also come with thorough explanations. Thorough explanations and similar human commentary are not hard to score for LLM-generated likelihood. Build that into the CI pipeline, and flag PRs whose LLM-likelihood score exceeds some threshold as requiring further review and/or moderator enforcement.
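
A toy sketch of that CI step might look like the following. To be clear, this is not a real detector: a production pipeline would call a trained classifier, whereas here a crude phrase-frequency heuristic stands in, and both the phrase list and the 0.4 threshold are arbitrary assumptions for illustration.

```python
# Toy stand-in for an LLM-likelihood check in CI. The phrase list and
# threshold are illustrative assumptions, not a real detection method.

TELLTALE_PHRASES = [
    "as an ai", "delve into", "it's important to note",
    "in conclusion", "certainly!",
]

def llm_likelihood(text: str) -> float:
    """Return a 0..1 score: fraction of telltale phrases present."""
    lowered = text.lower()
    hits = sum(phrase in lowered for phrase in TELLTALE_PHRASES)
    return hits / len(TELLTALE_PHRASES)

def needs_extra_review(pr_description: str, threshold: float = 0.4) -> bool:
    """True if the PR should be routed for further human review."""
    return llm_likelihood(pr_description) >= threshold

suspicious = "Certainly! Let's delve into this. It's important to note..."
print(needs_extra_review(suspicious))                  # True
print(needs_extra_review("Fix off-by-one in parser"))  # False
```

The check only flags for human review rather than auto-rejecting, which matters given how unreliable LLM detectors are in practice.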

[–] I_Jedi@lemmy.today 1 point 1 week ago (1 children)

What about programmers who edit LLM-generated code to disguise it as human-written? It's the coding equivalent of tracing an AI image. LLM checkers may have difficulty detecting that.

I mean, we’re basically talking about blocking lazily or incompetently executed agentic edits. If a skilled dev uses an LLM’s output as a reasonable baseline, takes the time to go through the delta to confirm and correct things, and furthermore produces good commentary and discussion (as opposed to pointing an LLM at the PR with their creds and telling it to respond to comments), then I don’t think that’s a huge problem. That is, in fact, a reasonably responsible way to use LLMs for coding.

The intent here is to limit the prevalence of LLM code spam, not to eliminate LLM usage altogether, which isn’t really achievable (for instance, many people have their IDE’s intellisense backed by an LLM to make it suggest more interesting things - that’d be effectively impossible to block).

[–] Croquette@sh.itjust.works 2 points 1 week ago

Still a good start. Better than doing nothing.