sp3ctr4l@lemmy.dbzer0.com (1 week ago, edited)

Well, I recently gave it a roughly 600-line GDScript file from a Godot project that's about a year old, and asked it to evaluate and then refactor it in line with 4.6 syntax and methods. In total, the actual refactoring took roughly 5 to 10 minutes.
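
To give a concrete picture, here's roughly what that loop looks like, as a minimal Python sketch. I'm assuming the local model is served through something like Ollama's REST API; the endpoint is Ollama's default, but the model name and file path are just placeholders, not a specific setup:

```python
# Minimal sketch: hand a whole script to a local LLM in one request.
# Assumes a local Ollama server on its default port; the model name
# and file path are placeholders.
import requests

KNOWN_MISTAKES = "..."  # the corrections preamble; fuller sketch further down

with open("player_controller.gd") as f:  # placeholder path
    script_text = f.read()

resp = requests.post(
    "http://localhost:11434/api/generate",  # Ollama's default endpoint
    json={
        "model": "qwen2.5-coder",  # placeholder; any local code model
        "system": KNOWN_MISTAKES,
        "prompt": "Evaluate this GDScript, then refactor it to current "
                  "Godot 4.x syntax and methods:\n\n" + script_text,
        "stream": False,
    },
    timeout=600,  # a full-file refactor can take several minutes locally
)
print(resp.json()["response"])
```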

It's much faster with smaller snippets or chunks.

How fast is this compared to the big players?

Well, with small snippets, a free-to-use online LLM of some kind is much faster.

But they generally don't let you do the whole persistent custom-prompt thing I described, at least not for free, unlimited use. So you have to keep telling them about the same silly errors they'll make.
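
For the curious, that preamble is just a reusable block of corrections you front-load every time. A sketch of the shape, with a few well-known Godot 3-to-4 renames standing in as examples (the real list depends on your model and engine version):

```python
# Illustrative "known mistakes" preamble. The items below are
# well-known Godot 3-to-4 rename pitfalls, standing in for whatever
# your model actually keeps getting wrong; not an exhaustive list.
KNOWN_MISTAKES = """You refactor GDScript for current Godot 4.x.
Corrections you will otherwise get wrong:
- use the @onready / @export annotations, not the old onready / export keywords
- use instantiate(), not instance()
- use await, not yield()
- connect signals as my_signal.connect(handler), not connect("my_signal", ...)
Do not invent methods that are not in the current API."""
```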

They're useful for generating the actual prompt for your local LLM, since they're, you know, online, and can usually pull up the relevant pages covering API, syntax, method, and feature changes quickly, then reformulate all of that into a prompt-like format.

But while free online LLMs may be faster, they tend to have hard limits on tokens per day and/or on the size of the input you can give them at once.

So with that 600-line script, I would have had to feed it in chunks, then ask it to evaluate all of it, after first giving it the whole "these are all the relevant syntax mistakes you are going to make if I don't tell you about them first" prompt.
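
For comparison, the chunking dance looks something like this. The character budget below is a made-up stand-in; real caps vary by service:

```python
# Rough sketch of the chunking a size-capped online LLM forces on you:
# split the script on line boundaries, then re-attach the full
# corrections preamble to every single piece.
MAX_CHARS = 6000  # made-up stand-in for a service's input cap

def chunk_script(text: str, max_chars: int = MAX_CHARS) -> list[str]:
    """Split on line boundaries so no chunk exceeds the budget."""
    chunks, current, size = [], [], 0
    for line in text.splitlines(keepends=True):
        if size + len(line) > max_chars and current:
            chunks.append("".join(current))
            current, size = [], 0
        current.append(line)
        size += len(line)
    if current:
        chunks.append("".join(current))
    return chunks

with open("player_controller.gd") as f:  # placeholder path
    pieces = chunk_script(f.read())

for i, piece in enumerate(pieces, 1):
    # the preamble rides along with every chunk, since free tiers
    # rarely keep state between sessions
    prompt = KNOWN_MISTAKES + f"\n\nPart {i} of {len(pieces)}:\n\n" + piece
    # ...then paste into the web UI, or send via whatever access you have
```

And since the whole preamble has to ride along with every chunk, that's exactly the babysitting I'm avoiding locally.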

So in that kind of scenario, I'd say what I'm doing with the local LLM is net faster, more accurate/consistent, and requires less babysitting/manual input from me.

Oh, and I don't have to pay anyone to keep using the "premium version" of my entirely local LLM.

And, other than Ecosia's LLM, the data-center-based big boys don't tend to be capable of running on renewable energy, though I've no idea whether Ecosia actually makes good on that claim.