
Google, after indiscriminately scraping the web for its own models, accused "commercially motivated" actors of trying to clone its Gemini AI.

OwOarchist@pawb.social (1 point, 16 hours ago)

(I am kind of making the assumption that their perfect, all-powerful AI, once developed, would also be a bit more efficient than current models, allowing it to run more easily on consumer-grade hardware. Also, in the meantime, consumer-grade hardware is only getting better and more powerful.)

> You can ask an LLM to vibe-code you a new model from scratch, but when pre-training it you're gonna be limited by the resources you have available.

Why would you ask the uber-LLM to code you a new model that hasn't been trained yet? Just ask it to give you one that already has all the training done and the weights figured out. Ask it to give you one that's ready to go, right out of the box.

wonderingwanderer@sopuli.xyz (1 point, 16 hours ago)

> once developed, would also be a bit more efficient than current models

That's not how it works, though. They're not optimizing them for efficiency. The business model they're following is "just a few billion more parameters this time, and it'll gain sentience for sure."

Which is ridiculous. AGI, even if it's possible (which is doubtful), isn't going to emerge from some highly advanced LLM.

> in the meantime, consumer-grade hardware is only getting better and more powerful

There's currently a shortage of DDR5 RAM because these AI companies are buying up years' worth of industrial output capacity...

Some companies are shifting away from producing consumer-grade GPUs in order to meet demand from commercial data centers.

It's likely we're at the peak of conventional computing, at least in terms of consumer hardware.

> Why would you ask the uber-LLM to code you a new model that hasn't been trained yet? Just ask it to give you one that already has all the training done and the weights figured out. Ask it to give you one that's ready to go, right out of the box.

That's not something they're capable of. They have a context window, and none of them has one large enough to output billions of generated parameters. An LLM can give you a Python script that builds a model with Gaussian-initialized weights for a given number of parameters, layers, hidden sizes, and attention heads, but it can't give you one that's already pre-trained.
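
For the curious, a minimal sketch of what such a script might look like, using PyTorch (the sizes are made-up illustrative values, not any real model's config):

```python
import torch.nn as nn

class TinyTransformer(nn.Module):
    """A toy stack: embeddings -> transformer blocks -> LM head."""
    def __init__(self, n_layers, d_model, n_heads, vocab_size):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        block = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(block, n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        return self.lm_head(self.blocks(self.embed(tokens)))

model = TinyTransformer(n_layers=4, d_model=256, n_heads=4, vocab_size=32000)

# Every weight starts out as noise (Gaussian or uniform, depending on the
# layer's default init). The architecture is complete, but the model knows
# nothing until it's pre-trained on actual data.
print(f"{sum(p.numel() for p in model.parameters()):,} untrained parameters")
```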

Also, their NLP is designed to parse text, even code, but they already struggle with mathematics. There's no way an LLM could generate a viable weight distribution, even with a 12-billion-token context window, because that's not what it's designed to predict.
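
To put rough numbers on the scale mismatch (these figures are my own back-of-envelope assumptions, not anything from the thread):

```python
# Suppose the uber-LLM tried to print a trained model's weights as text.
n_params = 7_000_000_000     # a "small" modern LLM, ~7B parameters
tokens_per_weight = 5        # rough cost to serialize one float as text
context_window = 1_000_000   # a generous present-day context window

tokens_needed = n_params * tokens_per_weight
print(f"tokens needed:  {tokens_needed:,}")                       # 35,000,000,000
print(f"over budget by: {tokens_needed // context_window:,}x")    # 35,000x
```

And that's before asking where 35 billion tokens' worth of correct values would even come from, which is the part next-token prediction can't do.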

You'd have to run a script to get an untrained model and then pre-train it yourself. Or you can download a pre-trained model and fine-tune it, or use it as-is.
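
A rough sketch of those two routes, using the Hugging Face transformers library ("gpt2" here is just a stand-in checkpoint, not a recommendation):

```python
from transformers import AutoConfig, AutoModelForCausalLM

# Route 1: build the architecture from a config. Weights are randomly
# initialized, and you supply the (enormous) compute for pre-training.
config = AutoConfig.from_pretrained("gpt2")
untrained = AutoModelForCausalLM.from_config(config)

# Route 2: download weights somebody else already spent the compute on,
# then fine-tune them or use the model as-is.
pretrained = AutoModelForCausalLM.from_pretrained("gpt2")
```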