this post was submitted on 29 Mar 2026
44 points (78.2% liked)
Linux
You must not be very familiar, then. We've been seeing diminishing returns and a plateau for several years now: no major leaps in performance, no credible answers to the flaws in LLMs, and no AI company securing a real lead over the others, or even a profitable business model for AI.
There have been plenty of inference-level tricks, like chain-of-thought (CoT) prompting to maintain coherence, mixture-of-experts (MoE) layers to make inference more cost-effective, and techniques to extend context windows; but there have been literally no groundbreaking or foundational changes to the transformer architecture at all. And models are still hitting the same performance and scaling constraints.
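To be concrete about why MoE is an efficiency trick rather than an architectural leap: each token is routed to only a few "expert" sub-networks, so inference cost stays low even as total parameter count grows, but the underlying transformer math is unchanged. Here's a toy top-k routing sketch (my own illustration; the shapes and gating scheme are simplified assumptions, not any particular model's implementation):

```python
import numpy as np

def moe_layer(x, expert_weights, gate_weights, k=2):
    """Toy top-k mixture-of-experts routing: only k of the experts
    actually run per token, which is why MoE cuts inference cost
    without shrinking the total parameter count."""
    logits = x @ gate_weights                # one gating score per expert
    topk = np.argsort(logits)[-k:]           # pick the k highest-scoring experts
    probs = np.exp(logits[topk] - logits[topk].max())
    probs /= probs.sum()                     # softmax over just the selected experts
    # Weighted sum of only the selected experts' outputs
    return sum(p * (x @ expert_weights[i]) for p, i in zip(probs, topk))

rng = np.random.default_rng(0)
d, n_experts = 8, 4
x = rng.standard_normal(d)                   # one token's hidden state
experts = rng.standard_normal((n_experts, d, d))
gate = rng.standard_normal((d, n_experts))
y = moe_layer(x, experts, gate, k=2)
print(y.shape)  # (8,) — same shape in and out, like any FFN layer
```

Only 2 of the 4 expert matrix multiplies execute per token; scale that up and you get the "big model, cheap inference" effect, without touching attention or anything foundational.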
We're basically stagnant: throwing more training tokens at models but not getting any significant gains back from it anymore.
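The "more tokens, less payoff" point falls straight out of the published scaling laws themselves: loss follows a power law in data, so every doubling of training tokens buys a smaller absolute improvement than the last. A quick numerical illustration, using the Chinchilla-style functional form L = E + A/N^a + B/D^b but with made-up constants (the real fitted values don't matter for the shape of the curve):

```python
# Illustrative constants only -- the power-law *form* is from scaling-law
# work, but these numbers are invented for the sake of the demo.
E, A, B, alpha, beta = 1.7, 400.0, 410.0, 0.34, 0.28
N = 70e9  # parameter count, held fixed

def loss(tokens):
    # Loss = irreducible term + model-size term + data term
    return E + A / N**alpha + B / tokens**beta

# Each doubling of training tokens buys a smaller improvement:
for D in (1e12, 2e12, 4e12, 8e12):
    print(f"{D:.0e} tokens -> loss {loss(D):.4f}")
```

With any β between 0 and 1, each doubling shrinks the data term by a constant factor (2^-β), so the marginal gain per doubling decays geometrically: that's the diminishing-returns regime in one line of math.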
No, we are nowhere near AGI and do not even know where to begin making gains towards it. The fundamentally different agentic framework and "cognitive harness" you describe are quite literally fantasy delusions that don't exist... did an LLM tell you about them? https://en.wikipedia.org/wiki/Chatbot_psychosis