this post was submitted on 29 Mar 2026
Linux
Seems to me that peeps outside of the AI development sphere/interest are not aware of how quickly 'flaws' get fixed. There are still people who don't think AI will ever be useful - or intelligent - based on some 'archaic' performance from many months ago. Reality will hit hard, I think.
Personally, I have never seen any development move faster than artificial intelligence, and whatever it can't do 'properly' today, it can do tomorrow or the day after.
Current AI/agentic status is the clawd family of frameworks + a SOTA model. However, they are really stupid architectures (every 30 minutes, the LLM is yanked back and presented with the original tasks in an md file - that's it) and are WAY behind what we can do according to papers/newest developments. Papers quickly trickle down into architectures tho, and the next family of agentic frameworks will strike as fast as the clawd phenomenon did.
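The loop described there (periodically re-injecting the original task file into the model's context) can be sketched roughly like this. Everything here is a hypothetical stand-in - `call_llm`, the `TASK.md` name, and the 30-minute interval come from the comment's description, not from any real framework's API:

```python
import time

TASK_FILE = "TASK.md"        # hypothetical task file name, per the comment above
REINJECT_EVERY = 30 * 60     # "every 30 minutes", in seconds

def call_llm(context):
    # Stand-in for a real model call; an actual framework would hit an API here.
    return f"(model output given {len(context)} messages)"

def agent_loop(task_text, steps, now=time.monotonic):
    """Periodically yank the model back to the original task, as described."""
    context = [task_text]            # the conversation starts with the task file
    last_reinject = now()
    for _ in range(steps):
        if now() - last_reinject >= REINJECT_EVERY:
            context.append(task_text)   # re-present the original tasks
            last_reinject = now()
        context.append(call_llm(context))
    return context
```

That really is the whole trick the comment is describing: no memory, no planning state, just a timer and a text file.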
We are not far from general AI - not particularly because of LLMs/transformers themselves, but because of the external cognitive 'harness' being built all around them. While the harness adds cognitive states to the architecture, many of the typical agentic features are being built into the model itself, so the cognitive functionality of the harness is being injected into the models, and the next harness fixes other 'flaws'. We will see one clawd moment after another, faster and faster, getting better and better...
I hope peeps live in a society that takes care of each other and doesn't treat people as lazy bums who "just wouldn't work hard enough". It's going to be horrible for peeps in the US and similar capitalist 'might is right' societies. There is NO safety net for 'failure' there.
Back to article: It was bound to happen within a year or so.
You mean AGI? Yeah, no. I don't believe you.
You must not be very familiar, then. We've been on diminishing returns and a plateau for several years now with no major leaps in performance, no potential answers to the flaws in LLMs, and no AI company securing a real lead over any others, or even a profitable business model for AI.
There have been a lot of inference-level tricks, like CoT to maintain coherence, MoE to make inference more cost-effective, and techniques to extend context windows; but literally no groundbreaking or foundational changes to the transformer architecture. At all. And they're still hitting the same performance and scaling constraints.
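Since MoE comes up: the cost saving is just that a learned gate routes each token to one (or a few) of several expert sub-networks, so most parameters sit idle per token. A toy top-1 version - not tied to any real model, just the shape of the trick - might look like:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def moe_layer(x, experts, gate_weights):
    # Toy linear gate: one weight vector per expert, scored against the input.
    scores = [sum(w * xi for w, xi in zip(ws, x)) for ws in gate_weights]
    probs = softmax(scores)
    # Top-1 routing: only the highest-scoring expert actually runs,
    # which is where the inference cost saving comes from.
    k = max(range(len(experts)), key=lambda i: probs[i])
    return [probs[k] * y for y in experts[k](x)]
```

In real transformers the gate is trained jointly with the experts and usually routes to the top 2 of dozens of experts, with load-balancing losses on top - but none of that changes the underlying transformer, which is the point being made above.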
We're basically stagnant, throwing more training tokens at models, but not getting any significant gains back from it anymore.
No, we are nowhere near AGI and do not even know where to begin making gains towards it. The fundamentally different agentic framework and "cognitive harness" you describe are quite literally fantasy delusions that don't exist... did an LLM tell you about them? https://en.wikipedia.org/wiki/Chatbot_psychosis
They still can't pass the basic tests people cooked up years ago lmao. All these companies do is optimize for benchmarks and overfit against the most egregious shortcomings. The fundamental limitations of a neural net remain. Also, just asked gpt5 how many Ls are in mammalian, it said 2 lol
Source: guy who cleans up coworker’s slop code as a significant portion of their day job
Not that I don't believe you, but do you have a source for any of that? Specifically about the development timeline of the "cognitive harness"?
It's not a thing... seems like the source might be https://en.wikipedia.org/wiki/Chatbot_psychosis
Prompt Engineering Is Not. Engineering, That Is.