this post was submitted on 27 May 2025
2077 points (99.5% liked)
Programmer Humor
You know, I'd be interested to know the critical size you can get to with that approach before it becomes useless.
It can get pretty bad quickly, even on a small project with only 15-20 files. I've been using Cursor IDE, building out flow charts & tests manually, and just seeing where it goes.
And while it's incredibly impressive to watch it work through all the steps, it then goes into chaos mode where it starts ignoring all the rules: changing tests, pulling in random libraries, not thinking at all holistically about how everything fits together.
Then you try to reel it in, and it continues to run rampant. And for me, that's when I either take the wheel or roll back.
I highly recommend every programmer watch it in action.
Is there a chance that's right around the time the code no longer fits into the LLM's input window of tokens? The basic technology doesn't actually have a long-term memory of any kind (at least outside of the training phase).
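For a rough sense of when that happens, here's a minimal sketch (assuming the tiktoken package; the cl100k_base encoding, the 128k budget, and the project path are just illustrative) that tallies how many tokens a project takes up:

```python
# Rough sketch: estimate whether a project still fits in a model's context
# window. Assumes the tiktoken package; the 128k limit, cl100k_base encoding,
# and "my_project" directory are illustrative placeholders.
from pathlib import Path

import tiktoken

CONTEXT_LIMIT = 128_000                      # illustrative token budget
enc = tiktoken.get_encoding("cl100k_base")   # illustrative encoding

total = 0
for path in Path("my_project").rglob("*.py"):    # hypothetical project dir
    total += len(enc.encode(path.read_text(errors="ignore")))

print(f"~{total:,} tokens; fits in context: {total < CONTEXT_LIMIT}")
```

Once the project plus the conversation history blows past that budget, the earlier rules you set simply aren't in front of the model anymore.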
That was my first thought as well. These things really need to find a way to store a larger context without ballooning past the VRAM limit.
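The part that balloons is the KV cache, which grows linearly with context length. A back-of-envelope sketch (the model shape below is illustrative, roughly a 7B-class transformer in fp16) shows why long contexts eat VRAM:

```python
# Back-of-envelope KV-cache size:
#   2 (keys and values) * layers * kv_heads * head_dim * seq_len * bytes_per_value
# The layer/head/dim numbers are illustrative, not tied to any specific model.
layers, kv_heads, head_dim = 32, 32, 128
bytes_per_value = 2                      # fp16

def kv_cache_gib(seq_len: int) -> float:
    total_bytes = 2 * layers * kv_heads * head_dim * seq_len * bytes_per_value
    return total_bytes / 2**30

for ctx in (4_096, 32_768, 128_000):
    print(f"{ctx:>7} tokens -> ~{kv_cache_gib(ctx):.1f} GiB of KV cache")
```

With those numbers it works out to about half a megabyte per token, which adds up fast once you're feeding in whole codebases.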
The thing is, it's kind of an inflexible black-box technology, and that's easier said than done. In one fell swoop we've gotten all that soft, fuzzy common-sense stuff people were chasing for decades inside a computer, but ironically it's still beyond our reach to fully use.
From here, I either expect that steady progress will be made in finding more clever and constrained ways of using the raw neural net output, or we're back to an AI winter. I suppose it's possible a new architecture and/or training scheme will come along, but it doesn't seem imminent.
I feel like the way investments are currently being made makes coming up with something new almost impossible. Most of the hardware is designed with LLMs in mind.