this post was submitted on 20 Feb 2026
45 points (100.0% liked)

Technology


A tech news sub for communists

founded 3 years ago
[–] yogthos@lemmygrad.ml 7 points 14 hours ago (2 children)

Also, I'd argue that you don't actually need huge models for coding. The problem is with the way we structure code today, which isn't conducive to LLMs. Even small models that you can run locally are quite competent at writing small chunks of code, say 50~100 lines or so. And any large application can be broken up into smaller, isolated components.

The way I look at it is that we can view applications as state machines. For any workflow, you can draw a state chart with nodes that do some computation, and the state then transitions to another node in the graph. The problem with the traditional coding style is that we implicitly bake this graph into function calls. You have a piece of code that does some logic, like authenticating a user, and then it decides what code should run after that. That creates coupling, because now you have to trace through code to figure out what the data flow actually is. This is difficult for agents because it causes context to grow in an unbounded way, leading to context rot. When an LLM has too much data in its context, it doesn't really know what's important and what to focus on, so it ends up going off the rails.

But now, let's imagine that we apply inversion of control here. Instead of having each node in the state graph call the next, why not pull that logic out? We could pass a data structure around that each node gets as its input; it does some work, and then returns a new state. A separate conductor component manages the workflow: it inspects the state and decides which edge of the graph to take.
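Here's a minimal sketch of what that could look like. All the names here (the example nodes, `GRAPH`, `conductor`) are invented for illustration, not from any real framework:

```python
# Nodes are plain state -> state functions. Each one does its work and
# returns a new state; none of them know what runs next.
def fetch_input(state):
    return {**state, "value": state["raw"] * 2}

def validate(state):
    return {**state, "valid": state["value"] > 0}

def finish(state):
    return {**state, "done": True}

# The workflow graph is declarative data: each entry pairs a node with an
# edge function that inspects the state and names the next node (or None).
GRAPH = {
    "fetch_input": (fetch_input, lambda s: "validate"),
    "validate":    (validate,    lambda s: "finish" if s["valid"] else None),
    "finish":      (finish,      lambda s: None),  # terminal node
}

def conductor(graph, start, state):
    """Run each node, then ask its edge function which node comes next."""
    node = start
    while node is not None:
        fn, next_edge = graph[node]
        state = fn(state)
        node = next_edge(state)
    return state

result = conductor(GRAPH, "fetch_input", {"raw": 3})
```

The key point is that the routing logic lives entirely in `GRAPH` and the conductor loop, so the nodes stay tiny and independent.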

The graph can be visually inspected, and it becomes easy for the human to tell what the business logic is doing. The graphs don't really have a lot of data in them either because they're declarative. They're decoupled from the actual implementation details that live in the logic of each node.

Going back to the user authentication example: the handler could get a parsed HTTP request, try to look up the user in the db, check if the session token is present, etc. It then updates the state to add the user, or sets a flag stating that the user wasn't found or wasn't authenticated. The conductor can look at the result and decide to either move on to the next step or call the error handler.
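As a rough sketch of that node (the `USERS` dict stands in for a real db lookup, and the state keys are just illustrative):

```python
# Hypothetical user store standing in for a database.
USERS = {"token-123": {"name": "alice"}}

def authenticate(state):
    """Look up the session token and record the outcome in the state.
    The node never decides what runs next; it only reports what happened."""
    token = state.get("request", {}).get("session_token")
    user = USERS.get(token)
    if user is None:
        return {**state, "authenticated": False, "error": "user not found"}
    return {**state, "authenticated": True, "user": user}

# The conductor (not shown) would inspect `authenticated` and route to
# either the next step or the error handler.
ok = authenticate({"request": {"session_token": "token-123"}})
bad = authenticate({"request": {}})
```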

Now we basically have a bunch of tiny programs that know nothing about one another, and the agent working on each one has a fixed context that doesn't grow in an unbounded fashion. On top of that, we can have validation boundaries between each node, so the LLM can check that the component produces correct output, handles whatever side effects it's responsible for, and so on. Testing becomes much simpler too, because now you don't need to load the whole app; you can just test each component to make sure it fulfills its contract.
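Because each node is just a state -> state function, its contract can be tested with no app context at all. `normalize_email` is a made-up example node:

```python
def normalize_email(state):
    # Contract: given a state with "email", return a state with it
    # stripped and lowercased; leave every other key alone.
    return {**state, "email": state["email"].strip().lower()}

def test_normalize_email():
    before = {"email": "  Alice@Example.COM ", "other": 1}
    after = normalize_email(before)
    assert after["email"] == "alice@example.com"
    assert after["other"] == 1              # unrelated keys untouched
    assert before["email"].startswith(" ")  # input state not mutated

test_normalize_email()
```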

What's more is that each workflow can be treated as a node in a bigger workflow, so the whole thing becomes composable. And the nodes themselves are like reusable Lego blocks, since the context is passed in to them.
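A sketch of that composition, assuming the same state -> state convention as above (all names invented): a whole workflow, wrapped in a function with the node signature, becomes an ordinary node in a bigger graph.

```python
def step_a(state):
    return {**state, "a": True}

def step_b(state):
    return {**state, "b": True}

def run_linear(steps, state):
    # A trivially linear conductor for illustration.
    for fn in steps:
        state = fn(state)
    return state

def inner_workflow(state):
    # The inner workflow exposes the same interface as any single node...
    return run_linear([step_a, step_b], state)

def outer_step(state):
    return {**state, "outer": True}

# ...so the outer workflow can treat it as one Lego block.
result = run_linear([inner_workflow, outer_step], {})
```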

And this idea isn't new; workflow engines have been around for a long time. The reason they haven't really caught on for general-purpose programming is that it doesn't feel natural to code that way. There's a lot of ceremony involved in creating these workflow definitions, writing contracts for them, and jumping between that and the implementation of the nodes. But the equation changes when we're dealing with LLMs: they have no problem doing tedious tasks like that, and all the ceremony helps keep them on track.

I would wager that moving towards this style of programming would be a far more effective way to use these tools, and that the current crop of LLMs is more than good enough for it.

[–] shreditdude0@lemmygrad.ml 5 points 7 hours ago (1 children)

It's amazing that we were taught about finite state automata and machines, yet for programming large applications, we haven't really taken that same approach. As you mentioned, all of the coupling and the dependencies created by function calls upon function calls get impossibly difficult to debug and analyze as the complexity, functionality, and scale of an application grow. Reading what you wrote was like a revelation; I'm eager to put this paradigm into practice. I've generally lost interest in many of my personal projects simply because they've become so untenable as they've grown, dependency upon dependency. The notion of splitting the code up into proper segments, nodes that do some work and return their product to a central conductor via a state-tracking data structure, is something I plan on using moving forward. Perhaps I'll even revisit some of those abandoned projects, view their flow of execution through this lens, and restructure operations accordingly.

[–] yogthos@lemmygrad.ml 2 points 17 minutes ago

I've been using this pattern in some large production projects, and it's been a real life saver for me. Like you said, once the code gets large, it's just too hard to keep track of everything, because it overflows what you can effectively keep in your head. At that point you start guessing when you make decisions, which inevitably leads to weird bugs. The other huge benefit is that it makes it far easier to deal with changing requirements. If you have a graph of the steps you're doing, it's trivial to add, remove, or rearrange steps. You can visually inspect it and confirm that the new workflow is doing what you want.

[–] PoY@lemmygrad.ml 3 points 7 hours ago (1 children)

A guy I worked with at my last job is forming a company doing something akin to this; he's been working on it for six months or so. He's a math wizard and believes he can get this to work in a way that can be mathematically proven, but to be honest, when he talks about it, it all goes way over my head pretty quickly.

I hope you're both right, because it would be great not to have to wrestle models to do things. If you can give a prompt for what you want done, and the orchestrator can break it down into workable tasks and then pass those tasks out to agents to do, check, and verify in a way that is reliable, it will be a game changer for sure.

The downside is that most IT jobs will be gone pretty quickly.

[–] yogthos@lemmygrad.ml 1 point 8 minutes ago

I think you'll still need a human in the loop, because only a human can decide whether the code is doing what's intended. The nature of the job is going to change dramatically, though. My prediction is that the focus will shift to writing declarative specifications that act as a contract for the LLM. There are also types of features that are very difficult to specify and verify formally; anything dealing with side effects or external systems is a good example. We have good tools to formally prove data consistency using type systems and provers, but real-world applications have to deal with the outside world to do anything useful. So, it's most likely that the human will work at a higher level, focusing on what the application is doing in a semantic sense, while the agents handle the underlying implementation details.
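One possible shape for that kind of declarative spec, sticking with the state-passing convention from earlier in the thread: the human writes a small schema, and the conductor checks every node's output against it before moving on. The schema format here is invented for illustration, not from any real tool.

```python
# A declarative contract: required state fields and their expected types.
CONTRACT = {"user_id": int, "authenticated": bool}

def check_contract(state, contract):
    """Raise if the state is missing a field or has the wrong type."""
    for key, expected in contract.items():
        if key not in state:
            raise ValueError(f"missing field: {key}")
        if not isinstance(state[key], expected):
            raise TypeError(f"{key} should be {expected.__name__}")
    return state

# A conforming state passes through unchanged; a non-conforming one
# raises before the workflow can proceed.
ok = check_contract({"user_id": 7, "authenticated": True}, CONTRACT)
```

The human maintains `CONTRACT`-style specs at the boundaries, while the agent fills in the node implementations behind them.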