I expect that programmers are going to increasingly focus on defining specifications while LLMs handle the grunt work. Imagine declaring what the program must do, e.g., "This API endpoint must return user data in <500ms, using ≤50MB memory, with O(n log n) complexity", and an LLM generates solutions that adhere to those rules. The approach could resemble a genetic algorithm: the LLM proposes some initial solutions, the ones closest to the spec are selected, and the process iterates until a solution works well enough.
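To make the genetic-algorithm analogy concrete, here is a toy sketch of that loop. Everything in it is hypothetical: each "candidate" is just a dict of measured properties standing in for a benchmarked LLM-generated implementation, and `mutate` stands in for "ask the LLM for a variant of this solution".

```python
import random

# Hypothetical spec, mirroring the example above (complexity omitted for brevity).
SPEC = {"latency_ms": 500, "memory_mb": 50}

def fitness(candidate):
    """Lower is better: total amount by which the candidate exceeds the spec."""
    return (max(0, candidate["latency_ms"] - SPEC["latency_ms"])
            + max(0, candidate["memory_mb"] - SPEC["memory_mb"]))

def mutate(candidate, rng):
    """Stand-in for 'ask the LLM for a variant of this solution'."""
    return {
        "latency_ms": max(1, candidate["latency_ms"] + rng.randint(-80, 40)),
        "memory_mb": max(1, candidate["memory_mb"] + rng.randint(-10, 5)),
    }

def evolve(generations=300, pop_size=8, seed=0):
    rng = random.Random(seed)
    # Initial "LLM attempts": deliberately over budget.
    population = [{"latency_ms": rng.randint(600, 900),
                   "memory_mb": rng.randint(60, 120)} for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness)
        if fitness(population[0]) == 0:          # best candidate meets the spec
            return population[0]
        survivors = population[: pop_size // 2]  # keep the ones closest to spec
        population = survivors + [mutate(rng.choice(survivors), rng)
                                  for _ in range(pop_size - len(survivors))]
    population.sort(key=fitness)
    return population[0]

best = evolve()
```

In a real system the fitness function would come from actually running the generated code under benchmarks, and keeping the survivors unchanged (elitism) guarantees the best candidate never regresses between iterations.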
I'd also argue that this is a natural evolution. We don't hand-assemble machine code today, most people aren't writing sorting algorithms from scratch, and so on. I don't think it's a stretch to imagine that future devs won't fuss with low-level logic. LLMs can be seen as "constraint solvers" akin to a chess engine, but for code. It's also worth noting that modern tools already do this in pockets: AWS Lambda lets you declare "run this function with 1GB RAM, timeout after 15s". Imagine scaling that philosophy to entire systems.
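That "declare the budget, let the runtime enforce it" philosophy can be sketched in a few lines of plain Python. This is a toy illustration, not AWS's actual API: a hypothetical `budget` decorator that declares time and memory limits and verifies a run against them.

```python
import functools
import time
import tracemalloc

def budget(timeout_s, memory_mb):
    """Toy decorator: declare resource limits and check each call against them."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            tracemalloc.start()
            start = time.perf_counter()
            try:
                result = fn(*args, **kwargs)
            finally:
                elapsed = time.perf_counter() - start
                _, peak = tracemalloc.get_traced_memory()
                tracemalloc.stop()
            if elapsed > timeout_s:
                raise RuntimeError(f"time budget exceeded: {elapsed:.3f}s > {timeout_s}s")
            if peak > memory_mb * 1024 * 1024:
                raise RuntimeError(f"memory budget exceeded: peak {peak} bytes")
            return result
        return wrapper
    return decorator

# Hypothetical handler with a declared budget, like a Lambda-style config.
@budget(timeout_s=0.5, memory_mb=50)
def handler(n):
    return sorted(range(n))  # stand-in workload

result = handler(10_000)
```

Lambda enforces its limits at the platform level rather than in-process, but the shape is the same: the constraints live in the declaration, not in the function body.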