Will you PLEASE stop saying "coding is a practical use case"? This is the third appeal I've made on this subject. (Do you read your comments?) If you want bug-ridden code with security issues, code which is not extensible and which no-one understands, then sure, it's a practical use case. Just like if you want nonsensical articles full of invented facts, then article writing is a practical use case. But as I've pointed out already, no reputable publication is now using LLMs to write its articles. Why is that? Because it obviously doesn't work.
Let's face it: the only reason you're saying "coding is a practical use case" is that you yourself don't code, and don't understand it. I can't see any other reason why you would assume the problems experienced in other domains somehow don't apply to coding. Newsflash: they do. And software engineering definitely doesn't need the slop any more than anyone else does. So I hope this is my final appeal: please stop perpetuating this myth. If you want more information on the problems of using LLMs to code, I can talk at great length about it - feel free to reach out. Thanks...
The point is, there has always been a trade-off between speed of development and quality of engineering (confidence in the code, robustness of the app, etc.). I don't see LLMs as either changing this trade-off or moving the needle (greater quality in a shorter time), because they are probabilistic and can't be relied upon to produce the best solution - or even a correct solution - every time. So you're going to have to pick your way through every single line they generate in order to have the same confidence you would have if you had written it yourself - and this is unlikely to save time, because understanding someone else's code is always more difficult and time-consuming than writing it yourself. When I hear people say it is "making them 10x more productive" at coding, I think, "and also 10x as unsure of what you've actually produced"...
You'll also need to correct it when it does something you don't want. Now this is pretty interesting, if you think about it. Imagine you give an LLM a prompt, and it produces something - but not exactly what you want. What is the standard advice? "Provide a more specific prompt!" Ok, so we write a more specific prompt - the results are better, but still fall short. What now? "Keep making the prompt more specific!" Ok, but wait - eventually won't I be supplying about as many tokens to the LLM as it generates in the solution? Because if I'm perfectly specific about what I want, isn't that just the same as writing the solution myself in a programming language? Indeed, isn't that the purpose of programming languages in the first place?...
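To make that convergence concrete, here is a small illustrative sketch (the task, the prompt text, and the `slugify` function are all hypothetical, invented for this example): once a prompt spells out every rule unambiguously, it carries roughly as much information as the code it asks for.

```python
import re

# A vague prompt leaves the behaviour underspecified:
vague_prompt = "Write a function that turns a title into a URL slug."

# Removing the ambiguity means spelling out every rule...
specific_prompt = (
    "Write a function slugify(title) that lowercases the input, "
    "replaces each run of non-alphanumeric characters with a single "
    "hyphen, strips leading/trailing hyphens, and returns '' for "
    "empty input."
)

# ...at which point the unambiguous specification IS, in effect,
# the program - and shorter to state in a programming language:
def slugify(title: str) -> str:
    """Lowercase, collapse non-alphanumerics to single hyphens, trim."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

print(slugify("Hello, World!"))  # -> hello-world
```

The specific prompt above is already longer than the function body it describes, which is the point: a programming language is precisely a notation for being perfectly specific.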
We software developers very often pull chunks of code from various locations - not just Stack Overflow. Often they are chunks of code we wrote ourselves, which we then adapt to the new system we're inserting them into. This is great, because we don't need to make an effort to understand the code we're inserting - we already understand it, because we wrote it...
"You should consider combing through Hacker News to see how people are actually making successful use of LLMs" - the problem with this is there are really a lot of hype-driven stories out there that are basically made up. I've caught some that are obvious - e.g. see my comment on this post: https://substack.com/home/post/p-185469925 (archived) - which then makes me quite sceptical of many of the others. I'm not really sure why this kind of fabrication has become so prevalent - I find it very strange - but there's certainly a lot of it going on. At the end of the day I'm going to trust my own experiences actually trying to use these tools, and not stories about them that I can't verify.
~ Tom Gracey
Absolutely... Thank you, from the very depths of my heart and soul... dear Tom Gracey, programmer, artist... for the marvel you do... for the wisest attitude, for the belief in the human... in effort... in art...

