I find it hard to square his experience with my own. Whenever I use LLMs (admittedly only the free versions, because fuck paying these scrapers), they fail miserably to write code that even runs, let alone follows good coding practices, the moment I ask for more than one specific example. Broaden the prompt even a little and the answers I get are thought-starters at best and unusable garbage at worst.
Opus 4.6 and GPT 5.3 Codex produce some amazing results if you spend a lot of time scoping, speccing, and testing each stage. Just throwing a low-effort prompt over the wall isn't going to get you anything very good. But time spent on the lead-up, plus close monitoring of the results, can get you production-ready code with little tech debt, extremely quickly and without a lot of money (energy) spent on inference.
Now downvote away.
Just today, for a little side project, I asked GPT-5 mini to give me a Python function that returns the access rights for a given file/folder using smbprotocol. To me that read as a pretty concise ask, but the results kept using functions/attributes that don't exist.
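For reference, here's roughly the shape a working version takes. This is an untested sketch: the high-level smbclient wrapper doesn't expose ACLs directly, so it drops down to smbprotocol's low-level SMB2 QUERY_INFO machinery. The server, share, and credentials are placeholders, the DACL flag value is transcribed from MS-DTYP 2.4.7 (smbprotocol doesn't export it), and exact module paths may differ between smbprotocol versions.

```python
import smbclient

from smbprotocol.file_info import InfoType
from smbprotocol.open import (
    FilePipePrinterAccessMask,
    SMB2QueryInfoRequest,
    SMB2QueryInfoResponse,
)
from smbprotocol.security_descriptor import SMB2CreateSDBuffer

# SECURITY_INFORMATION flag from MS-DTYP 2.4.7 -- smbprotocol doesn't export it.
DACL_SECURITY_INFORMATION = 0x00000004


def get_security_descriptor(raw_open, info):
    # Build an SMB2 QUERY_INFO request asking for security information
    # on an already-opened file handle.
    query_req = SMB2QueryInfoRequest()
    query_req["info_type"] = InfoType.SMB2_0_INFO_SECURITY
    query_req["output_buffer_length"] = 65535
    query_req["additional_information"] = info
    query_req["file_id"] = raw_open.file_id

    request = raw_open.connection.send(
        query_req,
        sid=raw_open.tree_connect.session.session_id,
        tid=raw_open.tree_connect.tree_connect_id,
    )
    response = raw_open.connection.receive(request)

    query_resp = SMB2QueryInfoResponse()
    query_resp.unpack(response["data"].get_value())

    # The buffer holds a self-relative security descriptor.
    sd = SMB2CreateSDBuffer()
    sd.unpack(query_resp["buffer"].get_value())
    return sd


# Placeholder server, share, and credentials -- adjust for your environment.
smbclient.register_session("fileserver.example.com", username="user", password="pass")

# READ_CONTROL is enough to read the security descriptor; no data access needed.
with smbclient.open_file(
    r"\\fileserver.example.com\share\docs\report.txt",
    mode="rb",
    buffering=0,
    desired_access=FilePipePrinterAccessMask.READ_CONTROL,
) as fd:
    sd = get_security_descriptor(fd.fd, DACL_SECURITY_INFORMATION)
    for ace in sd.get_dacl()["aces"].get_value():
        print(ace)  # each ACE carries a SID and an access mask
```

The trick is that smbclient.open_file exposes the underlying Open object as fd.fd, so you can send raw SMB2 messages for anything the high-level API doesn't cover.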
Any time I've asked for scripts, it's been flawless and way more than I asked for, usually with switches like --host, --key, --auth, etc. But I'm using at least Sonnet, if not Opus. I'd punch out a script for you with it, but I have nothing in my network that uses SMB to test on.
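To illustrate, the flag scaffolding these models tend to bolt on unprompted looks something like this generic, hypothetical skeleton (not any real tool):

```python
import argparse


def parse_args():
    # Typical unprompted CLI scaffolding: host/key/auth switches.
    parser = argparse.ArgumentParser(description="Connect to a remote host.")
    parser.add_argument("--host", required=True, help="server to connect to")
    parser.add_argument("--key", help="path to a key file")
    parser.add_argument(
        "--auth",
        choices=["password", "key"],
        default="password",
        help="authentication method",
    )
    return parser.parse_args()


if __name__ == "__main__":
    args = parse_args()
    print(f"Connecting to {args.host} using {args.auth} auth...")
```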
If you're going to use GPT, you want 5.2-Coder at least, and honestly I'm not as impressed with OpenAI's products as other people seem to be.