[–] Quacksalber@sh.itjust.works 1 points 1 day ago (3 children)

Before diving in, here are my main take-aways:

AI has moved beyond writing small snippets of code and is beginning to participate in engineering large systems.
AI is crossing from local code generation into global engineering participation: CCC maintains architecture across subsystems, not just functions.
CCC has an “LLVM-like” design (as expected): training on decades of compiler engineering produces compiler architectures shaped by that history.
Our legal apparatus frequently lags behind technology progress, and AI is pushing legal boundaries. Is proprietary software cooked?
Good software depends on judgment, communication, and clear abstraction; AI amplifies the importance of all three.
AI coding is automation of implementation, so design and stewardship become more important.
Manual rewrites and translation work are becoming AI-native tasks, automating a large category of engineering effort.
AI, used right, should produce better software, provided humans actually spend more energy on architecture, design, and innovation.
Architecture documentation has become infrastructure as AI systems amplify well-structured knowledge while punishing undocumented systems. 

I find it hard to square his experience with my own. Whenever I use LLMs (admittedly only the free versions, because fuck paying these scrapers) and ask for more than one specific example of code, they fail miserably to write code that even runs, let alone follows good coding practices. Broaden the prompt even a little and the answers I get are thought-starters at best and unusable garbage at worst.

[–] ikidd@lemmy.world 1 points 1 day ago (2 children)

Opus 4.6 and GPT 5.3 Codex produce some amazing results if you spend a lot of time scoping, speccing, and testing each stage. Just throwing it over the wall with a low-effort prompt isn't going to get you anything very good. But time spent on the lead-up, plus close monitoring of the results, can give you production-ready code with little tech debt, extremely quickly and without a lot of money (energy) spent on inference.

Now downvote away.

[–] Quacksalber@sh.itjust.works 1 points 1 day ago (1 children)

Just today, for a little side project, I asked GPT-5 mini for a Python function that returns the access rights for a given file/folder using smbprotocol. To me that read as a pretty concise ask, but the results kept using functions and attributes that don't exist.
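
For context, as far as I can tell smbprotocol's high-level smbclient module doesn't expose a direct "give me the ACL" call, which is probably why the model kept inventing one. The closest thing I can sketch with the documented low-level API is probing which access masks an open succeeds with; that answers "what can this session do to the file" rather than returning the actual ACL. The server, share, credentials, path, and the probe_access helper below are all placeholders, not the function I asked the model for:

```python
# Rough sketch: probe effective access on an SMB file by attempting opens
# with different access masks. Server/share/credentials/path are placeholders.
import uuid

from smbprotocol.connection import Connection
from smbprotocol.exceptions import SMBResponseException
from smbprotocol.open import (
    CreateDisposition,
    CreateOptions,
    FileAttributes,
    FilePipePrinterAccessMask,
    ImpersonationLevel,
    Open,
    ShareAccess,
)
from smbprotocol.session import Session
from smbprotocol.tree import TreeConnect


def probe_access(tree, rel_path):
    """Report which generic access masks the current session can open rel_path with."""
    results = {}
    for name, mask in (
        ("read", FilePipePrinterAccessMask.GENERIC_READ),
        ("write", FilePipePrinterAccessMask.GENERIC_WRITE),
    ):
        handle = Open(tree, rel_path)
        try:
            handle.create(
                ImpersonationLevel.Impersonation,
                mask,
                FileAttributes.FILE_ATTRIBUTE_NORMAL,
                ShareAccess.FILE_SHARE_READ | ShareAccess.FILE_SHARE_WRITE,
                CreateDisposition.FILE_OPEN,
                CreateOptions.FILE_NON_DIRECTORY_FILE,
            )
            results[name] = True
            handle.close()
        except SMBResponseException:
            # Most commonly STATUS_ACCESS_DENIED; treat any failed open as "no".
            results[name] = False
    return results


connection = Connection(uuid.uuid4(), "fileserver.example.com", 445)
connection.connect()
session = Session(connection, "user", "password")
session.connect()
tree = TreeConnect(session, r"\\fileserver.example.com\share")
tree.connect()
print(probe_access(tree, "some/file.txt"))
connection.disconnect(True)
```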

[–] ikidd@lemmy.world 1 points 1 day ago

Any time I've asked for scripts, it's been flawless and way more than I asked for, usually with switches like --host, --key, --auth, etc. But I'm using at least Sonnet, if not Opus. I'd punch out a script for you with it, but I have nothing on my network that uses SMB to test against.
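
To give a sense of what I mean by the switches, this is roughly the shape of the scaffolding it adds on its own; the flag names here are illustrative placeholders, not taken from any real script:

```python
# Illustrative only: the kind of CLI scaffolding these models tend to add unprompted.
# Flag names (--host, --key, --auth) are placeholders.
import argparse


def parse_args():
    parser = argparse.ArgumentParser(description="Example SMB helper script")
    parser.add_argument("--host", required=True, help="Server to connect to")
    parser.add_argument("--key", help="Path to a credential/key file")
    parser.add_argument("--auth", choices=["ntlm", "kerberos"], default="ntlm",
                        help="Authentication mechanism")
    return parser.parse_args()


if __name__ == "__main__":
    args = parse_args()
    print(f"Connecting to {args.host} using {args.auth} auth")
```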

If you're going to use GPT, you want 5.2-Coder at least, and honestly I'm not as impressed with OpenAI's products as other people seem to be.