You can't compare a statistical token generator to a deterministic algorithmic program.
I could've sworn that I saw a headline recently that gcc isn't deterministic. But maybe that was some really weird edge case or a bug.
No, but a statistical token generator can help you create a deterministic algorithmic program quickly, if you know what you are doing.
And if you don't know what you're doing, it'll probably be a long time before you realize it, because the token generator really wants you to keep paying your subscription.
Honestly if you do know what you're doing that's still true. They're really good at looking like good code which makes it not always obvious when it's not, even to an experienced developer.
Or maybe more bluntly, they're really good at volume, not necessarily quality.
They’re really good at looking like good code which makes it not always obvious when it’s not, even to an experienced developer.
There can be a lot of difference between an experienced developer and a good/responsible developer.
Know your limits. Professional engineering has been wrestling with these problems for a long time - unfortunately, the practices of professional apprenticeship, sealed drawings, etc. have been only partially and informally migrated into the software development world.
If you don't know what you're doing, you shouldn't be using powerful tools in the first place, whether that's heavy lift cranes, chainsaws, arc welders, or driving an SUV at 80 mph...
The day may come when the token generator manipulates you to keep you subscribed, but at this point in time I don't believe the frontier models are playing those games too extensively - at least not models like GPT and Claude.
Back in the 1990s I was deeply impressed that when my ISP's service started sucking, I could use their service to search for and find alternate ISPs to switch my subscription to. I wondered how long that would continue - so far, you still can - although since broadband came around much of the U.S. is locked into essentially monopoly providers of last-mile connectivity service.
Hopefully, there will be enough competition among LLM providers that subscribers continue to have choices to move to non-manipulative models.
Only that the compiler works in a defined, algorithmic way that can always be expected to work; at worst it uses more CPU registers than needed or something. AI, on the other hand, just spews garbage in a fundamentally statistical way, and despite the enormous efforts to create tools that manipulate it into working more predictably, it still sucks so much of the time.
Another difference is that you are thinking critically when "instructing" a compiler via the code, but you only convince yourself that you're thinking critically when you're instructing an AI. It's not the same, and it actively makes you a worse engineer every time you decide to use it instead of thinking.
Cognitive Surrender. I can feel it happening every time I use this employer-mandated Cursor crap. I’m fighting it as hard as I can. Every AI slop pull request I have to review makes me die a little more inside.
"Over the last few months, we have stopped getting AI slop security reports in the curl project," said Daniel Stenberg, founder and lead developer of curl, in a social media post. "They're gone. Instead, we get an ever-increasing amount of really good security reports, almost all done with the help of AI."
https://www.theregister.com/2026/04/06/ai_coding_tools_more_work/
“If you’re thinking without writing, you only think you’re thinking.”
It's not the form that's the issue. It's the required depth of that thought. When you are actually doing the work yourself, you need to go all the way in your thinking, otherwise it simply won't work. When you're vibe coding, your thinking only goes as far as you see necessary for the AI to have enough context to give something useful in return. It's like comparing the thought process of an analyst with that of the engineer who implements the spec; any engineer knows that difference.
but you only convince yourself that you think critically when you’re instructing an AI, it’s not the same and it actively makes you a worse engineer every time you decide to use it instead of thinking.
So, if you're a lumberjack and you used hand tools to chop down trees, and due to the amount of time and effort required to fell a tree with hand tools you traditionally thought carefully about how you were cutting the tree before and while you were cutting it (not guaranteed - some lumberjacks still made stupid, unthinking cuts) - then along come gas-powered chainsaws. Now you have options: you can spend more time thinking about the cuts before you make them and make even better, more careful cuts that make the process safer and the logs easier to extract after you fell them, or you can just go out and chop down 10x as many trees in the same time as before - thinking less than you did before, because it's just so damn amazing how many trees you can cut down in a day now.
Software development carries less immediate risk to the software developers than arborists experience in their field work. Software development also manifests many of its risks on a longer time horizon, more difficult to predict or even link to the development work. But the choices with AI development are very much the same: are you going to take the time to understand what your powerful new tool is doing, or are you just going to fire it up and show off how fast it does what it does?
Management comes into play in both fields, and bad management pushing either kind of field workers to cut staff and "increase productivity" by arbitrary (bad) metrics has been demonstrated over and over to produce bad results.
There are no such things as "AI experts" at this time - it's far too new for that. Much like the "expert systems" developers of the 1980s, the practitioners are working with methods that are just barely beginning to be developed. Applied judiciously, with proper care, it is already a powerful tool - but like they used to say even back in the early 1970s: "To Err is Human; To Really Foul Things Up Requires a Computer."
Not comparable at all. Power tools work deterministically. A powered chainsaw is not going to have a 0.1% chance of chopping a completely different tree on the other side of the forest. Of course accidents happen; your hand can slip. But a proper comparison would be if you got a computer to look at a large number of powered chainsaws and then generate its own in CAD based on what it's seen, and then you use that generated power tool. Which, for something as potentially dangerous as a powered chainsaw, you most likely wouldn't want to do, and would want to have careful human oversight over every part of design.
There's a certain amount of randomness in how a chainsaw operates... will a certain cut cause the chain to be thrown or not? That has a lot to do with how you use the saw, how you maintain the tension on and lubrication of the chain, what you're trying to cut, etc., and the same goes for the LLMs. They are based on statistics and the "heat" of how far down the list of top results they choose for their answer, but you are the LLM operator: you choose what to do with its output, how much agency you give it, and how thoroughly you review and test its output before using it.
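The "heat" mentioned here is what most LLM APIs call temperature: it rescales the model's raw token scores before sampling, controlling how far down the ranked list of candidates the sampler will realistically reach. A minimal Python sketch, using made-up logits in place of a real model's output (the function name and numbers are illustrative, not any actual API):

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=random):
    """Pick one token index from raw scores ("logits") via
    temperature-scaled softmax sampling. Toy sketch only."""
    # Divide each score by the temperature: low T sharpens the
    # distribution toward the top token, high T flattens it.
    scaled = [score / temperature for score in logits]
    # Numerically stable softmax: subtract the max before exp().
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index according to the resulting probabilities.
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i, probs
    return len(probs) - 1, probs
```

At a temperature near zero this almost always returns the top-scoring token (what APIs call greedy decoding); at high temperatures the choice approaches a coin flip across all candidates, which is exactly the dial the operator turns between predictable and "creative" output.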
Inexperienced idiots would use no chain lube while cutting a 20" dbh standing hardwood with a 14" saw, just doing a straight plunge cut on the downwind / leaning side of the tree, where they will bind the bar in between the base and the rest of the tree if they're lucky enough to even get that far. With enough perseverance they just might drop the tree on their house. It's the same with LLMs, or traditional programming. Put the local high school chess club in charge of the ICBM targeting software? You get what you deserve.
I don't agree. LLMs are by design probabilistic. Chainsaws aren't designed to be probabilistic, and any functionality that is probabilistic (aside from philosophical questions about what it is possible to be certain about, YKWIM) is aimed to be minimised. You're supposed to be able to give the same model the same prompt twice and get two different answers. You're not meant to be able to use a chainsaw the same way on the same object and have it cut significantly differently. You're inherently leaving much more to chance by using LLMs to generate code, and creating more work for yourself as you have to review LLM code, which is generally lower quality than human-written code.
despite the enormous efforts to create tools that manipulate it into working more predictably, it still sucks so much of the time.
LLMs surpass experts in predicting experimental outcomes. BrainGPT, an LLM we tuned on the neuroscience literature, performed better yet.
I feel like this says more about neuroscience than it does about LLMs. :D
But seriously, my teams' experiences and my individual experience with LLMs have been mixed, at best. Even with advanced prompt training, the tools are just not there yet for our work.
I looked at LLM tools for software development a year ago, it was clearly unhelpful then. Showing some inklings of promise, but "just not there for our work" yet.
I looked six months ago and the advancement was dramatic; while it was helpful sometimes and not others, it was clearly improving at an impressive pace. Mind you, I've been dabbling with "AI" since the 1980s - I built a software neural net in 1991 and tried to make it do something useful back then - so... obviously what we've got now is DRAMATICALLY better, and improving faster, than it was waaay back then.
Over the past six months it has become solidly "better" for a lot of uses than the methods it replaces. Now, I also notice big players like Google have been "enshittifying" their previous services for a few years leading up to this, so a lot of the "good stuff" I get from AI now is just what I used to get from basic search or "voice assistant" a few years back, but even ignoring that phenomenon - the frontier models really are better than anything that came before in a lot of ways.
Also, starting six months ago, I actively engaged in learning how to use the LLM based tools - and I believe much of the improvement I have experienced is due to me learning how to use the tools better, in addition to the tools themselves improving.
I don't feel fulfilled as a programmer unless I'm inputting instructions by hand on punch cards. I refuse to work anywhere or use software that uses any higher level abstractions. It might feel faster using Assembly or using a keyboard but there's actually no productivity gains.
Guys I’ve watched a LOT of Star Trek Tee En Gee with my dad when I was growing up and he loved it a lot and he said that someday we’d have holodecks and talking computers and the engineering we do today would be child’s play but guys now I can talk to the computer and it talks back guys it says it’s my friend and I don’t understand all the code it writes at all so this must be the magic future where things are so advanced guys we made it I’m so happy I can just ask daddy to go get me the thing I always wanted but we didn’t know how to make before guys but now it can and this lets me be the BEST engineer like Joey Laugh Orge but on earth and right now instead of in space but later and it’s all thanks to this robot I can totally trust and have to pay a subscription for but the future is nows guys it’s so obvious I can’t believe I’m alive to see it happen
I watched Quantum Leap with my dad when I was a kid. You're gonna try and tell me I'm not writing this reply on Ziggy?
First, I love this analogy. At the end of the day someone is still analyzing and decomposing problems, and whether you use AI primarily to search and summarize, to recommend, or to write some goofy starter unit tests, it should still be the human writing the code.
… and now I can’t unsee this rule of three crap. Ever since I heard about an author getting busted for using AI, and all the talk about how AI generates in “rule of three”, I keep looking at my own writing and saying “wait, I do this too. People are going to think my posts are by an AI.” Every part of this post was written by a human software developer on a cell phone while I should be getting ready for work instead.
Also I feel like pointing out: assembler is the human-accessible version, where you break code into files and procedures, give things useful names, you have a symbol table that gives you the addresses where your names ended up. You can insert things and edit things and all the addresses shift around to accommodate your changes automatically. You add comments, even block comments. You “inline” methods with assembler macros.
I would say assembler is more accessible than people think, and complex programs don’t require as much of the “hold everything in your brain at once” horsepower as people think.
99%? We can get these numbers lower :-)