producing subtly broken junk
The difference between you and people that say it's amazing is that you are capable of discerning this reality.
What I don't get, though, is how the vibe code bros can't discern this reality.
How can they sit there and not see that their vibe-coded app just doesn't do what they wanted it to do? Eventually, you've got to try actually running the app, right? And how do you keep drinking the AI kool-aid when you find out that the app doesn't work?
Vibe code bros aren't real programmers. They're business people, not computer people. Even if they have a CS degree, they only got that because they think it'll get them more money. They lack passion and they don't care about understanding anything. They probably don't even care about what they're generating beyond its potential to be used in a grift.
I graduated college not that long ago and my CS classes had quite a few former business majors. They switched because they thought it'd be more lucrative, but since they only cared about money they didn't bother to actually learn the material, especially since they could just vibe code through everything.
So much this.
After working in tech companies for the last 10 years I've noticed the difference between people that "generate code" and those that engineer code.
My worry about the industry is that vibe coding gives the code generators the ability to generate even more code. The engineers (even those that use vibe tools) are not engineering as much code by volume compared to "the generators".
My hope is that this is one of those "short term gain, long term pain" things that might self correct in a couple of years 🤞.
They're the same people who copied code from Stack Overflow, the ones you had to walk through how to actually fix every PR. The difference is that the C-suite types are backing them this time.
No, I think you do get it. That's exactly right. Everything you described is absolutely valid.
Maybe the only piece you're missing is that "almost right, but critically broken in subtle ways" turns out to actually be more than good enough for many people and many purposes. You're describing the "success" state.
/s but also not /s because this is the unfortunate reality we live in now. We're all going to eat slop and sooner or later we're going to be forced to like it.
Or maybe we will be forced to switch off LLMs and start solving the bugs introduced by their usage using our minds.
As a professional software developer, I truly hope that is the case (and I plan to charge at least 10x my current rate after the AI bubble pops, when I'm looking for my next job, as I expect there to be a massive shortage of people skilled enough to actually deal with the nightmare spaghetti AI codebases).
Fun times ahead.
It will be interesting (read as: bad) times getting to that point, and I agree. The Junior market has been basically nonexistent ever since coding agents appeared, stripping the industry of its future Seniors. We will be chained to our desks.
You and me both. We will be the next version of the COBOL Cowboys.
"almost right, but critically broken in subtle ways" turns out to actually be more than good enough for many people and many purposes. You're describing the "success" state.
Exactly. The consequences are at worst a problem for "future me", and at best "somebody else's problem".
AI didn't create this reality, but it's certainly moved it into the spotlight and to "center stage."
Their usual (crap) defense is:
a) you're not paying enough, so of course it is crap
b) you're not prompting right, you need to use detailed, precise language...
c) that is just anecdotal evidence, you need to do an actual study, yadda yadda.
d) it will improve...
(any others anyone has noticed?)
Vibe coding, in the sense of telling the model to make codebase changes, then directly using the output produced, is 100% marketing bullshit that does not scale beyond toy examples.
Here’s the rub: Claude is extremely useful as an advanced autocomplete, if and only if you’re guiding it architecturally through every task it runs, and you vet + revise the output yourself between iterations. You cannot effectively pilot entirely from chat in a mature codebase, and you must compile robust documentation and instructions for Claude to know how to work with your codebase.
You also must aggressively manage all information in the context window yourself and keep it clean. You mentioned going in circles trying to get the robot to correct itself: huge mistake. Rewind to before the error, and give it better instructions to steer it away from the pitfall it fell into. In the same vein, you also need to reset ASAP after pushing past the 100k-token mark, because the models start melting into putty soon after (yes, even the "extended" 1M-window ones).
I’m someone who has massively benefited from using modern LLMs in my work, but I’m also a massive hater at the same time: They’re just a tool, not magic, and have to be used with great care and attention to get reasonable results. You absolutely cannot delegate your thinking to them, because it will bite you, hard and fast.
My preferred way of using LLM coders is:
Then I have it update the spec. I start a new session to have it implement. Finally, I review the code. If I don't like it, I undo and revisit the spec. Usually it's because I'm trying to do too much at once and need to break it down into multiple specs.
The reason you kept going around in circles and reintroducing bugs you already got rid of is that LLMs don't remember things. Every time you tell it something, the entire conversation gets sent to it again so it has all the parts. Eventually it runs out of room and starts cutting off the beginning of the conversation, and now the LLM can't 'remember' what you were even talking about.
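Under the hood, the loop looks roughly like this (a minimal sketch; `MAX_TOKENS`, `count_tokens`, and `call_model` are hypothetical stand-ins, not any real API):

```python
MAX_TOKENS = 100_000

def count_tokens(messages):
    # crude approximation: ~4 characters per token
    return sum(len(m["content"]) for m in messages) // 4

def call_model(messages):
    return "(model reply)"  # placeholder for the real API call

history = []

def send(user_text):
    history.append({"role": "user", "content": user_text})
    # the whole conversation is resent every turn; once it no longer fits,
    # the oldest turns get dropped and your early instructions silently vanish
    while count_tokens(history) > MAX_TOKENS:
        history.pop(0)
    reply = call_model(history)
    history.append({"role": "assistant", "content": reply})
    return reply
```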
For that, you can ask it to update a documentation/status file on every change, and you can manually add the goal and/or future tasks to it.
With that, I improved my success rate a lot even when starting new sessions (add a note in the instructions file to use this file for reference, so you don't have to remind it every time).
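As a sketch, such a file might look something like this (the name and sections are just one way to do it, not a fixed convention):

```
# STATUS.md (hypothetical example) - the agent updates this after every change
Goal: migrate the auth module from sessions to JWT
Done:
  - token-issuing endpoint
  - middleware validates signatures
Next:
  - refresh-token rotation
Pitfalls:
  - do NOT touch the legacy /login route (it breaks the mobile client)
```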
The solutions it generated were almost write every time
Did you vibe code this post? 😂
Did you have MCP tooling set up so it can get LSP feedback? This helps a lot with code quality, as it'll see warnings/hints/suggestions from the LSP.
Unit tests. Unit tests. Unit tests. Unit tests.
I cannot stress enough how much less stupid LLMs get when they have proper, solid unit tests to run themselves and compare expected vs actual outcomes.
Instead of reasoning out "it should do this" they can just run the damn test and find out.
They'll iterate on it 'til it actually works, and then you can look at it and confirm whether it's good or not.
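Even a tiny suite gives it something concrete to run. A minimal sketch (`normalize_path` is a made-up example function, not from any real project):

```python
import unittest

def normalize_path(p: str) -> str:
    # hypothetical function the LLM is iterating on
    return p.strip().replace("\\", "/").rstrip("/") or "/"

class TestNormalizePath(unittest.TestCase):
    def test_trailing_slash_removed(self):
        self.assertEqual(normalize_path("/a/b/"), "/a/b")

    def test_backslashes_converted(self):
        self.assertEqual(normalize_path("a\\b"), "a/b")

    def test_root_is_preserved(self):
        self.assertEqual(normalize_path("/"), "/")

if __name__ == "__main__":
    unittest.main()
```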
Key is having it write tests and iterate by itself, and also managing context in various ways. It only works on small projects in my experience. And it generates shit code that's not worth manually working on, so it kind of locks your project into being always dependent on AI. Being always dependent on AI, plus AI eventually hitting a brick wall, means you'll reach a point where you can't really improve the project anymore. I.e. AI tools are nearly useless.
Have you been coding professionally long?
I find that the only time I can use these chatbots for a task is when I already know what I'm doing, so that I can read the output and fix the issues. This is like having junior devs on your team and being a code reviewer more than being a full-time coder. They get a lot of things wrong but there's so much usable that you can save a ton of time over doing everything yourself from scratch.
Just like with junior devs, you can send them back to fix what you know is wrong and give them feedback to improve various things you would prefer done another way. There's no emotions though, so you can just be blunt and concise with feedback.
Nice comparison, but the bugs created by junior software developers are usually much easier to find than the bugs created by LLMs.
They get a lot of things wrong but there's so much usable that you can save a ton of time over doing everything yourself from scratch.
Your experience with Junior devs has been quite different from mine.
I work with Junior devs because someday they will be senior devs who owe me a favor, even though so far they've only ever cost me time.
Edit: I also work with junior devs because sometimes a tiny corner of my job is both mind-numbingly boring, and also weirdly difficult to automate away.
I assign that work to junior devs because I don't want to do it.
In doing so, I am wasting the boss's money, since I could do it faster.
But I consider it just another part of the price of hiring me, because it keeps me happy.
I rarely use LLMs for generating code. Usually, by the time I've provided all the necessary context, I might as well have just written the code myself. I do use LLMs for doing research. As long as it's understood that the response is only as accurate as the source material, they often do a decent job of distilling down to what I'm actually looking for.
I use my own brain to sketch out what I want to build and how it should work. Before writing any code, I use the LLM to point out gaps and how to close them. Pros and cons of certain decisions. Things you would discuss with colleagues. Then I come up with a plan for the order I want the code to be written in and how to fragment it into smaller, easy-to-handle modules. I supervise and review each chunk produced, adapt code mostly manually if required, write the edge-case tests - most importantly, run it - and move to the next. This is how I use it successfully and get results much faster than the traditional way.
At my job, though, I can witness how other people use it. I was asked to review a fully vibe-coded fullstack app that contains every mistake possible. Unsanitized input. Hardcoded tokens. Hardcoded credentials. 2500+ LoC classes and functions. Business logic orchestrators masquerading as services. Full table scans on each request. Cross-tenant data leaks. Loading whole tables into memory. No test coverage for the most critical paths. Tests requiring external services to run. The list goes on. Now they want me to make it production-ready in 8 weeks "because you have AI".
My point: this was an endorphin-fueled vibe-coding session by someone who has no experience as a developer and asked the LLM to "just make it work", lacking the ability to supervise the work that comes with experience. It was enough to make it run locally and pitch a "system engineered w/o any developer" to management.
Those systems need guidance just as a Junior would, and I am strongly and loudly advocating to restrict access to this incredibly useful tool to people who know what they're doing. Nobody would let a manager use a laser cutter in a carpentry workshop without proper training; worst case, they'll burn down the whole shack.
I appreciate you having an open mind about it at least. I needed some time to adjust as well. I don't even use Opus; most of the time my workflow consistently produces usable code with Sonnet. Maybe you can try what I explained initially? Just don't try any language you're not familiar with, that will not end well.
In my experience there are three ways to be successful with this tool:
The issue with debugging is that it doesn't actually think. LLMs pattern-match to a chain of thought based on signals, not reasoning. For it to debug, you need good signals in your code that explicitly say what it is doing, and LLMs do not write code with that level of observability by default.
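To make the "good signals" point concrete, this is the kind of explicit logging that gives the model evidence to match against instead of guessing (the function and messages are made up for illustration):

```python
import logging

logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
log = logging.getLogger(__name__)

def apply_discount(price: float, rate: float) -> float:
    # log inputs and outputs so a failure leaves a visible trail
    log.debug("apply_discount: price=%r rate=%r", price, rate)
    if not 0.0 <= rate <= 1.0:
        log.error("apply_discount: rate %r outside [0, 1]", rate)
        raise ValueError(f"invalid discount rate: {rate}")
    result = round(price * (1.0 - rate), 2)
    log.debug("apply_discount: result=%r", result)
    return result
```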
Edit: one of my workflows that I had success with is as follows:
The .NET runtime team, after 10 months of using and measuring where LLMs (including the latest Claude models) shine, reported a mind-boggling success rate peaking at 75% (sic!) for changes of 1-50 LOC - and that's for an agentic setup (you give it a prompt, context, etc., and it can run the codebase, compile it, add tests, reason, and repeat from any step).
Except it was clearly bullshit because it didn’t work.
Welcome to the LLMs where everything is hallucinated and correctness doesn't matter.
Is anyone having success with these tools
Define success.
Is there a special way to prompt it?
It gets better the more you use it; you will learn what works for you and what does not. Right now the hot shit is "autonomous agent swarms", peddled by the token sellers as a way to output correct massive features. Do not touch that for now.
What helps with Claude / LLMs 101:
when it tells you something about an API, a tool, or whatever, tell it the tool version and order it to give you the documentation page proving the solution is possible.
when it oneshots a working solution you will get a dopamine hit. Be aware of that, as it can be addictive or make you trust it. Do not trust it, it sucks long term.
it will always default to a below-average solution. Know where your hotspots are, and be extra judgy there.
it will get lazy and lie to you, especially with tests
it will not propose code refactors on its own.
despite the token peddlers' claims, no matter if you're using the 1M-token-context-window model, the shit degrades once the context window is over 20k-30k tokens - so switch context windows often for better outcomes, but that means you will be burning more money, which obviously benefits the token peddlers.
do not trust the hype - so far any and all tall claims of a breakthrough from the token peddlers were lies (e.g. vibing a working OS that can run Doom, vibing a 96% next.js replacement in a week, vibing a browser, a compiler, a browser jailbreak via Mythos).
Would I get better results during certain hours of the day?
AFAIK the USA timezone has worse performance.
Don't jump right into coding.
Take a feature you want, and use the plan feature to break it down. Give the plan a read. Make sure you have tests covering the files it says it'll need to touch. If not, add tests (can use LLM for that as well).
Then let the LLM work. Success rates for me are around 80% or higher for medium tasks (30 mins to 1 hour for me without an LLM, 15-30 mins with one, including code review).
If a task is 5 mins or so, it's usually hit or miss (since planning would take longer). For tasks longer than 1 hour or so, it depends. Sometimes the code is full of simple idioms and the LLM can easily crush it. Other times I need to actively break it down into digestible chunks.
you need to fully be able to program to work with these things, in my experience.
you have to explain what you want very specifically, in precise programming terms.
i tried a preview of chatgpt codex and it’s working better than my free version of claude, but codex creates a whole virtual programming environment, you have to connect it to a github repository, then it spins up an instance with tools you include and actually tests the code and fixes bugs before sending it back to you.
but you still need to be able to find the bugs and fix them yourself.
oh and i think they work best with python, but i’ve also used ruby and dart and it’s decent.
it’s kinda like a power tool, it’ll definitely help you a lot to fix a car but if you can’t do it with wrenches it won’t help very much.
I only use AI for generating OK-looking UI.
Anthropic says Mythos will find bugs in FreeBSD, bank systems, etc. What bullshit.
Oh, it will 'find bugs' alright. And then flood FreeBSD's bug report system with bullshit bug reports that turn out to be nothing, but require expert human review to discern that.
It's not called "correct" coding for a reason.
That's why people are wrong so often: they feel like something is right, but don't check. That's how you get anti-vaxxers, manosphere people, MAGA, QAnon, Brexit, etc.
I have a full pro model for Kiro at work. It does actually work, but we have custom MCP servers for all the internal tools, context on how to use these tools, style guidelines, etc. and then on top of that we have a lot of AI context files in the code base to help the AI understand the code base and make the correct changes.
I’ve been using it on a side project and it works if you know how to constrain it. It does get things wrong a lot. But the big thing about it is doing spec driven development where you give it a write up and it makes a requirements doc and a design doc with a lot of correctness properties in them to follow when generating and making the tasks.
I don’t believe people can vibe code unless they can actually code. It’s a whole different way of coding. I still manually edit what it does a lot.
A lot of people explain it like it’s a brand new junior developer. You need to give it as much context as possible, tell it to exactly what you want, tell it what you don’t want, tell it why, etc. and it still may not listen exactly.
The trick about vibe coding is that you confidently release the messed up code as something amazing by generating a professional looking readme to accompany it.
I think it's pretty heavily dependent on what you're trying to do. I've gotten a lot of push from higher-ups at my company to use Copilot wherever possible. So I've spent a lot of time lately having Copilot + Opus write code for me. Most of what I'm doing is super straightforward middleware APIs or basic internal front ends. Since it has access to very similar codebases for reference, and we have custom agents that point it in the right direction, it's a pretty good experience.
However, if I ask it to do something totally new, it does okay, more like what you've experienced. It takes a lot of hand-holding, but it usually gets the job done as long as you're very descriptive in your prompt. Probably not faster than an experienced developer at the moment, though.
You just didn’t use the right prompts!!!!
/s
Opus 4.6 is a dream for me, though I'm in the web dev area, which is quite mature and has a lot of training data. The lifesaver for avoiding regressions is to comprehensively test your code. This works as a kind of quality checkpoint during development.
Secondly, give it the right tooling and context: that means, at the very least, a good ACP server (editor) and appropriate MCP servers. Search for what's appropriate in your domain. For 3d math, at the very least I'd think it would need a visual snapshotting tool. There are probably tons of relevant ones.
Thirdly, consistently expand on your CLAUDE.md, and add and develop new skills as you go (let it write its own under your instructions). Force it to read them.
It probably depends on a lot of factors, but disciplined usage of these approaches will go a long way. Opus' context window is huge, which makes the approach more consistent.
almost write
Indeed.
Also working on some 3d maths.
I've used the free versions a bit, but not really to the extent that I'd call it vibe coding. The chatbots often know where to find libraries or pre-existing functions that I don't know about. It's also okay at algorithms for well-defined problems, but it often says to be careful not to do something I absolutely need to do, or vice versa. It's very hit and miss on debugging. It'll point out obvious stuff (typos) reliably, and it can usually do some iteration stuff, but it usually doesn't pick up on other things. Once in a rare while it will impress me by suggesting I look at a particular thing, and I think it manages this better in new chats, but most complex issues defeat it. I use it as a faster Stack Overflow, but you need to be able to work through the code yourself, understand what you're doing, and test that individual steps are doing what they need to do. The bots can't really do any sort of planning or breaking a problem into sub-problems, and they really suck at thinking about 3d stuff.
I recently started using Pro to debug a problem I couldn't solve. The one thing I need from it is an extra insight, a second opinion (because I'm the only developer), and letting it read the whole folder helps: it identified a problem I didn't consider because it was in a file outside of where I was looking.
I used Opus 4.6 Extended
Stop being cheap, OP. You clearly just need to shell out multiple billions of dollars for access to mythos /s
“Almost but not quite” is exactly my experience with Claude.
The only time I’ve had real success is telling it to do a simple API change that touches a dozen files. It took a while and I’m not sure it was faster than doing it manually, but at least it was less boring.
Possibly important context: I only started really using it a few weeks ago.
You’re probably done with this. But if you give claude a test case or two (or have it try to make them) you can have claude run the test case, and then it will iterate.
Also, aggressively use plan mode, and if claude screws up more than three times, do /clear, explain to it that it's screwing up, and then give it new instructions.
I tried using Claude to convert some bash scripts to docker compose files, and it made several mistakes with case-sensitivity and failed to properly quote certain path declarations that had spaces in them. If it can make such incredibly simple mistakes converting a script to a markup language, I wouldn't dare trust it to actually compose anything in an actual programming language like Python or Rust or C# or Swift or whatever you're using.
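For reference, this is the kind of thing it kept getting wrong (a hypothetical snippet, not my actual files): paths with spaces need quoting, and keys are case-sensitive.

```yaml
services:
  media:
    image: alpine:3.20
    volumes:
      - "/mnt/My Media/incoming:/data/incoming:ro"  # quoted because of the space
    environment:
      LIBRARY_DIR: "/data/incoming"  # LIBRARY_DIR and library_dir are different keys
```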
regress with old bugs
Have it write a test suite that enforces the correct behavior, and tell it that the test suite must pass after any change. Make sure it’s not cheating (return true) inside the test suite.
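A minimal sketch of that: pinned input/output pairs a stubbed-out cheat can't satisfy (`slugify` is just a hypothetical function standing in for whatever behavior you want to protect):

```python
import pytest

def slugify(title: str) -> str:
    # hypothetical function under test
    return "-".join(title.lower().split())

# concrete expected values: a "return True" style stub can't pass these
@pytest.mark.parametrize("title,expected", [
    ("Hello World", "hello-world"),
    ("  Extra   Spaces  ", "extra-spaces"),
    ("already-slugged", "already-slugged"),
])
def test_slugify(title, expected):
    assert slugify(title) == expected
```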