this post was submitted on 11 Apr 2026
128 points (88.6% liked)

Programming


...and I still don't get it. I paid for a month of Pro to try it out, and it is consistently and confidently producing subtly broken junk. I had tried this before, but gave up because it didn't work well. I thought that maybe this time it would be far enough along to be useful.

The task was relatively simple, and it involved doing some 3d math. The solutions it generated were almost write every time, but critically broken in subtle ways, and any attempt to fix the problems would either introduce new bugs or reintroduce old ones.
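To illustrate the kind of subtle breakage I mean (a hypothetical example, not my actual code): an angle-between-vectors function that looks correct, passes casual testing, and fails only on near-parallel inputs because floating-point error pushes the cosine just past 1.0.

```python
import math

def angle_between_unsafe(u, v):
    """Plausible-looking, but rounding can push dot/norm for two
    parallel vectors slightly above 1.0, and acos then raises a
    domain error (or returns NaN in other languages)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return math.acos(dot / norm)

def angle_between(u, v):
    """Same computation, with the cosine clamped to [-1, 1] first."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return math.acos(max(-1.0, min(1.0, dot / norm)))
```

With u = v = (1, 1, 1) the unclamped version can blow up even though the answer is obviously zero. That is exactly the class of bug an LLM will confidently reintroduce.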

I spent nearly the whole day yesterday going back and forth with it, and felt like I was in a mental fog. It wasn't until I had a full night's sleep and reviewed the chat log this morning that I realized how much I was going in circles. I tried prompting a bit more today, but stopped when it kept doing the same crap.

The worst part of this is that, throughout all of this, Claude was confidently responding. When I said there was a bug, it would "fix" the bug, and provide a confident explanation of what was wrong... Except it was clearly bullshit because it didn't work.

I still want to keep an open mind. Is anyone having success with these tools? Is there a special way to prompt it? Would I get better results during certain hours of the day?

For reference, I used Opus 4.6 Extended.

top 50 comments
[–] TBi@lemmy.world 2 points 7 hours ago

You just didn’t use the right prompts!!!!

/s

[–] rosco385@lemmy.wtf 15 points 16 hours ago

The solutions it generated were almost write every time

Did you vibe code this post? 😂

[–] CCMan1701A@startrek.website 2 points 11 hours ago

I use AI to research what existing software or projects could help me build up the system that I then suffer through making.

[–] zbyte64@awful.systems 9 points 1 day ago* (last edited 1 day ago) (1 children)

In my experience there are three ways to be successful with this tool:

  • write something that already exists so it doesn't need to think
  • do all the thinking for it upfront (hello waterfall development)
  • work in very small iterations that don't require any leaps of logic. Don't reprompt when it gets something wrong; instead reshape the code so it can only get it right

The issue with debugging is that it doesn't actually think. LLMs pattern match to a chain of thought based on signals, not reasoning. For it to debug, you need good signals in your code that explicitly state what the code is doing, and LLMs do not write code with that level of observability by default.
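As a sketch of what I mean by good signals (my own hypothetical example): invariants checked and intermediate values logged, so a failing step names itself instead of surfacing three calls later.

```python
import logging

logging.basicConfig(level=logging.DEBUG, format="%(message)s")
log = logging.getLogger("pipeline")

def normalize(v):
    # Explicit invariant checks give an LLM (or a human) a named
    # failure point instead of a mystery result downstream.
    length = sum(x * x for x in v) ** 0.5
    if length == 0.0:
        raise ValueError(f"normalize: zero-length vector {v!r}")
    result = tuple(x / length for x in v)
    log.debug("normalize(%r) -> %r (length=%.6f)", v, result, length)
    return result
```

When the model rereads the transcript, a line like `normalize((0, 0, 0))` raising a named error is a far stronger signal than a NaN that appears four functions later.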

Edit: one of my workflows that I had success with is as follows:

  • write a gherkin feature file describing desired functionality, maybe have the LLM create multiple scenarios after I defined one to copy from
  • tell the LLM to write tests from those feature files; it does an okay job but needs help making the tests run in parallel.
  • if the feature is simple, ask the LLM to make a plan and review it
  • if the feature is complex, then stub out the implementation in code and add TODOs, then direct the LLM to plan. Giving explicit goals in the code itself reduces token consumption and yields better plans
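The first two steps might look like this (scenario and names invented for illustration): the Gherkin scenario is carried into a plain test, so the LLM fills in behavior against an explicit contract instead of guessing.

```python
# Hypothetical scenario, as it might appear in checkout.feature:
#   Scenario: discount applies above threshold
#     Given a cart totalling 120.00
#     When the 10% volume discount is applied
#     Then the total is 108.00

def apply_volume_discount(total: float, rate: float = 0.10,
                          threshold: float = 100.0) -> float:
    """Stub-level implementation the test pins down."""
    return round(total * (1.0 - rate), 2) if total >= threshold else total

def test_discount_applies_above_threshold():
    # Given a cart totalling 120.00
    total = 120.00
    # When the 10% volume discount is applied
    discounted = apply_volume_discount(total)
    # Then the total is 108.00
    assert discounted == 108.00
```

The Given/When/Then comments keep a one-to-one mapping back to the feature file, which is what makes the LLM's job mostly mechanical.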
[–] spartanatreyu@programming.dev 2 points 21 hours ago (1 children)

write something that already exists so it doesn’t need to think

If something already exists, it shouldn't need to be rewritten.

Doing otherwise is a sign that something has gone wrong.

That was the case before LLMs and it is still the case today.

[–] CCMan1701A@startrek.website 1 points 11 hours ago

What they mean is rewrite something that has a LICENSE my company can't use.

[–] thirstyhyena@lemmy.world 4 points 23 hours ago

I recently started using Pro to debug a problem I couldn't solve. The one thing I need from it is extra insight, a second opinion (because I'm the only developer). Letting it read the whole folder helps: it identified a problem I hadn't considered, because it was in a file outside of where I was looking.

[–] drmoose@lemmy.world 3 points 21 hours ago* (last edited 21 hours ago)

It's a tool that you need to learn. Try some of the CLAUDE.md files people share online for your programming area as a starter. You still need to review what it does, but just asking it to create tests as it creates code does a lot to improve the output.
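As a hypothetical starter (contents invented, shaped like the ones people share), such a file can be as simple as:

```markdown
# CLAUDE.md (project conventions -- example, adapt to your stack)

- Run the test suite after every change; never declare a task done with failing tests.
- Prefer small, reviewable diffs; do not reformat files you were not asked to touch.
- New functions get type hints and a one-line docstring.
- Ask before adding dependencies.
```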

[–] Flames5123@sh.itjust.works 6 points 1 day ago

I have a full pro model for Kiro at work. It does actually work, but we have custom MCP servers for all the internal tools, context on how to use these tools, style guidelines, etc. and then on top of that we have a lot of AI context files in the code base to help the AI understand the code base and make the correct changes.

I’ve been using it on a side project, and it works if you know how to constrain it. It does get things wrong a lot. But the big thing about it is doing spec-driven development, where you give it a write-up and it makes a requirements doc and a design doc with a lot of correctness properties to follow when generating and making the tasks.

I don’t believe people can vibe code unless they can actually code. It’s a whole different way of coding. I still manually edit what it does a lot.

A lot of people explain it like it’s a brand new junior developer. You need to give it as much context as possible, tell it exactly what you want, tell it what you don’t want, tell it why, etc., and it still may not listen exactly.

[–] ozymandias@sh.itjust.works 7 points 1 day ago (4 children)

you need to fully be able to program to work with these things, in my experience.
you have to explain what you want very specifically, in precise programming terms.

i tried a preview of chatgpt codex and it’s working better than my free version of claude, but codex creates a whole virtual programming environment, you have to connect it to a github repository, then it spins up an instance with tools you include and actually tests the code and fixes bugs before sending it back to you.
but you still need to be able to find the bugs and fix them yourself.

oh and i think they work best with python, but i’ve also used ruby and dart and it’s decent.
it’s kinda like a power tool, it’ll definitely help you a lot to fix a car but if you can’t do it with wrenches it won’t help very much.

[–] tristynalxander@mander.xyz 3 points 22 hours ago* (last edited 8 hours ago)

Also working on some 3d maths.

I've used the free versions a bit, but not really to the extent that I'd call it vibe coding. The chat bots often know where to find libraries or pre-existing functions that I don't. They're also okay at algorithms for well-defined problems, but they often warn me not to do something I absolutely need to do, or vice versa. Debugging is very hit and miss: they'll point out obvious stuff (typos) reliably, and they can usually handle some iteration work, but they mostly don't pick up on other things. Once in a rare while one will impress me by suggesting I look at a particular thing, and I think they manage this better in new chats, but most complex issues defeat them. I use it as a faster Stack Overflow, but you need to be able to work through the code yourself, understand what you're doing, and test that individual steps are doing what they need to do. The bots can't really do any sort of planning or breaking down a problem into sub-problems, and they really suck at thinking about 3d stuff.

[–] silver@das-eck.haus 5 points 1 day ago* (last edited 1 day ago)

I think it's pretty heavily dependent on what you're trying to do. I've gotten a lot of push from higher ups at my company to use copilot wherever possible. So, I've spent a lot of time lately having copilot + opus write code for me. Most of what I'm doing is super straightforward middleware APIs or basic internal front ends. Since it has access to very similar codebases for reference, and we have custom agents that point it in the right direction, it's a pretty good experience.

However, if I ask it to do something totally new, it does okay, more like what you've experienced. It takes a lot of hand holding, but it usually gets the job done as long as you're very descriptive in your prompt. Probably not faster than an experienced developer at the moment though

[–] Feyd@programming.dev 103 points 2 days ago (2 children)

producing subtly broken junk

The difference between you and people that say it's amazing is that you are capable of discerning this reality.

[–] OwOarchist@pawb.social 32 points 2 days ago (11 children)

What I don't get, though, is how the vibe code bros can't discern this reality.

How can they sit there and not see that their vibe-coded app just doesn't do what they wanted it to do? Eventually, you've got to try actually running the app, right? And how do you keep drinking the AI kool-aid when you find out that the app doesn't work?

[–] favoredponcho@lemmy.zip 1 points 19 hours ago

You do try running the app, and then you see what's broken and have Claude fix it. The process is still iterative, just like regular coding. I haven't met a software engineer who wrote a perfect app on the first try; it's always broken, even in subtle ways. Why does everyone think vibecoding needs to be perfect on the first shot?

[–] Lumelore@lemmy.blahaj.zone 25 points 2 days ago* (last edited 2 days ago) (1 children)

Vibe code bros aren't real programmers. They're business people, not computer people. Even if they have a CS degree, they only got that because they think it'll get them more money. They lack passion and they don't care about understanding anything. They probably don't even care about what they're generating beyond its potential to be used in a grift.

I graduated college not that long ago and my CS classes had quite a few former business majors. They switched because they think it'll be more lucrative for them but since they only care about money they didn't bother to actually learn the material especially since they could just vibe code through everything.

[–] b_n@sh.itjust.works 8 points 1 day ago (1 children)

So much this.

After working in tech companies for the last 10 years I've noticed the difference between people that "generate code" and those that engineer code.

My worry about the industry is that vibe coding gives the code generators the ability to generate even more code. The engineers (even those that use vibe tools) are not engineering as much code by volume compared to "the generators".

My hope is that this is one of those "short term gain, long term pain" things that might self correct in a couple of years 🤞.

[–] sobchak@programming.dev 1 points 16 hours ago* (last edited 16 hours ago)

It's insane that companies are going back to metrics like LOC (or tokens generated), when the industry figured out decades ago that these are horrible, counterproductive metrics.

"The hard thing about building software is deciding what one wants to say, not saying it. No facilitation of expression can give more than marginal gains." - No Silver Bullet (1986)

[–] sobchak@programming.dev 14 points 1 day ago

Key is having it write tests and having it iterate by itself, and also managing context in various ways. It only works on small projects in my experience. And it generates shit code that's not worth manually working on, so it kind of locks your project into being always dependent on AI. Being always dependent on AI, while AI eventually hits a brick wall, means you'll reach a point where you can't really improve the project anymore. I.e. AI tools are nearly useless.

[–] x00z@lemmy.world 5 points 1 day ago (1 children)

The trick about vibe coding is that you confidently release the messed up code as something amazing by generating a professional looking readme to accompany it.

[–] ZoteTheMighty@lemmy.zip 1 points 1 day ago

That's been my experience. It's always subtly wrong, its solutions are hard to maintain, and if you spend too much time with it, it starts forgetting what you said earlier. Managers don't understand the distinction; they already can't code well, and they only test it on small problems where it's not context-limited, so they're amazed.

[–] favoredponcho@lemmy.zip -4 points 19 hours ago (4 children)

I use it and it works. It doesn't give you the right result in one shot; you iterate and prompt again and again. In the end, it saves a ton of time. Engineers are definitely going to lose their jobs because fewer people are needed. I know it's tough to accept this and people will go through denial. Part of that is saying the AI code is junk. But you'll find it can produce junk and quickly fix it into the right solution faster than an engineer can. It sucks, but this is the new reality. The one thing that is cool once you embrace it is that you realize you can customize your favorite apps or even build anything you want from scratch.

[–] lichtmetzger@discuss.tchncs.de 7 points 16 hours ago* (last edited 16 hours ago) (1 children)

It sucks, but this is the new reality.

Sorry mate, but you drank the AI koolaid from Sam Altman and the other tech oligarchs. The reality is that all of the major AI companies are deep in the red; OpenAI isn't even making a profit on the $200 subscription.

The only reason people are able to burn thousands of tokens to vibecode their apps is that they don't have to pay the price for that, the companies are. This money will run out soon and then we will see the real cost for the bigger models.

If a subscription for Claude Code costs $500 or even $1,000, will companies still pay for it or let actual humans do the work? We will see. I seriously doubt it, and I don't want to depend on a subscription-based service to do my work while my skills are atrophying. Thank god my employer doesn't force me to use AI.

[–] favoredponcho@lemmy.zip -1 points 11 hours ago (1 children)

I haven't drunk the Kool-Aid. I'm talking from my experience using it in my professional software engineering job, where I lead software projects. I've built things that used to take 20 weeks in 1 week with Claude. My employer does not really care about the cost of the tokens. And when they can have one engineer do 20 weeks of work in 1 week, that to them is actually a cost savings. I already ask myself the question ... Should I give this task to another engineer or just vibecode it myself?

OpenAI may not survive because they do have financial issues from overspending, but that barely matters. The company with the strongest coding LLM is Anthropic, and it doesn't sound like they're having financial difficulty. Either way, now that it is clear what is possible, some company will succeed. They have incentives to do it.

Like I said, it will suck for some people, but it's hard to deny the reality at this point.

[–] lichtmetzger@discuss.tchncs.de 3 points 9 hours ago (1 children)

I’ve built things that used to take 20 weeks in 1 week with Claude.

That's ridiculous. You've either been a bad coder even before the AI hype or you're simply lying. I have used these tools and they're not that good, nor do they make you that fast, except when you're just merging all of the proposed code blind and hoping for the best. I fear for the future colleagues who will have to work with the raging dumpster fire you have created for them.

The company with the strongest coding LLM is Anthropic and it doesn’t sound like they’re having financial difficulty

Oh yes, they have the same problems OpenAI has. Just look at the vibecoding subreddits; you can see many people complaining about excessive rate limits and their models getting dumber. A healthy company wouldn't try to cap token usage and introduce peak-hour throttling. That's a big warning sign that they're overspending as well.

its hard to deny the reality at this point

I only see one person here denying reality. You will be effed in a major way when your employer one day decides that the subscriptions are too expensive or tells you to limit your token usage.

[–] favoredponcho@lemmy.zip 1 points 7 hours ago* (last edited 6 hours ago) (1 children)

I know it is a big change and will take some time to come to terms with it. But, it is here. I’m not going to argue anymore. It’s pointless.

[–] lichtmetzger@discuss.tchncs.de 2 points 6 hours ago

Did you just pull a random infographic out of your ass without even mentioning the source? I reverse-searched it and it comes from Anthropic, of all places - the guys that run Claude Code.

Forbes took a look at that study, I love this money quote from it:

These flaws turn Anthropic’s dataset into an overstated labor-market conclusion. The study’s findings do not have the level of reliability required to sustain the breadth of the headline framing, because each conclusion rests on an exposure measure whose scope (1), construction (2, 3, 4, 5, 7), and interpretation (6, 8, 9, 10) remain contested.

So yeah, an AI company telling us that AI will theoretically replace our jobs, based on their own study with flawed data - damn, that's trustworthy! /s

I’m not going to argue anymore. It’s pointless.

At least on this point we agree.

[–] echodot@feddit.uk 3 points 16 hours ago

You still need programmers, because you need people proficient in programming to turn the junk it generates into working code.

[–] baatliwala@lemmy.world 1 points 17 hours ago

I think the last part you said is the best way to use LLMs. I'm not confident in it building complex architectures, but if you want a dedicated single-use script or a very customised basic application for personal use, it will do it well.

[–] speculate7383@lemmy.today 1 points 18 hours ago (1 children)

customize your favorite apps

can you elaborate?

[–] favoredponcho@lemmy.zip 3 points 18 hours ago

GitHub is full of open source apps. Sometimes the maintainer won't add a feature you want. You can just clone the repo, ask Claude to do it, and then run your own version of it.

[–] cecilkorik@lemmy.ca 52 points 2 days ago* (last edited 2 days ago) (6 children)

No, I think you do get it. That's exactly right. Everything you described is absolutely valid.

Maybe the only piece you're missing is that "almost right, but critically broken in subtle ways" turns out to actually be more than good enough for many people and many purposes. You're describing the "success" state.

/s but also not /s because this is the unfortunate reality we live in now. We're all going to eat slop and sooner or later we're going to be forced to like it.

[–] athatet@lemmy.zip 17 points 2 days ago (2 children)

The reason you kept going around in circles and reintroducing bugs you had already gotten rid of is that LLMs don’t remember things. Every time you send a message, the client resends the entire conversation so the model has all the parts. Eventually it runs out of room and starts cutting off the beginning of the conversation, and now the LLM can’t ‘remember’ what you were even talking about.
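A rough sketch of the mechanism (simplified; real clients count tokens rather than characters, and truncation strategies vary): the oldest turns get dropped first when the resent history exceeds the window.

```python
def fit_to_window(messages, budget):
    """Drop whole messages from the start of the history until the
    total length fits the budget, so the earliest context (your
    original bug reports and constraints) disappears first."""
    kept = list(messages)
    while kept and sum(len(m) for m in kept) > budget:
        kept.pop(0)  # oldest turn goes first
    return kept
```

Which is why, late in a long debugging session, the model can cheerfully reintroduce the very bug you opened the conversation with: that part of the transcript is simply gone.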

[–] Blackmist@feddit.uk 3 points 1 day ago

I think it's mostly going to be useful for boilerplate generation, and effectiveness is going to vary wildly based on what language you're using. JS or Python? It'll probably do OK. Plenty of open source for it to "learn" from. Delphi? Forget it.

Brief experimentation showed it liked to bullshit if it was wrong, rather than fix things.

[–] kunaltyagi@programming.dev 7 points 1 day ago

Don't jump right in to coding.

Take a feature you want, and use the plan feature to break it down. Give the plan a read. Make sure you have tests covering the files it says it'll need to touch. If not, add tests (can use LLM for that as well).

Then let the LLM work. Success rates for me are around 80% or higher for medium tasks (30 mins--1 hour for me without an LLM, 15--30 mins with one, including code review).

If a task is 5 mins or so, it's usually hit or miss (since planning would take longer). For tasks longer than 1 hour or so, it depends. Sometimes the code is full of simple idioms and the LLM can easily crush it. Other times I need to actively break it down into digestible chunks.
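To make "tests covering the files it'll need to touch" concrete (function invented for illustration), a characterization test pins current behavior before the LLM gets near it:

```python
def slugify(title: str) -> str:
    """Stand-in for an existing function the LLM is about to modify."""
    return "-".join(title.lower().split())

def test_slugify_characterization():
    # Pin the behavior we rely on today, so an LLM "improvement"
    # that changes it fails loudly instead of silently.
    assert slugify("Hello World") == "hello-world"
    assert slugify("  spaced   out  ") == "spaced-out"
```

The point is not that the test is clever; it is that a regression now shows up as a red test in the LLM's own loop instead of a subtle bug you find a day later.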
