this post was submitted on 11 Jul 2025
340 points (100.0% liked)
TechTakes
you are viewing a single comment's thread
You have to know what an AI can and can't do to use it effectively.
Finding bugs is one of the worst things to "vibe code": LLMs can't actually debug programs (at least as far as I know), and if the repository is bigger than the context window they can't even get an overview of the whole project. All an LLM can do is run the program and guess at the error from the error messages and user input. They can't even control most programs.
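To put a rough number on the context-window point: a back-of-envelope sketch, assuming the common ~4-characters-per-token heuristic and an illustrative 128k-token window (neither figure is specific to any particular model):

```python
# Rough check: does a repository's source text even fit in an LLM context
# window? Both constants below are assumptions for illustration only.

CHARS_PER_TOKEN = 4  # crude average for English text and source code


def estimate_tokens(num_chars: int) -> int:
    """Approximate token count from a character count."""
    return num_chars // CHARS_PER_TOKEN


def fits_in_context(num_chars: int, window_tokens: int = 128_000) -> bool:
    """True if the text would (roughly) fit in the assumed context window."""
    return estimate_tokens(num_chars) <= window_tokens


# A modest 2 MB of source code already blows well past a 128k-token window:
repo_chars = 2_000_000
print(estimate_tokens(repo_chars))   # ~500,000 tokens
print(fits_in_context(repo_chars))   # False
```

So even a mid-sized codebase is several times larger than the window, which is why the model ends up guessing from error messages rather than reading the whole project.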
I'm not surprised by the results, but it's hardly a fair assessment of the usefulness of AI.
Also, I'd rather wait for the LLM and see if it can fix the bug than hunt for bugs myself; hell, I could work on other problems while waiting for the LLM to finish. If it succeeds, great; if not, I can do it myself.
"This study that I didn't read that has a real methodology for evaluating LLM usefulness instead of just trusting what AI bros say about LLM usefulness is wrong, they should just trust us, bros", that's you