this post was submitted on 30 Dec 2025
89 points (94.1% liked)
Programming
This is extremely valid.
The biggest reason I'm able to use LLMs efficiently and safely is all of my prior experience. I can write up the project guard rails and the expected architecture, call out gotchas, etc. These are the things that actually keep the output in spec (usually).
If a junior hasn't already built up that knowledge and experience manually, much of the code they produce with AI is gonna be crap, with varying degrees of deviation from the intended design.
How I guide the juniors under me is to have it generate singular methods to accomplish specific tasks, but not entire classes/files.
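As a hypothetical sketch of that granularity (the class, names, and signature here are mine, not from the thread): the junior writes the class and the method stub, and the LLM is only asked to fill in one narrowly specified body, which is then easy to review.

```python
# Hypothetical example of the allowed granularity: the human writes the
# class and the exact signature; the LLM fills in a single method body.
from datetime import date, timedelta

class Invoice:
    def __init__(self, issued: date, net_days: int):
        self.issued = issued      # date the invoice was issued
        self.net_days = net_days  # payment window in days

    # Single, narrowly specified task handed to the LLM:
    def is_overdue(self, today: date) -> bool:
        """Return True when `today` is past the payment window."""
        return today > self.issued + timedelta(days=self.net_days)
```

A one-method ask like this is small enough that the reviewer can check every line against the spec, which is the point of the exercise.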
You know it's crazy, someone just told me that's more dangerous than having them do nothing.
They use it with heavy oversight from the senior devs. We discourage its use and teach them the very basic errors it always produces as a warning not to trust it.
E.g., ChatGPT will reliably dump all of a form's event handlers into one massive method.
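To illustrate the shape of that anti-pattern (a minimal hypothetical sketch, not literal ChatGPT output), compare one dispatcher method that branches on every event against small per-event handlers:

```python
# Hypothetical form sketch contrasting the "one massive method" pattern
# with per-event handler methods.

class Form:
    def __init__(self):
        self.log = []

    # Anti-pattern: a single method handling every event via branching;
    # every new event makes this method longer and harder to review.
    def handle_event(self, event, value=None):
        if event == "submit":
            self.log.append("submitted")
        elif event == "cancel":
            self.log.append("cancelled")
        elif event == "text_changed":
            self.log.append(f"text={value}")
        # ...and so on for every other event

    # Preferred: one small method per event, each doing one specific task.
    def on_submit(self):
        self.log.append("submitted")

    def on_cancel(self):
        self.log.append("cancelled")

    def on_text_changed(self, value):
        self.log.append(f"text={value}")
```

Pointing juniors at a concrete smell like this gives them something checkable to look for in generated code, rather than a vague "don't trust it."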
We use it within the scope of things we already know about.