My work is using CodeRabbit, and I've found its feedback pretty helpful, especially since I'm working in a language I don't have a lot of experience with (Python). I check what it tells me, but it has taught me some new things. I still want human reviews as well, but the AI picks up on details that are easy to skim over.
It doesn't cover bigger-picture stuff like maintainability, architecture, or test coverage. Recently I reviewed a PR that was likely AI-generated; I saw a number of cases where logic duplication invited future bugs (stuff like duplicating access predicates across CRUD handlers for the same resource, or repeating the same validation logic in multiple places). There were also magic strings instead of enums, and tests of dubious value. CodeRabbit did not comment on any of those issues.
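To illustrate the kind of duplication I mean, here's a minimal Python sketch; the names (`user_can_modify`, the `User`/`Document` models, `Status`) are made up for the example, not taken from the actual PR. The access predicate is defined once and reused by every handler, and the enum replaces the magic status strings:

```python
from dataclasses import dataclass
from enum import Enum


class Status(Enum):
    """Replaces magic strings like "active" / "archived" scattered across handlers."""
    ACTIVE = "active"
    ARCHIVED = "archived"


@dataclass
class User:
    id: int
    is_admin: bool = False


@dataclass
class Document:
    owner_id: int
    status: Status = Status.ACTIVE


def user_can_modify(user: User, doc: Document) -> bool:
    """The access predicate, defined once.

    In the PR I reviewed, an equivalent check was copy-pasted into the
    create, update, and delete handlers, so a future rule change would
    have to be made in three places.
    """
    return user.is_admin or doc.owner_id == user.id


def archive_document(user: User, doc: Document) -> None:
    if not user_can_modify(user, doc):
        raise PermissionError("not allowed to modify this document")
    doc.status = Status.ARCHIVED


def delete_document(user: User, doc: Document) -> None:
    # Same predicate, not a re-implementation of the check.
    if not user_can_modify(user, doc):
        raise PermissionError("not allowed to modify this document")
    ...
```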
I'm also getting feedback from SonarQube (a static-analysis tool) on the same project. It's been much less helpful: it has less to say, and a lot of what it does flag is security issues in test code.