this post was submitted on 04 Jul 2025

Programming


[–] TehPers@beehaw.org 20 points 3 days ago (4 children)

The main value I found in Copilot in vscode, back when it first released, was its ability to recognize and continue patterns in code (like in assets, or where a type has a bunch of similar but slightly different fields that are all initialized in mostly the same way).
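For illustration, here's a made-up sketch (in Rust, just as an example language; the struct and field names are hypothetical) of the kind of repetitive, mostly-identical initialization where that pattern continuation shines:

```rust
// Hypothetical example: a settings struct where every field follows the same
// shape, differing only in name and default value. After the first line or two
// of the Default impl, Copilot-style completion tends to suggest the rest of
// the pattern.
struct EditorSettings {
    tab_width: u32,
    font_size: u32,
    line_height: u32,
    scroll_margin: u32,
}

impl Default for EditorSettings {
    fn default() -> Self {
        Self {
            tab_width: 4,
            font_size: 14,   // after typing the first field or two...
            line_height: 20, // ...the remaining lines usually get suggested
            scroll_margin: 8, // as continuations of the same pattern
        }
    }
}

fn main() {
    let settings = EditorSettings::default();
    println!("tab width: {}", settings.tab_width);
}
```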

I don't use it anymore though because I found the suggestions to be annoying and distracting most of the time and got tired of hitting escape. It also got in the way of standard intellisense when all I needed was to fill in a method name. And it took my focus away from thinking about the code, because it would generate plausible-looking lines and my thinking would get pulled in that direction.

With "agents" (whatever that term means these days), the article describes my feelings exactly. I spend the same amount of time verifying a solution as I would just creating the solution myself. The difference is I fully understand my own code, but I can't reach that same understanding of generated code as fast because I didn't think about writing it or how that code will solve my problem.

Also, asking an LLM about the generated code is about as reliable as you'd expect on average, and I need it to be 100% reliable (or extremely close) if I'm going to use it to explain anything to me at all.

Where I found these "agents" most useful is expanding on documentation (markdown files and such). Create a first draft and ask it to clean it up. It still takes effort to verify that it didn't start BSing something, but as long as what it generates is small and it's just editing an existing file, it's usually not too bad.

[–] towerful@programming.dev 1 points 3 days ago

I don't use it anymore though because I found the suggestions to be annoying and distracting most of the time and got tired of hitting escape

Same. It took longer for me to parse and validate the suggestion than it did to just type what I wanted.

I do like the helper for more complex refactors.
Where you have a bunch of similar, but not exactly the same, changes to make.
Where a search & replace refactor isn't enough.
It manages to figure out what you are doing, highlights the next instance of it and suggests the replacement.
I don't think I've seen it make a mistake doing that, and it is a useful speedup.
I guess the LLM already has all the context: the needle, the haystack and the term.
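
As a rough illustration (again a made-up Rust sketch with hypothetical names, not anyone's real code), this is the kind of similar-but-not-identical change meant here: each call site needs the same shape of edit but with a different detail, so a plain search & replace can't do it, while the suggestion engine picks up the pattern:

```rust
// Hypothetical before/after: migrating several call sites from old free
// functions to a method that takes an enum. Each line needs the same kind of
// edit, but with a different variant, which is exactly the repetitive-but-
// varying change that tends to get suggested correctly.

#[derive(Debug)]
enum Level { Info, Warn, Error }

struct Logger;

impl Logger {
    fn log(&self, level: Level, msg: &str) {
        println!("[{:?}] {}", level, msg);
    }
}

fn main() {
    let logger = Logger;

    // Before (old API, shown as comments):
    // log_info("starting up");
    // log_warn("cache miss");
    // log_error("connection lost");

    // After: same transformation at every site, but each with its own variant.
    logger.log(Level::Info, "starting up");
    logger.log(Level::Warn, "cache miss");
    logger.log(Level::Error, "connection lost");
}
```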
