Unless you’ve only ever used ChatGPT, you know that LLM-produced code is not the result of a single prompt, nor even a single conversation, but of a workflow that often goes like this:
- Discuss a problem with the LLM. During the discussion, the LLM autonomously reads large parts of the repository you’re working in.
- Ask it to write a plan. Edit the plan. Ask it about the edited plan. Edit it some more.
- Repeatedly restart the LLM, asking it to code different parts of the plan. Debug the results. Write some code yourself. Create, rebase, or otherwise play around with the repository; keep multiple branches of potential code.
- Go back and edit the original plan, now that you know what might work. Sometimes port unit tests back in time to earlier branches.
- Repeat until done.
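The "keep multiple branches of potential code" step above can be sketched with plain git. This is a minimal illustration, not a prescribed process; the branch names (`attempt-a`, `attempt-b`) and commit messages are made up, and the script works in a throwaway temp directory so it can be run safely:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q -b main
git config user.email "you@example.com"   # placeholder identity for the demo
git config user.name "You"
git commit -q --allow-empty -m "plan v1"

# Keep each LLM attempt on its own branch while iterating on the plan
git checkout -q -b attempt-a
git commit -q --allow-empty -m "LLM attempt A"
git checkout -q main
git checkout -q -b attempt-b
git commit -q --allow-empty -m "LLM attempt B"

# Later: keep the winner, drop the rest, fold what you learned into the plan
git checkout -q main
git merge -q attempt-b
git branch -q -D attempt-a
git log --oneline
```

The point of the branch-per-attempt shape is that restarting the LLM on a fresh branch is cheap, while deciding which attempt to keep can wait until you have seen several.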