I Started Programming When I Was 7. I'm 50 Now and the Thing I Loved Has Changed
(www.jamesdrandall.com)
I wonder what the Venn diagram of people who started coding as kids and people who enjoy vibe coding looks like. Informally, the degens on my squad who started on their parents' computer loathe AI, and the people who stumbled into it in college are all about the vibe code.
I'm one of those that fits in both categories. I've been blown away by what these AI agents are capable of. I've "written" a bunch of scripts that involve parsing and generating code for another tool to consume, and the AI has been able to take over the tedious parts, like writing a function to parse the parameters out of some code, then following the code those parameters flow into, extracting the relationships between them, and recreating them another way. It's something I could write the code for, but that code would be mostly undocumented, would contain "quick versions I'll come back to later and fix up" (but I never get to it, because if it works there are more productive things to do), plus some debug code I'm not sure I'll need again, so it just sits there so I can uncomment it instead of writing it again. Not to mention all the typos and sloppy errors along the way that may or may not be easy to find later during compiling and testing.
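To give a rough flavour of the tedious glue I mean, here's a toy sketch (invented names and source, nothing like the real tool): pull parameter names out of a function definition with Python's `ast` module and pair them with the arguments at a call site.

```python
# Toy example of the boring parsing work I'd rather hand off:
# collect parameter names from function definitions, then map the
# arguments at each call site back onto those parameter names.
import ast

SOURCE = """
def build_query(table, columns, limit):
    return run(table, columns, limit)

build_query("users", ["id", "name"], 10)
"""

tree = ast.parse(SOURCE)

# function name -> list of its parameter names,
# e.g. {"build_query": ["table", "columns", "limit"]}
params = {
    node.name: [a.arg for a in node.args.args]
    for node in ast.walk(tree)
    if isinstance(node, ast.FunctionDef)
}

# for each call to a known function, pair positional args with parameters
for node in ast.walk(tree):
    if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
        names = params.get(node.func.id)
        if names:
            pairing = dict(zip(names, (ast.dump(a) for a in node.args)))
            print(node.func.id, pairing)
```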
I consider myself a competent coder. AI makes me better, more focused and less sloppy. But, that said, my prompts reflect that. I understand that these models aren't really programmers, just correlation engines trained on a ton of programming material. One can tell you the traveling salesman problem is NP-hard but won't necessarily realize that the problem you've asked it to solve is equivalent to the traveling salesman problem. It will happily spit out a function identical to one it produced before, just with names specific to whatever it's currently doing, rather than calling the existing function. It will pick the least efficient way to do some things. It's not a problem solver, it's a solution predictor, which sounds better but isn't.
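The duplication thing looks like this in practice (made-up names, purely illustrative):

```python
# The kind of near-duplicate an LLM will happily emit instead of reusing a helper.

def normalize_user_email(email: str) -> str:
    return email.strip().lower()

def normalize_customer_email(email: str) -> str:  # identical logic, different name
    return email.strip().lower()

# One shared helper, called from both places, would have done the job:
def normalize_email(email: str) -> str:
    return email.strip().lower()
```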
So I consider them more like force multipliers than adders. If you have the skills, I believe you could use an LLM to make anything (as a development cycle, not "spits out a perfect implementation first try"), but if you don't have the skills, you'll struggle a lot even on fairly basic shit, simply because you don't know how to direct the LLM properly.
But I still watch it produce code with a mixture of awe and fear. I don't think the above will be true forever. Maybe not even for the rest of the 20s.
Probably two distinct circles that don't touch if I base it on people I know and myself.