As someone who has interviewed candidates for developer jobs for over a decade: this sounds like “in my day everything was better”.
Yes, there are plenty of candidates who can’t explain the piece of code they copied from Copilot. But guess what? A few years ago there were plenty of candidates who couldn’t explain the code they copied from StackOverflow. And before that, there were those who failed at the basic programming test we gave them.
We don’t hire those people. We hire the ones who use the tools at their disposal and also show they understand what they’re doing. The tools change, the requirements do not.
I think LLMs have just made it easier for people who want answers without learning. Piecing together all those posts from around the internet required you to understand what you pasted if you wanted it to work (not always, but the bar was higher). With ChatGPT, you can just throw errors at it until you have the code you want.
While the requirements never changed, the tools sure did, and they made it a lot easier to not understand.
Have you actually found that to be the case in anything complex though? I find it just forgets parts of what it's generating, stuck in an infuriating loop of fucking up.
It took us around two hours to run our coding questions through ChatGPT and see what it gives. It gives complete shit for most of them; we only had to replace one or two questions.
If a company cannot invest even a day to go through their hiring process and AI proof it, then they have a shitty hiring process. And with a shitty hiring process, you get shitty devs.
And then you get people like OP, blaming the generation when, if anything, it's them and their company to blame... for falling behind. Got to keep up, folks. Our field moves fast.
My rule of thumb: use ChatGPT for questions whose answer I already know.
Otherwise it hallucinates and tries hard to convince me of a wrong answer.
I find ChatGPT is sometimes excellent at giving me a direction, if not outright solving the problem, when I paste errors I'm too lazy to search for. I say sometimes because other times it is just dead wrong.
All code I ask ChatGPT to write is usually along the lines of "I have these values that I need to verify, write code that verifies that nothing is empty and saves an error message for each that is", and then I work with the code it gives me from there. I never take it at face value.
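For context, a minimal sketch of the kind of check being described (the field names and error-message wording are hypothetical, not the commenter's actual code):

```python
# Hypothetical example of "verify nothing is empty, save an error message for each value that is".

def validate_not_empty(values: dict[str, str]) -> list[str]:
    """Return an error message for every value that is empty or blank."""
    errors = []
    for name, value in values.items():
        if not value or not value.strip():
            errors.append(f"{name} must not be empty")
    return errors

# Example usage with made-up form fields
form = {"username": "alice", "email": "", "password": "   "}
for message in validate_not_empty(form):
    print(message)  # e.g. "email must not be empty", "password must not be empty"
```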
I think using LLMs to create complex code is the wrong use of the tool. In my opinion, they are better at providing structure to work from than at writing the code itself (unless it is something as simple as the above).
I agree with you on that.