
How stupid do you have to be to believe that only 8% of companies have seen failed AI projects? We can't even manage CRUD apps consistently, and people think that this number isn't laughable? Some companies have seen benefits during the LLM craze, but not 92% of them. 34% of companies report that generative AI specifically has been assisting with strategic decision making? What the actual fuck are you talking about?

....

I don't believe you. No one with a brain believes you, and if your board believes what you just wrote on the survey then they should fire you.

[-] IHeartBadCode@kbin.run 126 points 1 week ago

I had my fun with Copilot before I decided that it was making me stupider - it's impressive, but not actually suitable for anything more than churning out boilerplate.

This. Many of these tools are good at incredibly basic boilerplate that's just a hint beyond what, say, a wizard would generate. But to hear some of these AI grifters talk, this stuff is going to render programmers obsolete.

There's a reality to these tools. That reality is they're helpful at times, but they are hardly transformative at the levels the grifters go on about.

[-] 0x0@programming.dev 44 points 1 week ago

I use them like wikipedia: it's a good starting point and that's it (and this comparison is a disservice to wikipedia).

[-] SandbagTiara2816@lemmy.dbzer0.com 11 points 1 week ago

Yep! It’s a good way to get over the fear of a blank page, but I don’t trust it for more than outlines or summaries

[-] deweydecibel@lemmy.world 4 points 1 week ago

I wouldn't even trust it for summaries beyond extremely basic stuff.

[-] ripcord@lemmy.world 4 points 1 week ago

Man, I need to build some new shit.

I can't remember the last time I looked at a blank page.

[-] mPony@lemmy.world 2 points 1 week ago

Blank pages are for the young

[-] grrgyle@slrpnk.net 7 points 1 week ago

I agree with your parenthetical, but Wikipedia actually agrees with your main point: Wikipedia itself is not a source of truth.

[-] sugar_in_your_tea@sh.itjust.works 43 points 1 week ago

I interviewed a candidate for a senior role, and they asked if they could use AI tools. I told them to use whatever they normally would, I only care that they get a working answer and that they can explain the code to me.

The problem was fairly basic, something like randomly generate two points and find the distance between them, and we had given them the details (e.g. distance is a straight line). They used AI, which went well until it generated the Manhattan distance instead of the Pythagorean theorem. They didn't correct it, so we pointed it out and gave them the equation (totally fine, most people forget it under pressure). Anyway, they refactored the code and used AI again to make the same mistake, didn't catch it, and we ended up pointing it out again.

Anyway, at the end of the challenge, we asked them how confident they felt about the code and what they'd need to do to feel more confident (nudge toward unit testing). They said their code was 100% correct and they'd be ready to ship it.

They didn't pass the interview.

And that's my opinion about AI in general: it's probably making you stupider.
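For what it's worth, the bug is trivial to demonstrate. A minimal sketch (Python; the points are made up, this is not the candidate's actual code):

```python
import math

def euclidean(p, q):
    # Straight-line (Pythagorean) distance, what the task asked for
    return math.sqrt((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

def manhattan(p, q):
    # Grid distance, what the AI generated instead
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

a, b = (0, 0), (3, 4)
print(euclidean(a, b))  # 5.0
print(manhattan(a, b))  # 7
```

Same inputs, different answers, and the code "looks right" either way, which is exactly why it slipped past the candidate twice.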

[-] deweydecibel@lemmy.world 28 points 1 week ago* (last edited 1 week ago)

I've seen people defend using AI this way by comparing it to using a calculator in a math class, i.e. if the technology knows it, I don't need to.

And I feel like, for the kind of people whose grasp of technology, knowledge, and education are so juvenile that they would believe such a thing, AI isn't making them dumber. They were already dumb. What the AI does is make code they don't understand more accessible, which is to say, it's just enabling dumb people to be more dangerous while instilling them with an unearned confidence that only compounds the danger.

[-] AdamBomb@lemmy.sdf.org 10 points 1 week ago

Spot on description

Yup. And I'm unwilling to be the QC in a coding assembly line, I want competent peers who catch things before I do.

But my point isn't that AI actively makes individuals dumber, it's that it makes people in general dumber. I believe that to be true of a lot of technology. In the 80s, people were familiar with command-line interfaces, and jumping into some coding wasn't a huge leap; today, people can't figure out how to do a thing unless there's an app for it. AI is just the next step along that path: soon, even traditionally competent industries will be little more than QC, and nobody will remember how the sausage is made.

If they can demonstrate that they know how the sausage is made and how to inspect a sausage of packages, I'm fine with it. But if they struggle to even open the sausage package, we're going to have problems.

Yeah, I honestly don't have any real issue with using it to accelerate your workflow. I think it's hit or miss how much it does, but it's probably slightly stepped up from code completion without "AI".

But if you don't understand every line of code "you" write completely, you're being grossly negligent and begging for a shitshow.

[-] IHeartBadCode@kbin.run 10 points 1 week ago

Similar story, I had a junior dev put in a PR for SQL that gets lat and long and gives back distance. The request was using the Haversine formula but was using the km coefficient, rather than the one for miles.

I asked where they got it and they indicated AI. I sighed, pointed out why it was wrong, and noted that we have PostGIS, which literally has scalar functions available that will do the calculation way faster, and that they should use those.

There's a clear over-reliance on code generation. That said, it's pretty good for things that I can eye scan and verify are what I would have typed anyway. But I've found it suggesting everything from things I wouldn't remotely permit to things that are "sort of" correct. I'll let it pop in the latter case and go back and clean it up. But yeah, anyone blind-trusting AI shouldn't be allowed to make final commits.
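For reference, the coefficient bug is easy to see in a plain implementation. A rough sketch (Python; the radius constants are standard figures, and the function name is mine, not the PR's):

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    # Earth's mean radius is ~6371 km or ~3959 miles. The PR in
    # question used the km value while the rest of the code expected
    # miles, inflating every distance by roughly 1.6x.
    r = 3958.8  # miles; swapping in 6371.0 reproduces the bug
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))
```

In PostGIS the whole thing collapses to a scalar call (something like `ST_DistanceSphere`, which returns meters), done in the database, which was the actual point of the review comment.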

I just don't bother, under the assumption that I'll spend more time correcting the mistakes than actually writing the code myself. Maybe that's faulty, as I haven't tried it myself (mostly because it's hard to turn on in my editor, vim).

[-] IHeartBadCode@kbin.run 6 points 1 week ago

Maybe that's faulty, as I haven't tried it myself

Nah, perfectly fine take. To each their own, I say. Where the tech is right now, not bothering with it is completely fine; you aren't missing all that much, really. At the end of the day it might have saved me ten to fifteen minutes here and there. Nothing that's a tectonic shift in productivity.

Yeah, most of my dev time is spent reading, and I'm a pretty fast typist, so I never bothered.

Maybe I'll try it eventually. But my boss isn't a fan anyway, so I'm in no hurry.

[-] SkyeStarfall@lemmy.blahaj.zone 1 points 1 week ago

It can be useful in explaining concepts you're unsure about, in regards to the reading part, but you should always verify that information.

But it has helped me understand certain concepts in the past, where I struggled with finding good explanations using a search engine.

[-] sugar_in_your_tea@sh.itjust.works 1 points 1 week ago* (last edited 1 week ago)

Ah, ok. I'm pretty good with concepts (been a dev for 15-ish years), I'm usually searching for specific API usage or syntax, and the official docs are more reliable anyway. So the biggest win would probably be codegen, but that's also a relatively small part of my job, which is mostly code reviews and planning.

[-] manicdave@feddit.uk 5 points 1 week ago

it's pretty good for things that I can eye scan and verify are what I would have typed anyway. But I've found it suggesting everything from things I wouldn't remotely permit to things that are "sort of" correct.

Yeah. I haven't bothered with it much but the best use I can see of it is just rubber ducking.

Last time I used it was to ask how to change contrast in a numpy image. It said to multiply each channel by the contrast. I don't even think that's right: it should be ((original_value - 128) * contrast) + 128, not original_value * contrast as it suggested. But it did remind me that I can just run operations on colour channels.

Wait what's my point again? Oh yeah, don't trust anyone that can't tell you what the output is supposed to do.
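For anyone following along, the difference between the two formulas shows up immediately on a toy array (numpy; the values are my own example, not what the LLM produced):

```python
import numpy as np

img = np.array([[100, 128, 200]], dtype=np.float64)
contrast = 1.5

# What the LLM suggested: scales brightness along with contrast,
# so mid-gray (128) drifts to 192
naive = img * contrast

# Pivoting around mid-gray so midtones stay put, clamped to 8-bit range
adjusted = np.clip((img - 128) * contrast + 128, 0, 255)

print(naive)     # [[150. 192. 300.]]
print(adjusted)  # [[ 86. 128. 236.]]
```

Note the naive version also blows past 255 on bright pixels, which would wrap around or clip depending on the dtype.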

[-] Excrubulent@slrpnk.net 7 points 1 week ago* (last edited 1 week ago)

Wait wait wait so... this person forgot the pythagorean theorem?

Like that is the most basic task. It's d = sqrt((x1 - x2)^2 + (y1 - y2)^2), right?

That was off the top of my head, this person didn't understand that? Do I get a job now?

I have seen a lot of programmers talk about how much time it saves them. It's entirely possible it makes them very fast at making garbage code. One thing I've known for a long time is that understanding code is much harder than writing it, and so asking an LLM to generate your code sounds like it's just creating harder work for you, unless you don't care about getting it right.

[-] sugar_in_your_tea@sh.itjust.works 10 points 1 week ago

Yup, you're hired as whatever position you want. :)

Our instructions were basically:

  1. randomly place N coordinates on a 2D grid, and a random target point
  2. report the closest of those N coordinates to the target point

It was technically different (we phrased it as a top-down game, but same gist). The AI generated Manhattan distance (abs(x2 - x1) + abs(y2 - y1)), probably due to other clues in the text, but the instructions were clear. The candidate didn't notice what it was doing, we pointed it out, then they asked for the algorithm, which we provided.

Our better candidates remember the equation like you did. But we don't require it, since not all applicants finished college (this one did). We're more concerned about code structure, asking proper questions, and software design process, but math knowledge is cool too (we do a bit of that).
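A minimal version of the task as described (Python; the names, grid size, and point count are my guesses, not the actual interview spec):

```python
import math
import random

def closest_point(n=5, size=100, seed=None):
    """Place n random points and a random target on a 2D grid,
    then report the point closest to the target by straight-line
    distance (the part the AI swapped for Manhattan)."""
    rng = random.Random(seed)
    points = [(rng.randint(0, size), rng.randint(0, size)) for _ in range(n)]
    target = (rng.randint(0, size), rng.randint(0, size))
    # math.dist is Euclidean distance (Python 3.8+)
    best = min(points, key=lambda p: math.dist(p, target))
    return points, target, best
```

A dozen lines, which is the point: the exercise is about noticing when generated code quietly does the wrong dozen lines.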

[-] frezik@midwest.social 6 points 1 week ago

College? Pythagorean Theorem is mid-level high school math.

I did once talk to a high school math teacher about a graphics program I was hacking away on at the time, and she was surprised that I actually use the stuff she teaches. Which is to say that I wouldn't expect most programmers to know it exactly off the top of their head, but I would expect they've been exposed to it and can look it up if needed. I happen to have it pretty well ingrained in my brain.

Yes, you learn it in the context of finding the hypotenuse of a triangle, but:

  • a lot of people are "bad" at math (more accurately, unconfident), but good with logic
  • geometry, trig, etc require a lot of memorization, so it's easy to forget things
  • interviews are stressful, and good applicants will space on basic things

So when I'm interviewing, I try to provide things like algorithms that they probably know but are likely to space on, and focus on the part I care about: can they reason their way through a problem and produce working code, and then turn around and review their code. Programming is mostly googling stuff (APIs, algorithms, etc), I want to know if they can google the right stuff.

And yeah, we let applicants look stuff up, we just short circuit the less important stuff so they have time to show us the important parts. We dedicate 20-30 min to coding (up to an hour if they rocked at questions and are struggling on code), and we expect a working solution and for them to ask questions about vague requirements. It's a software engineering test, not a math test.

[-] Excrubulent@slrpnk.net 2 points 1 week ago

Yeah, that's absolutely fair, and it's a bit snobby of me to get all up in arms about forgetting a formula - although it is high school level where I live. But to be handed the formula, informed that there's an issue and still not fix it is the really hard part to wrap my head around, given it's such a basic formula.

I guess I'm also remembering someone I knew who got a programming job off the back of someone else's portfolio, who absolutely couldn't program to save their life and revealed that to me in a glaring way when I was trying to help them out. It just makes me think of that study that was done that suggested that there might be a "programmer brain" that you either have or you don't. They ended up costing that company a lot to my knowledge.

[-] xavier666@lemm.ee 4 points 1 week ago

I don't want to believe that coders like these exist and are this confident in an AI's ability to code.

My co-worker told me another story.

His friend was in a programming class and made it nearly to the end before asking him for help. Basically, the friend had already written the solution, but it wasn't working and he needed help debugging it. My co-worker looked at the code, and it looked AI generated, because there were obvious mistakes throughout. So he asked his friend to walk him through the code, and that's when the friend admitted to having AI generate the whole thing. My co-worker refused to help.

They do exist, but this candidate wasn't that. I think they were just under pressure and didn't know the issue. The red flag for me wasn't AI or not catching the AI issues, it was that when I asked how confident they were about the code (after us catching the same bug twice), they said 100% and they didn't need any extra assurance (I would've wanted to write tests).

[-] Zikeji@programming.dev 30 points 1 week ago

Copilot / LLM code completion feels like having a somewhat intelligent helper who can think faster than I can, but who has no understanding of how to actually code; they're just good at mimicry.

So it's helpful for saving time typing some stuff, and sometimes the absolutely weird suggestions make me think of other scenarios I should consider, but it's not going to do the job itself.

[-] deweydecibel@lemmy.world 16 points 1 week ago* (last edited 1 week ago)

So it's helpful for saving time typing some stuff

Legitimately, this is the only use I've found for it. If I need something extremely simple and I'm feeling too lazy to type it all out, it'll do the bulk of it, and then I just go through and edit out all the little mistakes.

And what gets me is that anytime I read all of the AI wank about how people are using these things, it kind of just feels like they're leaving out the part where they have to edit the output too.

At the end of the day, we've had this technology for a while, it's just been in the form of predictive suggestions on a keyboard app or code editor. You still had to steer in the right direction. Now it's just smart enough to make it from start to finish without going off a cliff, but you still have to go back and fix it, the same way you had to steer it before.

[-] afraid_of_zombies@lemmy.world 4 points 1 week ago

but are good at mimicry.

I know engineers who make over double what I make solely because of that skill.

[-] grrgyle@slrpnk.net 8 points 1 week ago

I think we all had that first moment where copilot generates a good snippet, and we were blown away. But having used it for a while now, I find most of what it suggests feels like jokes.

Like it does save some typing / time spent checking docs, but you have to be very careful to check its work.

I've definitely seen a lot more impressively voluminous, yet flawed pull requests, since my employer started pushing for everyone to use it.

I foresee a real reckoning of unmaintainable codebases in a couple years.

[-] Shadywack@lemmy.world 5 points 1 week ago

Looks like two people suckered by the grifters downvoted your comment (as of this writing). Should they read this, it is a grift, get over it.

this post was submitted on 20 Jun 2024
471 points (89.7% liked)

Technology
