[–] shirro@aussie.zone 29 points 1 day ago* (last edited 1 day ago) (1 children)

I love tech. My brain loves soaking up new things. I'm currently writing my first ever game engine, in C, in my 50s, with my kid, based on books and include files. Better late than never.

The technology was never the problem. It's the money people. Always was. The Marxists got that bit right. Some of the tech bros are from a tech background, but their culture and motivations aren't like mine.

The money person these days follows the drug pusher/pimp model. They want to control you and have you on a hook. Everything has gaming-machine mechanisms built in to keep you coming back. You can't walk away. They have all your data, all your connections. You are helpless. A victim, but you walked right into it. Final victory for them is to lobotomise all your higher-order thinking skills. You're just a body to lie there and be fucked.

[–] FosterMolasses@leminal.space 1 points 11 hours ago* (last edited 11 hours ago) (1 children)

Sounds fun! Any plans to release something on itch/steam?

[–] Danquebec@sh.itjust.works 2 points 8 hours ago

Final victory for them is to lobotomise all your higher-order thinking skills. You're just a body to lie there and be fucked.

Sounds fun! Any plans to release something on itch/steam?

[–] arcine@jlai.lu 29 points 1 day ago (15 children)

The "correct" way to use AI for coding (and anything really) is to ask for explanations / tutorials when you can't find one online, then learn from that.

Never let it do something for you. That's how you lose. If you're not actively learning, you're actively rotting, and that goes for life in general too.

[–] white_nrdy@programming.dev 2 points 10 hours ago (1 children)

I started using LLM tools recently after taking a new job where a lot of people do it. I've discovered that it's actually fairly helpful not only for explanations, but in two other respects:

  • Sifting through immense amounts of documentation. I have to deal with datasheets that run hundreds of pages, with info scattered throughout; it's very helpful for sifting through those (see the sketch after this list).
  • Doing boilerplate "plumbing work" in my code. I draw the line at letting it do the "core" work where I'm the expert, since I agree that if I stop doing that, I'll atrophy. However, it can accelerate my process if I pass off some of the minutiae I don't feel the need to do myself.
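A toy sketch of the datasheet-sifting idea, assuming the official `anthropic` Python SDK (the model name and prompt are illustrative; a real setup would more likely use embeddings/retrieval rather than this brute-force chunking):

```python
from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def find_in_datasheet(text: str, question: str, chunk_chars: int = 100_000):
    """Ask the model the same question about each chunk of a huge text dump."""
    for i in range(0, len(text), chunk_chars):
        chunk = text[i : i + chunk_chars]
        resp = client.messages.create(
            model="claude-sonnet-4-5",  # illustrative; pin whatever is current
            max_tokens=1024,
            messages=[{
                "role": "user",
                "content": f"Datasheet excerpt:\n{chunk}\n\n"
                           f"Question: {question}\n"
                           "Answer only from this excerpt; reply NOT FOUND otherwise.",
            }],
        )
        yield resp.content[0].text
```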

All that said, I'm honestly pretty impressed by how well it works. I've mostly been using Claude, and damn, it's honestly pretty competent. I had it make me a helper Python GUI program to test some stuff (I'm not a UI/high-level engineer like that, I'm an FPGA engineer), and it did a decent job. It definitely needed a good amount of massaging and guidance. But I can definitely see the appeal, and it's a slippery slope; I need to make sure I remain disciplined about not letting it do everything.

[–] MangoCats@feddit.it 0 points 10 hours ago (1 children)

One trap is trusting it as a means to accommodate unreasonable schedule pressure.

Sure, this thing looks like it works; hell, it probably does work. But do you really want to launch a "probably works" product? If your management does, consider shopping around for a raise/promotion under different management. It's never easy to move, but if you're moving on your own terms you can often make the effort worth your while.

Another note: I find the LLMs to be wickedly detail-oriented code reviewers. They'll point out the tiniest little discrepancies and edge cases, and what they (Claude, at least) report is usually "real." That doesn't mean they find everything that's wrong on the first pass, but once you've addressed everything from the first pass you can make a second pass, then a third, etc., each time with a different focus: documentation complete? implementation functions as intended? technical debt? test coverage? security issues? maintainability? documentation in sync with implementation? If you address all the findings after each review cycle (and addressing a finding can mean clarifying a requirement to relax about certain unimportant aspects), eventually the findings slow down and only surface ridiculously unimportant things.
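A minimal sketch of that multi-pass loop, assuming the Claude Code CLI's non-interactive print mode (`claude -p`); the focus list and prompt wording are just illustrations, so swap in whatever agent you actually use:

```python
import subprocess

# One pass per review focus; repeat the whole loop until a pass
# stops producing substantive findings.
FOCUSES = [
    "documentation complete and in sync with implementation",
    "implementation functions as intended",
    "technical debt",
    "test coverage",
    "security issues",
    "maintainability",
]

def review_pass(focus: str) -> str:
    prompt = (
        f"Review the current changes with a single focus: {focus}. "
        "Report only real, actionable findings; reply NO FINDINGS if clean."
    )
    result = subprocess.run(
        ["claude", "-p", prompt],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

for focus in FOCUSES:
    print(f"--- {focus} ---")
    print(review_pass(focus))
    # In practice: address (or consciously waive) each finding,
    # then re-run the loop until it only surfaces trivia.
```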

[–] FaceDeer@fedia.io 1 points 8 hours ago (1 children)

A thing I found quite amusing about the AI agents I've toyed with is that they have a step where they do a code review of their changelist, usually switching to a different "persona" when they write it so that they're not seeing it as "their own" code. It's funny reading the critiques and compliments it gives the "other agent" whose changes it's checking.

I haven't seen this feature yet, but it might be a good future enhancement to ensure that the harness literally uses a different model for the code review than the one that wrote the code in the first place. If Claude wrote the code, have GPT do the review, and vice versa. I wouldn't be surprised if the feature already exists and I just haven't spotted it, though; things change fast.
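A rough sketch of how that cross-lineage review could be wired up by hand, assuming the official `anthropic` and `openai` Python SDKs with API keys in the environment (the model names are illustrative; pin whatever is current):

```python
from anthropic import Anthropic
from openai import OpenAI

def write_code(task: str) -> str:
    # One lineage writes the code...
    resp = Anthropic().messages.create(
        model="claude-sonnet-4-5",  # illustrative model name
        max_tokens=4096,
        messages=[{"role": "user", "content": f"Write code for: {task}"}],
    )
    return resp.content[0].text

def review_code(code: str) -> str:
    # ...and a different lineage reviews it, so the reviewer
    # doesn't share the author's blind spots.
    resp = OpenAI().chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{
            "role": "user",
            "content": "Review this code as a skeptical senior engineer:\n\n" + code,
        }],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    code = write_code("a function that parses ISO 8601 timestamps")
    print(review_code(code))
```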

[–] MangoCats@feddit.it 1 points 8 hours ago (1 children)

I use Cursor for work (Claude Code at home), and Cursor gives you the option to select your model. I've dabbled a bit with having GPT review Claude's code; I haven't found it dramatically better than just prompting Claude to "wear the reviewer hat now."

[–] FaceDeer@fedia.io 2 points 8 hours ago

Yeah, I wouldn't use a framework that didn't let you select the base model. I'm just thinking about having it automatically switch to a different one during the review "phase". It's not as popular a coding agent these days, but I like using Google's Antigravity, and it can be told to go through the sequence of steps "plan -> write documentation -> implement the plan -> run unit tests -> do a code review" automatically, without needing to be prompted at each step. That's where it would be nice to have it switch models automatically for the review.

"Wear the reviewer hat now" does seem to work quite well with the same model, but if more models from different lineages are available it just seems like the right thing to do to switch to another one.

[–] Gsus4@mander.xyz 1 points 9 hours ago

Ask it what the Helvetica scenario is.

So using it as my emotional dumbing machine is wrong?

[–] Hiro8811@lemmy.world 8 points 1 day ago (2 children)

I don't think that's a good idea. If you can't find an explanation online, that means there isn't much info available, in which case the best thing would be to ask on a forum; that way, other people looking for that info will find it.

[–] arcine@jlai.lu 3 points 13 hours ago (1 children)

Usually, the LLM's response will be incomplete or partially incorrect, but it's often good enough to get un-stuck.

It will usually contain some keywords you can look up, and some bits that raise further questions for you to answer (for which the LLM should, again, not be your first choice).

[–] MangoCats@feddit.it 1 points 10 hours ago

There's also the aspect of giving the LLM another prompt, and another, and another: have it build up a local documentation set based on its internet research, then keep researching and refining that set, keeping the things you trust and filtering out the sketchy stuff...

Ultimately, the LLM is a tool you are using. If you use it like a 4th grader copying a paragraph out of the encyclopedia, you're probably not going to get great results, especially because today's reference materials aren't the highly edited/vetted material encyclopedias were; they're internet forums full of self-confident idiots blathering on about whatever they think they know something about (like me, here...). So, back to the tool thing: if you use it well, you can make nice things. If you use it lazily, don't be surprised when your boss decides he doesn't need you at your salary to push that button.

[–] Shayeta@feddit.org 10 points 1 day ago

Not really; Google results have been just that bad for the last 10 years. I can spend 10 minutes looking for a piece of documentation and not find it, or I can prompt an internet-connected AI and have it spit out links to the relevant docs. It's gotten THAT bad.

[–] Treczoks@lemmy.world 15 points 1 day ago (1 children)

The main problem here is the software developers who don't notice their own brain rot.

[–] MangoCats@feddit.it 2 points 9 hours ago

Self-awareness is all too rare.

[–] FaceDeer@fedia.io 37 points 2 days ago (9 children)

Developers who are told to use AI whether they like it or not, however, tell a different story.

Well, there's the problem.

I'm a software developer and I say that AI is the greatest force-multiplier that's been introduced into the field since the compiler. I love using it; it handles the most tedious and annoying parts of the process. But there are situations I don't want to use it in, and of course being forced to use it would give me a more negative opinion of it. Obviously.

[–] scarabic@lemmy.world 2 points 11 hours ago

I didn’t read this as “people who like it in some situations being forced to use it in other situations,” but rather as people who are against it as a whole being forced to use it at all. And yeah, those folks are going to have a bad time, and won’t be in their jobs long. Just facts.

[–] MangoCats@feddit.it 0 points 9 hours ago

In the late 1980s there was a time when we seriously weighed the option of hand assembly vs. using compilers, and hand assembly didn't always lose. In the early 1990s I wanted to use C++, but the available compiler for IBM-compatible PCs was too buggy to be of value.

By the mid 1990s that had changed: good C compilers were exceeding all but the highest-effort human assembly code, and if you didn't like how something looked in assembly, you could much more easily "fix it" with a tweak to the C code instead of the assembly. I feel like we're sort of getting there with AI agent LLMs today: if you don't like what one provided, tell it why and let it try again. For the time invested, using the tool usually gets you a better product, faster and more easily, than calling it a slop box and doing it all yourself.

[–] takeda@lemmy.dbzer0.com 7 points 1 day ago

I’m a software developer and I say that AI is the greatest force-multiplier that’s been introduced into the field since the compiler.

As a person who works with coworkers who fully embraced it, it doesn't look like they are any faster. There is one group that is faster, but they don't verify their code and shift that burden onto whoever reviews the PR, who has to wade through their shit code (sorry, but it is unnecessarily complex, does things in weird ways, and I've even seen bugs in it that canceled each other out; I guess that comes from re-running until things work).

[–] RiverRabbits@lemmy.blahaj.zone -1 points 12 hours ago

People shit down your throat and you celebrate and beg for more. Absolute clown show.

[–] moustachio@lemmy.world 10 points 1 day ago (7 children)

There isn’t any credible evidence out there that actually shows LLMs are a “force multiplier.” That is almost certainly just a made-up marketing term from unprofitable chatbot companies.

[–] lepinkainen@lemmy.world 4 points 1 day ago

There are way too many ways to use LLMs for programming to make a blanket statement.

[–] FaceDeer@fedia.io 5 points 1 day ago (2 children)

In this case the evidence is literally first-hand experience. There is nothing that will change my mind on this because it's my direct personal experience from actual use.

I honestly don't care what marketing says, and if other people have different experiences then that's just them. In my personal actual real-world experience I found that they let me get tons more done and their quality of work is perfectly fine as long as you're using the right tools and giving them the right instructions.

The article says that developers disagree in situations where they are "forced" to use AI, and that's fair; it doesn't make sense to force a tool to be used for something it's not good at. They might also be using it wrong. I use it whenever it's better than not using it, and that ends up being quite often in my workflow.

[–] MangoCats@feddit.it 1 points 9 hours ago

using the right tools and giving them the right instructions.

The right tools are definitely key. Back an eternity ago, like October 2025, there was only Claude, IMO, if you wanted anything bigger than about a page of code. The others have come a long way since - better than Claude was then, and I still feel like Claude is out in front, though by a less dramatic margin now.

As for "the right instructions" - I'd say it's more of "use the right process" which basically involves applying all those best practices that have developed over the past decades for human development, but we old farts from back before their time "don't need all that, it's a waste of time" because, basically, we internally practice most of the discipline without doing the documentation. With the AI tools: document your requirements, your architecture, tool choice selection process, designs, development plan, comment the code with traceability to why the code is being written, unit and integration tests, reviews, lessons learned, etc. etc. Having all that documentation kept with the project, well organized, is key to "bringing the AI agent up to speed" which you may be doing often. They really do demonstrate the eternal sunshine of the spotless mind, so if you have them take the time to write everything relevant down as they go (not just the code), then when a new one comes online it can jump into the middle of a development plan without repeating (as many) mistakes / making (as many) bad assumptions.

To be brutally honest, working with AI coding agents reminds me a LOT of working with overseas programmer consultants - if you don't get everything in writing you're gonna have a bad time.

[–] innermachine@lemmy.world 4 points 1 day ago (1 children)

Unfortunately you're being downvoted by the echo-chamber participants who have to make sure you know that your opinion is wrong and theirs is better. AI is a tool, just like my impact gun. Yeah, there are times when you absolutely should not use an impact gun on something, but it's THE tool for some situations. And yeah, using an impact gun where you shouldn't will get you in trouble, just like using AI in situations where you shouldn't will get you in trouble. There is nothing new on that front!

[–] neclimdul@lemmy.world 14 points 2 days ago (26 children)

I kind of agree it's a multiplier. But so far, every time I've had it do something, it's written such an ugly turd that I have to rewrite it all, taking more time than if I'd just solved the problem myself to start with. Maybe someday, but it's not up to the quality I expect of development.

[–] scarabic@lemmy.world 0 points 11 hours ago (1 children)

I got a lot of garbage when I didn’t know what I was doing and just tried AI once or twice a week with lazy prompts, expecting perfection without iterations. I’d huff and crow about how I had to fix things, whereas now I just tell it what to fix, or, even better, how to get it right the first time. I’ve built up my library of skills and prompts and refined them quite a bit. The models keep getting smarter. You should really look at your tools and methods; it sounds like you’re stuck in 2024.

[–] MangoCats@feddit.it 1 points 8 hours ago

I've been using it rather heavily since about October of last year, and I definitely notice the models getting better, and the tools around the models starting to do things automatically that I had to manually prompt for last year (especially remembering key instructions). I also believe I'm getting better at using them; how much that contributes to my overall results is extremely hard to quantify, but the feeling is definitely there. Like, last October I used to "just ask" for things without having a documented set of requirements. Today, I just know that a requirements document is necessary when the level of complexity is above... well, above a one-off example of how to do something relatively trivial.

I kind of agree it’s a multiplier.

It's definitely a force multiplier; it's just that the factor after the X can be less than 1.0.
