this post was submitted on 07 Dec 2025
1082 points (98.1% liked)

Just want to clarify, this is not my Substack, I'm just sharing this because I found it insightful.

The author describes himself as a "fractional CTO" (no clue what that means, don't ask me) and advisor. His clients asked him how they could leverage AI. He decided to experience it for himself. From the author (emphasis mine):

I forced myself to use Claude Code exclusively to build a product. Three months. Not a single line of code written by me. I wanted to experience what my clients were considering—100% AI adoption. I needed to know firsthand why that 95% failure rate exists.

I got the product launched. It worked. I was proud of what I’d created. Then came the moment that validated every concern in that MIT study: I needed to make a small change and realized I wasn’t confident I could do it. My own product, built under my direction, and I’d lost confidence in my ability to modify it.

Now when clients ask me about AI adoption, I can tell them exactly what 100% looks like: it looks like failure. Not immediate failure—that’s the trap. Initial metrics look great. You ship faster. You feel productive. Then three months later, you realize nobody actually understands what you’ve built.

top 50 comments
[–] dejected_warp_core@lemmy.world 48 points 6 days ago (1 children)

To quote your quote:

I got the product launched. It worked. I was proud of what I’d created. Then came the moment that validated every concern in that MIT study: I needed to make a small change and realized I wasn’t confident I could do it. My own product, built under my direction, and I’d lost confidence in my ability to modify it.

I think the author just independently rediscovered "middle management". Indeed, when you delegate the gruntwork under your responsibility, those same people are who you go to when addressing bugs and new requirements. It's not on you to effect repairs: it's on your team. I am Jack's complete lack of surprise. The idea that you can rely on AI to do nuanced work like this and arrive at exactly the correct answer is naive at best. I'd be sweating too.

[–] fuck_u_spez_in_particular@lemmy.world 11 points 6 days ago (1 children)

The problem though (with AI compared to humans): a human team learns, i.e. at some point they figure out what the mistake was and avoid making it again. With AI instead of humans: well, maybe the next or a different model will fix it... maybe.

And what has become very clear to me after trying to use these models: the larger the code-base, the worse the AI gets, to the point of not helping at all or even being destructive. The exception is dissecting small, isolatable pieces of independent code (i.e. keeping the context small for the AI).

Humans likely get slower with a larger code-base, but they (usually) don't arrive at a point where they can't progress any further.

[–] MangoCats@feddit.it 3 points 5 days ago

Humans likely get slower with a larger code-base, but they (usually) don’t arrive at a point where they can’t progress any further.

Notable exceptions like: https://peimpact.com/the-denver-international-airport-automated-baggage-handling-system/

[–] Nalivai@lemmy.world 32 points 6 days ago (1 children)

They never actually say what "product" they made, it's always "shipped product" like they're a fucking Amazon warehouse. I suspect it's because it's some trivial webpage that would take a student an afternoon to whip up, which they spent three days arguing with an autocomplete to shit out.

[–] e461h@sh.itjust.works 7 points 6 days ago

Cloudflare, AWS, and other recent major service outages are what come to mind re: AI code. I’ve no doubt it is getting forced into critical infrastructure without proper diligence.

Humans are prone to error so imagine the errors our digital progeny are capable of!

[–] phed@lemmy.ml 25 points 6 days ago (2 children)

I do a lot with AI but it is not good enough to replace humans, not even close. It repeats the same mistakes after you tell it no, and it doesn't remember things from 3 messages ago when it should. You have to keep re-explaining the goal to it. It's wholly incompetent. And yeah, when you have it do stuff you aren't familiar with or didn't create yourself, definitely. I have it write a commentary, or I take the time out right then to ask it what x or y does, then I add a comment.

[–] kahnclusions@lemmy.ca 16 points 6 days ago* (last edited 6 days ago) (1 children)

Even worse, the ones I’ve evaluated (like Claude) constantly fail to even compile because, for example, they mix usages of different SDK versions. When instructed to use version 3 of some package, it will add the right version as a dependency but then still write code against missing or deprecated APIs from the previous version that are obviously unavailable.

More time (and money, and electricity) is wasted trying to prompt it towards correct code than simply writing it yourself, and at the end of the day you have a smoking turd that no one even understands.

LLMs are a dead end.

[–] MangoCats@feddit.it 4 points 6 days ago (4 children)

constantly fail to even compile because, for example, they mix usages of different SDK versions

Try an agentic tool like Claude Code - it closes the loop by testing the compilation for you, and fixing its mistakes (like human programmers do) before bothering you for another prompt. I was where you are six months ago; the tools have improved dramatically since then.
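
Roughly what "closing the loop" looks like, as a sketch rather than Claude Code's actual internals (ask_llm and the retry prompt here are hypothetical):

```python
import subprocess

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a code-generating model call."""
    raise NotImplementedError

def build_until_it_compiles(task: str, max_attempts: int = 5) -> str | None:
    prompt = task
    for _ in range(max_attempts):
        code = ask_llm(prompt)
        with open("generated.py", "w") as f:
            f.write(code)
        # Close the loop: check whether the result "compiles" (a Python syntax check here)
        result = subprocess.run(
            ["python", "-m", "py_compile", "generated.py"],
            capture_output=True, text=True,
        )
        if result.returncode == 0:
            return code  # only now is the human bothered for another prompt
        # Otherwise feed the compiler output straight back and try again
        prompt = f"{task}\n\nThe previous attempt failed to compile:\n{result.stderr}\nFix it."
    return None  # give up after max_attempts
```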

From TFS:

I needed to make a small change and realized I wasn’t confident I could do it. My own product, built under my direction, and I’d lost confidence in my ability to modify it.

That sounds like a "fractional CTO problem" to me (IMO a fractional CTO is a guy who convinces several small companies that he's a brilliant tech genius who will help them make their important tech decisions without actually paying full-time attention to any of them. Actual tech experience: optional.)

If you have lost confidence in your ability to modify your own creation, that's not a tools problem - you are the tool, that's a you problem. It doesn't matter whether you're using an LLM coding tool, a team of human developers, or a pack of monkeys to code your applications: if you don't document, test, and formally develop an "understanding" of your product that not only you but all stakeholders can grasp to the extent they need to, you're just letting the development run wild - a lack of formal software development process maturity. LLMs can do that faster than a pack of monkeys, or a bunch of kids you hired off Craigslist, but it's the exact same problem no matter how you slice it.

[–] echodot@feddit.uk 10 points 6 days ago (2 children)

There's no point telling it not to do x, because as soon as you mention x it goes into its context window.

It has no filter. It's as if you had no choice in your actions and had to act on every thought that came into your head: if you were told not to do a thing, you would immediately start thinking about doing it.

[–] kahnclusions@lemmy.ca 4 points 6 days ago* (last edited 6 days ago) (1 children)

I’ve noticed this too, it’s hilarious(ly bad).

Especially with image generation, which we were using to make some quick avatars for a D&D game. “Draw a picture of an elf.” Generates images of elves that all have one weird earring. “Draw a picture of an elf without an earring.” Great, now the elves have even more earrings.

[–] Evotech@lemmy.world 24 points 6 days ago (2 children)

Just ask the AI to make the change?

[–] theneverfox@pawb.social 21 points 6 days ago (13 children)

AI isn't good at changing code, or really even understanding it... It's good at writing it, ideally 50-250 lines at a time

[–] Evotech@lemmy.world 6 points 6 days ago* (last edited 6 days ago) (1 children)

I'm just not following the mindset of "get AI to code your whole program" and then have real people maintain it? Sounds counterproductive.

I think you need to write your code for an AI to maintain. Use static code analysers like SonarQube to ensure that the code is maintainable (cognitive complexity) and that functions are small and well defined as you write it.
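
As an illustration (my own made-up example, not SonarQube output), this is the kind of refactor a cognitive-complexity rule pushes you toward: guard clauses plus one small, well-defined helper per decision.

```python
# Everything here is a toy example; dispatch() and backorder() are placeholders.
def dispatch(item): ...
def backorder(item): ...

# High cognitive complexity: the kind of nesting an analyser flags
def ship_order_nested(order):
    if order is not None:
        if order.paid:
            if order.items:
                for item in order.items:
                    if item.in_stock:
                        dispatch(item)
                    else:
                        backorder(item)

# Lower cognitive complexity: guard clauses and one small helper per decision
def ship_item(item):
    if item.in_stock:
        dispatch(item)
    else:
        backorder(item)

def ship_order_flat(order):
    if order is None or not order.paid:
        return
    for item in order.items:
        ship_item(item)
```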

[–] theneverfox@pawb.social 8 points 6 days ago (2 children)

I don't think we should be having the AI write the program in the first place. I think we're barreling towards a place where remotely complicated software becomes a lost technology

I don't mind if AI helps here and there, I certainly use it. But it's not good at custom fit solutions, and the world currently runs on custom fit solutions

AI is like no code solutions. Yeah, it's powerful, easier to learn and you can do a lot with it... But eventually you will hit a limit. You'll need to do something the system can't do, or something you can't make the system do because no one properly understands what you've built

At the end of the day, coding is a skill. If no one is building the required experience to work with complex systems, we're going to be swimming in an endless ocean of vibe coded legacy apps in a decade

I just don't buy that AI will be able to take something like a set of State regulations and build a compliant outcome. Most of our base digital infrastructure is like that, or it uses obscure ancient systems that LLMs are basically allergic to working with

To me, we're risking everything on achieving AGI (and using it responsibly) before we run out of skilled workers, and we're several game changing breakthroughs from achieving that

[–] BarneyPiccolo@lemmy.today 11 points 6 days ago (2 children)

I don't know shit about anything, but it seems to me that the AI already thought it gave you the best answer, so going back to the problem for a proper answer is probably not going to work. But I'd try it anyway, because what do you have to lose?

Unless it gets pissed off at being questioned, and destroys the world. I've seen more than a few movies about that.

[–] Evotech@lemmy.world 6 points 6 days ago* (last edited 6 days ago) (4 children)

You are in a way correct. If you keep sending the context of the "conversation" (in the same chat) it will reinforce its previous implementation.

The way AIs remember stuff is that you just give the model the entire thread of context together with your new question. It's all just text in, text out.

But once you start a new conversation (meaning you don't give it any previous chat history) it's essentially a "new" AI which doesn't know anything about your project.

This will have a new random seed, and if you ask it to look for mistakes etc. it will happily tell you that the last implementation was all wrong and here's how to fix it.

It's like a Minecraft world: the same seed will get you the same map every time. With AIs it's the same thing, ish. Start a new conversation or ask a different model (GPT, Google, Claude etc.) and it will do things in a new way.
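
For the curious, a minimal sketch of the "text in, text out" point; call_llm is a made-up stand-in here, not any particular vendor's API:

```python
def call_llm(messages: list[dict]) -> str:
    """Hypothetical stand-in for a chat-completion call."""
    raise NotImplementedError

history: list[dict] = []   # the "memory" is nothing more than this list

def ask(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    reply = call_llm(history)          # the entire thread is resent every time
    history.append({"role": "assistant", "content": reply})
    return reply

# "Starting a new conversation" is just throwing the list away:
history.clear()                        # now the model knows nothing about your project
```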

[–] TheBlackLounge@lemmy.zip 10 points 6 days ago (2 children)

Doesn't work. Give it any semi-complex problem with multiple constraints and your team of AIs keeps running in circles. Very frustrating if you know it can be done. But what if you're a "fractional CTO" and you feed it actually contradictory constraints? We haven't yet gotten to AIs that will tell you that what you're asking for is impossible.

[–] Evotech@lemmy.world 3 points 6 days ago

Yeah, right now you have to know what's possible and nudge the AI in the right direction, towards what you consider the correct approach, if you want it to do things in an optimized way.

[–] MangoCats@feddit.it 3 points 5 days ago

AI already thought it gave you the best answer, so going back to the problem for a proper answer is probably not going to work.

There's an LLM concept/parameter called "temperature" that determines basically how random the answer is.

As deployed, LLMs like Claude Sonnet or Opus have a temperature that won't give the same answer every time, and when you combine this with feedback loops that point out failures (like compilers that tell the LLM when its code doesn't compile), the LLM can (and does) do the old Beckett: try, fail, try again, fail again, fail better next time - and usually reaches a solution that passes all the tests it is aware of.

The problem is: with a context window limit of 200,000 tokens, it's not going to be aware of all the relevant tests in more complex cases.
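
To make that concrete, here's a toy sketch (nothing Claude-specific, and the numbers are invented): the model's raw scores get divided by the temperature before the softmax, so a low temperature makes the top token dominate and a higher one flattens the distribution.

```python
import math, random

def sample_with_temperature(logits: dict[str, float], temperature: float) -> str:
    """Toy next-token sampler: softmax over scores divided by temperature."""
    scaled = {tok: score / temperature for tok, score in logits.items()}
    peak = max(scaled.values())
    weights = {tok: math.exp(s - peak) for tok, s in scaled.items()}  # stable softmax
    total = sum(weights.values())
    r = random.uniform(0, total)
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # fallback for floating-point edge cases

toy_logits = {"print(": 2.0, "return ": 1.0, "import ": 0.1}
print(sample_with_temperature(toy_logits, 0.2))  # low temperature: almost always the top token
print(sample_with_temperature(toy_logits, 1.5))  # higher temperature: reruns diverge more often
```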

[–] DupaCycki@lemmy.world 9 points 5 days ago

Personally I tried using LLMs for reading error logs and summarizing what's going on. I can say that even with somewhat complex errors, they were almost always right and very helpful. So basically the general consensus holds: use them as assistants within a narrow scope.

Though it should also be noted that I only did this at work. While it seems to work well, I think I'd still limit such use in personal projects, since I want to keep learning more, and private projects are generally much more enjoyable to work on.

Another interesting use case I can highlight is using a chatbot as documentation when the actual documentation is horrible. However, this only works within the same ecosystem, so for instance Copilot with MS software. Microsoft definitely trained Copilot on its own stuff and it's often considerably more helpful than the docs.

[–] lepinkainen@lemmy.world 14 points 6 days ago* (last edited 4 days ago) (1 children)

Same thing would happen if they were a non-coder project manager or designer for a team of actual human programmers.

Stuff done, shipped and working.

“But I can’t understand the code 😭”, yes. You were the project manager, why should you?

[–] JcbAzPx@lemmy.world 36 points 6 days ago (10 children)

I think the point is that someone should understand the code. In this case, no one does.

[–] SaveTheTuaHawk@lemmy.ca 4 points 6 days ago

So...like dealing with Oracle.

[–] minorkeys@lemmy.world 7 points 6 days ago (2 children)

It looks like a rigid design philosophy that requires a complete rebuild for any change. If the speed of production becomes fast enough, and the cost low enough, regenerating the entire program for every change would become feasible and cost-effective.

[–] entropicdrift@lemmy.sdf.org 5 points 6 days ago

... as long as the giant corpos paying through the nose for the data centers continue to vastly underprice their products in order to make us all dependent on them.

Just wait till everyone's using it and the prices will skyrocket.

[–] MangoCats@feddit.it 3 points 5 days ago

I frequently feel that urge to rebuild from the ground (specifications) up, to remove the "old bad code" from the context window and get back to the "pure" specification as the source of truth. That only works up to a certain level of complexity. When it works it can be a very fast way to "fix" a batch of issues, but when the problem/solution is big enough, the new implementation will have new issues that may take longer to identify than just grinding through the existing issues. A devil-whose-face-you-know kind of choice.
