this post was submitted on 02 Feb 2026
197 points (97.6% liked)

Technology

[–] ExLisper@lemmy.curiana.net 13 points 13 hours ago

Interesting. I thought this would be another post about slop PRs and bug reports, but no, it's about open source projects not being promoted by AI and missing out on adoption and revenue opportunities.

So I think we're definitely seeing (and will see more of) a 'templatization' of software development. Ways of writing apps that are easy for AI to understand, and that it promotes, will see wider and wider adoption. Not just tools and libraries, but also folder structures, design patterns and so on. I'm not sure how bad this will be long term. Maybe it will just stabilize tooling? Do we really need a new React state management library every 6 months?

Hard to tell how this will affect the development of proper tools (not vibe coded ones). Commercial tools struggling to get traction will definitely suffer, but most of the libraries I use are hobby projects. I still see good tools with good documentation getting enough attention to grow, even fairly obscure ones. Then again, those tools often struggle to get enough contributors... Are we going to see a split between vibe coded template apps for junior devs and proper tools for professionals? Will the EU step in and fund the core projects? I still see a way forward, so I'm fairly optimistic, but it's really hard to predict what will happen in a couple of years.

[–] Phoenix3875@lemmy.world 5 points 13 hours ago

The killing part is not necessarily people vibe coding programs into OSS projects. Even if the OSS itself is not vibe coded, people using AI to integrate with it will result in lower engagement and thus kill the ecosystem:

Together, these patterns suggest that AI mediation can divert interaction away from the surfaces where OSS projects monetize and recruit contributors.

From Section 2.3 of the paper.

[–] WanderingThoughts@europe.pub 113 points 23 hours ago* (last edited 23 hours ago) (4 children)

Only until AI investor money dries up and vibe coding gets very expensive, very quickly. Kinda like how Uber isn't way cheaper than a taxi now.

[–] blaggle42@lemmy.today 21 points 23 hours ago
[–] Zwuzelmaus@feddit.org 7 points 19 hours ago (2 children)

until AI investor money dries up

Is that the latest term for "when hell freezes over"?

[–] massacre@lemmy.world 17 points 18 hours ago

Microsoft steeply lowered expectations for its AI sales team, though they've denied this since they got pummelled in their quarterly, and there's been a lot of news about investors being unhappy with all the circular AI investments pumping those stocks. When the bubble pops (and all signs point to that), investors will flee. You'll see consolidation, buy-outs, hell, maybe even some bullshit bailouts, but ultimately it has to be a sustainable model, and that means it will cost developers or they'll be pummeled with ads (probably both).

A majority of CEOs are saying their AI spend has not paid off. Those are the primary customers, not your average Joe. MIT reports a 95% generative AI failure rate at companies. Altman still hasn't turned a profit. There are serious power build-out problems for new AI centers (let alone the chips needed). It's an overheated, reactionary market. It's the dot-com bubble all over again.

There will be some more spending to make sure a good chunk of CEOs "add value" (FOMO), and then a critical juncture where AI spending contracts sharply when they continue to see no returns, accelerated if the US economy goes tits up. Then the dominoes fall.

[–] WanderingThoughts@europe.pub 4 points 15 hours ago

Hah, they wish. It's a business, and they need a return on investment eventually. Maybe if we were in a zero interest rate world again, but even that didn't last.

[–] percent@infosec.pub 3 points 21 hours ago (3 children)

I wouldn't be surprised if that's only a temporary problem - if it becomes one at all. People are quickly discovering ways to use LLMs more effectively, and open source models are starting to become competitive with commercial models. If we can continue finding ways to get more out of smaller, open-source models, then maybe we'll be able to run them on consumer or prosumer-grade hardware.

GPUs and TPUs have also been improving their energy efficiency. There seems to be a big commercial focus on that too, as energy availability is quickly becoming a bottleneck.

[–] WanderingThoughts@europe.pub 14 points 20 hours ago (2 children)

So far, there's a serious cognitive step needed to be productive that LLMs just can't make. They can output code, but they don't understand what's going on. They don't grasp architecture. Large projects don't fit in their token window. Debugging something vague doesn't work. Fact checking isn't something they do well.

[–] VibeSurgeon@piefed.social 3 points 15 hours ago (1 children)

So far, there's a serious cognitive step needed to be productive that LLMs just can't make. They can output code, but they don't understand what's going on. They don't grasp architecture. Large projects don't fit in their token window.

There's a remarkably effective solution for this, that helps both humans and models alike - write documentation.

It's actually kind of funny how the LLM wave has sparked a renaissance of high-quality documentation. Who would have thought?
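As a sketch of what that looks like in practice: many repos now keep a short agent-facing doc at the root that summarizes exactly the things models struggle to infer, like architecture, conventions, and how to run checks. The filename and contents below are illustrative, not any particular tool's required format:

```markdown
# Agent notes (illustrative example)

## Architecture
- `server/` - HTTP API; talks to Postgres via `server/db/`
- `web/` - frontend; must not import from `server/`

## Conventions
- New endpoints need a test in `server/tests/`
- Run `make lint test` before committing
```

The nice side effect is that the same file doubles as onboarding notes for human contributors.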

[–] WanderingThoughts@europe.pub 3 points 15 hours ago (2 children)

High-quality documentation assumes there's someone with experience working on this. That's not the vibe coding they're selling.

[–] VibeSurgeon@piefed.social 2 points 15 hours ago

Completely hands-off, no-review, no-technical-experience vibe coding is obviously snake oil, yeah.

This is a pretty large problem when it comes to learning about LLM-based tooling: lots of noise, very little signal.

[–] Zos_Kia@lemmynsfw.com 1 points 13 hours ago

I'm not aware of what they're selling, but every vibe coder I know produces obsessive amounts of documentation. It's kind of baked into the tool (if you use Claude Code, at least); it will just naturally produce a lot of documentation.

[–] percent@infosec.pub 6 points 19 hours ago* (last edited 19 hours ago) (2 children)

They don't need the entire project to fit in their token windows. There are ways to make them work effectively in large projects. It takes some learning and effort, but I see it regularly in multiple large, complex monorepos.

I still feel somewhat new-ish to using LLMs for code (I was kinda forced to start learning), but when I first jumped into a big codebase with AI configs/docs from people who have been using LLMs for a while, I was kinda shocked. The LLM worked far better than I had ever experienced.

It actually takes a bit of skill to set up a decent workflow/configuration for these things. If you just jump into a big repo that doesn't have configs/docs/optimizations for LLMs, and/or you haven't figured out a decent workflow, then they'll be underwhelming and significantly less productive.

(I know I'll get downvoted just for describing my experience and observations here, but I don't care. I miss the pre-LLM days very much, but they're gone, whether we like it or not.)

[–] WanderingThoughts@europe.pub 2 points 15 hours ago (1 children)

It actually takes a bit of skill to set up a decent workflow/configuration for these things

Exactly this. You can't just replace experienced people with it, and that's basically how it's sold.

[–] percent@infosec.pub 2 points 11 hours ago

Yep, it's a tool for engineers. People who try to ship vibe-coded slop to production will often eventually need an engineer when things fall apart.

[–] RIotingPacifist@lemmy.world 3 points 18 hours ago* (last edited 18 hours ago)

This sounds a lot like every framework; 20 years ago you could have written that about Rails.

Which IMO makes sense, because if the code isn't solving anything interesting, then you can generate it dynamically relatively easily, and it's easy to get demos up and running, but neither can help you solve interesting problems.

Which isn't to say it won't have a major impact on software for decades, especially low-effort apps.

[–] XLE@piefed.social 4 points 19 hours ago (1 children)

Can you cite some sources on the increased efficiency? Also, can you link to these lower priced, efficient (implied consumer grade) GPUs and TPUs?

[–] percent@infosec.pub 2 points 18 hours ago (1 children)

Oh, sorry, I didn't mean to imply that consumer-grade hardware has gotten more efficient. I wouldn't really know about that, but I assume most of the focus is on data centers.

Those were two separate thoughts:

  1. Models are getting better, and the tooling built around them is getting better, so hopefully we can get to a point where small models (capable of running on consumer-grade hardware) become much more useful.
  2. Some modern data center GPUs and TPUs compute more per watt-hour than previous generations.
[–] XLE@piefed.social 1 points 18 hours ago (2 children)

Can you provide evidence the "more efficient" models are actually more efficient for vibe coding? Results would be the best measure.

It also seems like costs for these models are increasing, and companies like Cursor had to stoop to offering people services below cost (before pulling the rug out from under them).

[–] percent@infosec.pub 1 points 11 hours ago* (last edited 2 hours ago)

Can you provide evidence the "more efficient" models are actually more efficient for vibe coding? Results would be the best measure.

Did I claim that? If so, then maybe I worded something poorly, because that's wrong.

My hope is that as models, tooling, and practices evolve, small models will be (future tense) effective enough to use productively so we won't need expensive commercial models.

To clarify some things:

  • I'm mostly not talking about vibe coding. Vibe coding might be okay for quickly exploring or (in)validating some concept/idea, but it tends to make things brittle and pile up a lot of tech debt if you let it.
  • I don't think "more efficient" (in terms of energy and pricing) models are more efficient for work. I haven't measured it, but the smaller/"dumber" models tend to require more cycles before they reach their goals, as they have to debug their code more along the way. However, with the right workflow (using subagents, etc.), you can often still reach the goals with smaller models.

There's a difference between efficiency and effectiveness. The hardware is becoming more efficient, while models and tooling are becoming more effective. The tooling/techniques to use LLMs more effectively also tend to burn a LOT of tokens.

TL;DR:

  • Hardware is getting more efficient.
  • Models, tools, and techniques are getting more effective.
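To put hypothetical numbers on the "burn a LOT of tokens" point (all prices and iteration counts below are invented for illustration, not real model pricing): a smaller, cheaper model that needs several debug cycles can burn far more tokens than a large model and still cost less per task.

```python
# Hypothetical illustration: a small model needs more attempts per task,
# but its per-token price is much lower. All numbers are invented.

def task_cost(tokens_per_attempt: int, attempts: int, price_per_mtok: float) -> float:
    """Total cost in dollars for one task."""
    total_tokens = tokens_per_attempt * attempts
    return total_tokens / 1_000_000 * price_per_mtok

# Large model: one attempt, pricier tokens (hypothetical $15/Mtok).
large = task_cost(tokens_per_attempt=50_000, attempts=1, price_per_mtok=15.0)

# Small model: three debug cycles, cheaper tokens (hypothetical $1/Mtok).
small = task_cost(tokens_per_attempt=50_000, attempts=3, price_per_mtok=1.0)

print(f"large: ${large:.2f}, small: ${small:.2f}")
# -> large: $0.75, small: $0.15
# The small model burns 3x the tokens (less efficient) yet still
# reaches the goal for less money (effective enough, given the workflow).
```

The point isn't the specific numbers, just that "efficient" (energy/price per token) and "effective" (reaching the goal) pull in different directions.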
[–] Zos_Kia@lemmynsfw.com 1 points 13 hours ago

I think this kind of claim really lies in a sour spot.

On the one hand, it's trivial to get an IDE, plug it into GLM 4.5 or some other smaller, more efficient model, and see how it fares on a project. But that's just anecdotal. On the other hand, model creators do this thing called benchmaxing, where they fine-tune their model to hell and back to respond well to specific benchmarks. And the whole culture around benchmarks is... I don't know, I don't like the vibe; it's all AGI maximalists wanking over percent changes in performance. Not fun. So yeah, evidence is hard to come by when there are so many snake oil salesmen around.

That said, it's pretty easy to check on your own. Install opencode, get $20 of GLM credit, make it write, deploy, and monitor a simple SaaS product, and see how you like it. Then do another one. And do a third with Claude Code as a control, if you can get a guest pass (I have some, hit me up if you're interested).

What is certain from casual observation is that yes, small models have improved tremendously in the last year, to the point where they're starting to get usable. Code generation is a much more constrained world than generalist text gen, and can be tested automatically, so progress is expected to continue at breakneck pace. Large models are still categorically better but this is expected to change rapidly.

[–] Infernal_pizza@lemmy.dbzer0.com 1 points 14 hours ago (1 children)

They've thought of that as well; soon nobody will be able to afford consumer-grade hardware.

[–] percent@infosec.pub 2 points 10 hours ago

Yeah true. I'm assuming (and hoping) that the problems with consumer grade hardware being less accessible will be temporary.

I have wristwatches with significantly higher CPU, memory, and storage specs than my first few computers, while consuming significantly less energy. I think the current state of LLMs is pretty rough but will continue to improve.

[–] philodendron 11 points 16 hours ago (2 children)

I just wanna say that's such a good thumbnail

[–] MonkderVierte@lemmy.zip 2 points 10 hours ago* (last edited 2 hours ago)

A Matrix guard thing but with cat details?

Btw, what do typesetters call that kind of image? I've seen "hero image" in some newspapers' HTML/CSS.

[–] QuandaleDingle@lemmy.world 3 points 13 hours ago

Oh yeah. If it was drawn by AI, well, it sure fooled me.

[–] TropicalDingdong@lemmy.world 48 points 23 hours ago (1 children)

Vibe coding is a black hole. I've had some colleagues try to pass stuff off.

What I'm learning about what matters is that the code itself is secondary to the understanding you develop by creating the code. You don't create the code? You don't develop the understanding. Without the understanding, there is nothing.

[–] Feyd@programming.dev 26 points 22 hours ago (1 children)

Yes. And using an LLM to generate the code, then developing the requisite understanding and making it maintainable, is slower than just writing it in the first place. And that effect compounds with repetition.

[–] Paragone@lemmy.world 4 points 20 hours ago (1 children)

The Register had an article, a year or two ago, about using AI the opposite way: instead of writing the code, someone used it to discover security problems in existing code. They said it was really useful for that, and most of the things it identified, including a codebase that was sending private information off to some internet server, really were problems.

I wonder if using LLMs as editors, instead of writers, would be a better use for these things?

_ /\ _

[–] Whostosay@sh.itjust.works 9 points 20 hours ago

A second pair of eyes has always been an acceptable way to use this, IMO, but it shouldn't be the primary one.

[–] RalfWausE@feddit.org 10 points 18 hours ago

If the abominable intelligence is killing every corner of the things we consider good, it's time to start killing the "AI"...

[–] MadMadBunny@lemmy.ca 34 points 1 day ago (1 children)

How AI is killing everything.

[–] frunch@lemmy.world 1 points 9 hours ago

Which is really its purpose, as far as I can see.

[–] statelesz@slrpnk.net 24 points 23 hours ago (4 children)

LLMs definitely kill trust in open source software, because now everything can be a vibe-coded mess, and it's sometimes hard to check.

[–] RmDebArc_5@feddit.org 32 points 23 hours ago (2 children)

LLMs definitely kill trust in ~~open source~~ software, because now everything can be a vibe-coded mess, and it's sometimes hard to check.

[–] nodiratime@lemmy.world 1 points 12 hours ago

I don't trust proprietary software anyway.

[–] bryndos@fedia.io 10 points 23 hours ago (3 children)

Might make open source more trustworthy. It can't be any harder to check than closed source.

[–] rozodru@piefed.social 8 points 23 hours ago (9 children)

yeah, it's to the point now where if I see emojis in the README.md on a repo, I just don't even bother.

[–] mintiefresh@piefed.ca 8 points 23 hours ago (1 children)

I used to use emojis in my documentation very lightly, because I thought they were a good way to provide visual cues. But now, with all the people vibe coding their own README docs with freaking emojis everywhere, I've had to stop using them.

Mildly annoying.

[–] Feyd@programming.dev 5 points 23 hours ago (1 children)

✨ especially this one ✨

[–] dgriffith@aussie.zone 8 points 20 hours ago* (last edited 20 hours ago)

Is the ✨sparkly emoji✨ the <BLINK> of the 21st century? Discuss.

[–] phil@lymme.dynv6.net 8 points 21 hours ago

Open source is not only about publishing code: it's about quality, verifiable, reproducible code at work. If LLMs can't deliver that, those "vibe coding" projects will hit a hard wall. Still, it's quite clear they're badly impacting the FOSS ecosystem.
