this post was submitted on 02 Feb 2026
217 points (97.4% liked)

[–] percent@infosec.pub 1 points 23 hours ago* (last edited 14 hours ago)

> Can you provide evidence the "more efficient" models are actually more efficient for vibe coding? Results would be the best measure.

Did I claim that? If so, then maybe I worded something poorly, because that's wrong.

My hope is that, as models, tooling, and practices evolve, small models will become (future tense) effective enough to use productively, so we won't need expensive commercial models.

To clarify some things:

  • I'm mostly not talking about vibe coding. Vibe coding might be okay for quickly exploring or (in)validating some concept/idea, but vibe-coded projects tend to become brittle and pile up a lot of tech debt if you let them.
  • I don't think "more efficient" (in terms of energy and pricing) models are more efficient for work. I haven't measured it, but the smaller/"dumber" models tend to require more cycles before they reach their goals, as they have to debug their code more along the way. However, with the right workflow (using subagents, etc.), you can often still reach the goals with smaller models.
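The debug-and-retry dynamic in the second bullet can be sketched as a toy loop. Everything here is illustrative, not a measurement: the "models" are stand-in functions, and the cycle counts are made up to show the shape of the cost difference, not real data.

```python
# Toy sketch of an agentic coding loop: the model proposes code, a check
# (tests, linter, etc.) accepts or rejects it, and each rejected attempt
# costs another cycle of tokens.

def run_agent(generate, check, max_cycles=10):
    """Ask the model for code until it passes the check; return cycles used."""
    for cycle in range(1, max_cycles + 1):
        code = generate(cycle)
        if check(code):
            return cycle  # goal reached; cycles spent ~ token cost
    return None  # gave up within the budget

# Hypothetical stand-ins: a "smaller" model that only gets it right on its
# 4th attempt, and a "bigger" model that gets it right immediately.
small_model = lambda cycle: "ok" if cycle >= 4 else "buggy"
big_model = lambda cycle: "ok"
passes = lambda code: code == "ok"

print(run_agent(small_model, passes))  # 4
print(run_agent(big_model, passes))    # 1
```

The point of the sketch: both models can reach the goal, but the smaller one burns more cycles (and tokens) debugging along the way, which is why "cheaper per token" doesn't automatically mean "cheaper per task."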

There's a difference between efficiency and effectiveness. The hardware is becoming more efficient, while models and tooling are becoming more effective. The tooling/techniques to use LLMs more effectively also tend to burn a LOT of tokens.

TL;DR:

  • Hardware is getting more efficient.
  • Models, tools, and techniques are getting more effective.