this post was submitted on 12 Mar 2026
Programming
Good example of why I don't rely on technology I don't control. I want my workflow to be future-proof and have a predictable cost.
Yes. Whenever anyone proposes building our tools on top of these services, I ask: "what will happen to this when they start charging what it actually costs to run these models?"
In general I agree with you, but LLMs are the one exception where it's neither practical nor cost-effective to run them locally. If you want to use them, the better option is by far to pay someone for the service.
That's because we're currently in the phase where they let you try the good stuff cheap to get you hooked.
The second-best option is an inference provider for open-weight models: if they raise prices or stop offering a model, you can get the same model from another provider or eventually upgrade to self-hosting.
I agree. I use OpenRouter myself.
https://openrouter.ai/
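The portability argument above can be sketched in code. OpenRouter, like most inference providers for open-weight models, exposes an OpenAI-compatible chat-completions API, so switching providers is mostly a matter of changing the base URL. This is a minimal sketch using only the standard library; the default model identifier and environment-variable names are illustrative assumptions, not a documented setup:

```python
import json
import os
import urllib.request

# Any OpenAI-compatible endpoint works here; moving to another provider
# (or a self-hosted server) mostly means changing BASE_URL and MODEL.
BASE_URL = os.environ.get("LLM_BASE_URL", "https://openrouter.ai/api/v1")
MODEL = os.environ.get("LLM_MODEL", "meta-llama/llama-3.1-8b-instruct")  # illustrative


def build_request(prompt: str) -> dict:
    """Build a standard chat-completions payload (provider-agnostic)."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }


def complete(prompt: str) -> str:
    """Send the request; expects an API key in LLM_API_KEY."""
    data = json.dumps(build_request(prompt)).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=data,
        headers={
            "Authorization": f"Bearer {os.environ['LLM_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(json.dumps(build_request("hello"), indent=2))
```

Because the payload format is the de-facto standard across providers, the only provider-specific parts are the URL, the key, and the model name, which is exactly the lock-in escape hatch the comment describes.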