this post was submitted on 04 Jul 2025
224 points (100.0% liked)
TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it's amusing debate.
For actually-good tech, you want our NotAwfulTech community
you are viewing a single comment's thread
But you can run models locally too, so they'll need to offer something worth paying for compared to hosting your own.
Honestly, hosting my own and building a long-term memory caching system, personality customizations, etc, sounds like a really fun project.
Edit: Is ChatGPT downvoting us?
Tonight, I installed Open Web UI to see what sort of performance I could get out of it.
My entire homelab is a single N100 mini, so it was a bit of a squeeze to add even Gemma3n:e2b onto it.
It did something. Free ChatGPT is better performance, as long as I remember to use placeholder variables. At least for my use case: vibe coding compose.yamls and as a rubber duck / level 0 tech support for troubleshooting. But it did something. I'm probably going to re-test when I upgrade to 32GB of RAM, then nuke the LXC and wait till I have a beefier host.
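For anyone wanting to try the same setup, a minimal compose.yaml for Open Web UI backed by Ollama looks roughly like this (a sketch based on the projects' standard images; port choices and volume names are my own assumptions, check the official docs before relying on it):

```yaml
services:
  ollama:
    image: ollama/ollama                  # model runtime; pull gemma3n:e2b inside it
    volumes:
      - ollama-data:/root/.ollama         # persist downloaded models
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"                       # UI on host port 3000 (arbitrary choice)
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    depends_on:
      - ollama
volumes:
  ollama-data:
```

On a 16GB host the e2b variant is about the practical ceiling, which matches the "bit of a squeeze" experience above.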
case in point: you jacked off all night over your local model and still got a disappointing result