this post was submitted on 26 Apr 2025
582 points (97.7% liked)
Microblog Memes
Inference costs are very, very low. You can run Mistral Small 24B finetunes that are better than GPT-4o and actually quite usable on your own local machine.
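For context, a minimal sketch of what "running it on your own local machine" can look like with llama-cpp-python; the GGUF file name and settings are placeholders I've chosen for illustration, not something from the original comment:

```python
# Minimal local-inference sketch using llama-cpp-python.
# The model path is a hypothetical local GGUF file of a Mistral Small 24B finetune.
from llama_cpp import Llama

llm = Llama(
    model_path="./mistral-small-24b-finetune.Q4_K_M.gguf",  # placeholder filename
    n_ctx=8192,        # context window; lower it if you run out of RAM/VRAM
    n_gpu_layers=-1,   # offload every layer to the GPU if one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain the trade-offs of local inference."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```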
As for training costs, Meta's Llama team offsets its emissions through environmental programs, which is greener than 99.9% of the companies behind the products you use.
TL;DR: don't use ClosedAI; use Mistral or other FOSS projects.
EDIT: I recommend cognitivecomputations' Dolphin 3.0 Mistral Small R1 fine-tune in particular. Admittedly I've only used it for mathematical workloads, but it has been exceedingly good at my tasks so far. The training set and the model are both FOSS and uncensored. You'll need a custom system prompt to activate the chain-of-thought reasoning, and a comparatively low temperature to keep the model from creating logic loops for itself (the 0.1–0.4 range should be OK).
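A rough sketch of what that setup can look like, again with llama-cpp-python; the system-prompt wording, the file name, and the `<think>` tag convention are illustrative assumptions, not quoted from the Dolphin model card:

```python
# Sketch: custom system prompt to trigger chain-of-thought, plus a low temperature.
from llama_cpp import Llama

llm = Llama(model_path="./dolphin-3.0-r1-mistral-small.gguf", n_ctx=8192)  # hypothetical file

system_prompt = (
    "You are Dolphin, a helpful reasoning assistant. "
    "Think through the problem step by step inside <think></think> tags, "
    "then give your final answer."
)  # assumed wording; check the model card for the recommended prompt

resp = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Integrate x * e^x with respect to x."},
    ],
    temperature=0.2,   # low end of the 0.1-0.4 range suggested above
    max_tokens=512,
)
print(resp["choices"][0]["message"]["content"])
```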
Self-hosting models is probably the best alternative to ChatGPT.
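One hedged illustration of why self-hosting can slot in as a ChatGPT replacement: most local servers (llama-cpp-python's server, Ollama, vLLM, etc.) expose an OpenAI-compatible API, so the standard client only needs a different base URL. The endpoint and model name below are placeholders:

```python
# Point the standard OpenAI client at a self-hosted, OpenAI-compatible server.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # hypothetical local endpoint
    api_key="not-needed-locally",         # most local servers ignore the key
)

reply = client.chat.completions.create(
    model="local-mistral-small-24b",      # placeholder id for whatever model the server loads
    messages=[{"role": "user", "content": "Hello from my own hardware!"}],
)
print(reply.choices[0].message.content)
```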