this post was submitted on 23 Feb 2026
583 points (97.6% liked)

Technology

A screenshot of this question was making the rounds last week, but this article covers testing against all the well-known models out there.

It also includes outtakes from the 'reasoning' models.

[–] SuspciousCarrot78@lemmy.world 1 points 3 days ago

I hear you. Agreed.

Have you tried running your own local LLM? Abliterated ones (safety theatre removed) can produce some startling results. As a bonus, newer abliteration methods seem to improve reasoning ability, because the LLM no longer has one foot on the brake and the other on the accelerator.

I noticed that a fair bit in maths reasoning with Qwen 3-4B HIVEMIND. A normal LLM will tie itself in knots trying to give you the perfect answer. An abliterated one will give you a workable answer and say, "I know what you were after, but here's the best IRL approximation."

Bijan did a fun review of Qwen 3-8 Josefied that's entertaining and explains the basic idea:

https://www.youtube.com/watch?v=gr5nl3P4nyM&t=0

[–] Iconoclast@feddit.uk 1 points 2 days ago

> Have you tried running your own local llm?

Nah, I've only messed around with ChatGPT and Grok. My interest in AI originates from the philosophical side of it - mainly the dangers and implications of creating AGI. I'm not tech-savvy enough for anything deeper - I even needed ChatGPT to walk me through installing Linux.

[–] SuspciousCarrot78@lemmy.world 1 points 2 days ago

It's super simple (if you want it to be).

https://www.jan.ai/

https://www.jan.ai/docs/desktop/quickstart
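Once Jan is running, it can expose a local OpenAI-compatible API server, so talking to your model is just an HTTP request. A minimal sketch, assuming the server is enabled in Jan's settings and listening at `localhost:1337` (the port and the `qwen3-4b` model id are assumptions; use whatever your install shows):

```python
import json
import urllib.request

# Assumed endpoint: Jan's local OpenAI-compatible server.
# Enable it in Jan's settings first; your port may differ.
URL = "http://localhost:1337/v1/chat/completions"

def build_payload(prompt, model="qwen3-4b"):
    """Build an OpenAI-style chat completion request body."""
    return {
        "model": model,  # hypothetical model id; use the one loaded in Jan
        "messages": [{"role": "user", "content": prompt}],
    }

def ask(prompt):
    """Send the prompt to the local server and return the reply text."""
    req = urllib.request.Request(
        URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # Just show the request body; call ask() once the server is up.
    print(json.dumps(build_payload("What is 2 + 2?"), indent=2))
```

Nothing here is specific to Jan: any runner that speaks the OpenAI chat-completions format (llama.cpp's server, LM Studio, etc.) would accept the same payload.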

PS: You might like the thing I'm building too. The TL;DR premise is: what if you could make an LLM either tell the truth or lie loudly?

https://codeberg.org/BobbyLLM/llama-conductor