
I've been using TheBloke's Q8 quant of https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B, but now I think this one (https://huggingface.co/Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-1-7B) is killing it. Has anyone else tested it?
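
For anyone who wants to give it a spin, here's a minimal sketch of one way to run TheBloke's Q8_0 GGUF with llama-cpp-python. The filename is an assumption based on TheBloke's usual naming, and the ChatML wrapper is what OpenHermes 2.5 expects; adjust the path and sampling to taste.

```python
# Minimal sketch: running a Q8_0 GGUF of OpenHermes-2.5-Mistral-7B with
# llama-cpp-python. The model path/filename is an assumption based on
# TheBloke's usual naming; download the GGUF from Hugging Face first.
from llama_cpp import Llama

llm = Llama(
    model_path="./openhermes-2.5-mistral-7b.Q8_0.gguf",  # adjust to where you saved it
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if you have the VRAM; 0 for CPU-only
)

# OpenHermes 2.5 is trained on ChatML, so wrap the prompt accordingly.
prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Explain the difference between a GGUF and an exl2 quant in two sentences.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

out = llm(prompt, max_tokens=256, temperature=0.7, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```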

noneabove1182@sh.itjust.works 3 points 11 months ago (last edited 11 months ago)

Hmm, I had interesting results from both of those base models but haven't tried the combo yet; I'll start some exllamav2 quants to test.

What's it doing well at?

Quant link for anyone who wants it: https://huggingface.co/bartowski/OpenHermes-2.5-neural-chat-7b-v3-1-7B-exl2
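
If anyone wants to load the exl2 quant, here's a rough sketch using exllamav2's Python API, modeled on the project's example scripts. Class names and arguments may differ slightly between exllamav2 versions, and the local directory path is just a placeholder for wherever you download the quant.

```python
# Rough sketch of loading an exl2 quant (e.g. the repo linked above, downloaded
# locally) with exllamav2's Python API. Based on the project's example scripts;
# exact class names/arguments may vary between exllamav2 versions.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

model_dir = "./OpenHermes-2.5-neural-chat-7b-v3-1-7B-exl2"  # local download of the quant

config = ExLlamaV2Config()
config.model_dir = model_dir
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)           # split layers across available GPUs automatically
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7
settings.top_p = 0.9

# Both parent models use ChatML, so the merge should too.
prompt = (
    "<|im_start|>user\nGive me three uses for a 7B model on a single GPU.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
print(generator.generate_simple(prompt, settings, 256))
```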

Stopwatch8200@lemmynsfw.com 2 points 11 months ago (last edited 11 months ago)

I haven't tried neural-chat on its own, but the combined model seems (anecdotally) to be better than OH2.5/Mistral at following instructions and reasoning; some of the overall quirks with llama.cpp seem to be ironed out with it too.
