[-] brucethemoose@lemmy.world 9 points 1 month ago* (last edited 1 month ago)

It's useful.

I keep Qwen 32B loaded on my desktop pretty much whenever it's on, as an (unreliable) assistant to analyze or parse big texts, to do quick chores or write scripts, to bounce ideas off of, or even as an offline replacement for Google Translate (though I specifically use Aya 32B for that).

It does "feel" different when the LLM is local: you can manipulate the prompt syntax so easily, hammer it with multiple requests that come back really fast when it seems to get something wrong, and not worry about refusals or data leakage and such.
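As a sketch of what "manipulating the prompt syntax" means with a local model: Qwen-family models use the ChatML turn format, so you can assemble the raw prompt string yourself instead of going through a provider's fixed chat API (the system/user text here is just placeholder content):

```python
# Hand-rolling a ChatML prompt (the turn format Qwen models use).
# With a local model you feed this raw string to the runtime directly,
# so every token of the template is under your control.
def build_chatml(system: str, user: str) -> str:
    """Assemble a raw ChatML prompt, leaving the assistant turn open."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml("You are a terse code reviewer.",
                      "Summarize this diff in one sentence.")
```

Because the assistant turn is left open, you can also pre-fill the start of the model's reply, which is one of the tricks hosted APIs make awkward.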

[-] brbposting@sh.itjust.works 3 points 1 month ago

Attractive. You got some pretty solid specs?

Rue the day I cheaped out on RAM. soldered RAMmmm

[-] brucethemoose@lemmy.world 1 points 1 month ago* (last edited 1 month ago)

Soldered is better! It's sometimes faster, and definitely faster if it happens to be LPDDR.

But TBH the only thing that really matters is "how much VRAM do you have," and Qwen 32B slots in at 24GB, or maybe 16GB if the GPU is totally empty and you tune your quantization carefully. And the cheapest way to get that (until 2025) is a used MI60, P40 or 3090.
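The sizing arithmetic behind those numbers can be sketched with a rough rule of thumb: weight memory ≈ parameters × bits-per-weight / 8, plus headroom for KV cache and activations (the 1.2× overhead factor below is a ballpark assumption, not a measured figure):

```python
# Rough VRAM estimate for a quantized LLM.
# weights_gb = params (billions) * bits per weight / 8
# overhead is an assumed ~20% cushion for KV cache and activations.
def est_vram_gb(params_billion: float, bits_per_weight: float,
                overhead: float = 1.2) -> float:
    weights_gb = params_billion * bits_per_weight / 8
    return weights_gb * overhead

q4 = est_vram_gb(32, 4.0)  # ~19 GB: comfortable on a 24GB card
q3 = est_vram_gb(32, 3.0)  # ~14 GB: squeezes onto 16GB if the GPU is otherwise idle
```

This is why a 32B model at ~4-bit quantization targets 24GB cards, and why dropping to ~3-bit is what makes 16GB plausible at a quality cost.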

this post was submitted on 28 Oct 2024
1471 points (98.8% liked)
