598 points · submitted 2 months ago* (last edited 2 months ago) by RmDebArc_5@sh.itjust.works to c/technology@lemmy.world

Surprised pikachu face

[-] TriflingToad@lemmy.world 152 points 2 months ago

Reminder: there are locally run LLMs. Right now is a vital time for open source to fight against closed source in the AI arms race.

https://www.nomic.ai/gpt4all
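For anyone who wants to see how little is involved, here's a minimal sketch using GPT4All's Python bindings; the model filename is just an example from their catalog, swap in whatever the model list offers:

```python
# Minimal local-LLM sketch using the gpt4all Python bindings (pip install gpt4all).
# The model filename below is an example; GPT4All downloads it on first use.
from gpt4all import GPT4All

model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")  # small model, fits in ~4 GB of RAM
with model.chat_session():
    reply = model.generate("Explain in one paragraph why local LLMs matter.", max_tokens=200)
    print(reply)
```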

[-] mayo@lemmy.world 39 points 2 months ago

Another good resource to help people find models https://llm.extractum.io

[-] Blaster_M@lemmy.world 20 points 2 months ago
[-] utopiah@lemmy.world 10 points 2 months ago

I like Ollama and recommend it for tinkering, but I admit this "LLM Explorer" is quite neat thanks to sections like "LLMs Fit 16GB VRAM".

Ollama just works, but it doesn't help you pick which model best fits your needs.
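For reference, once Ollama is running it exposes a local HTTP API, so a quick sketch like this is enough to query whatever model you've pulled (the model name here is just an example):

```python
# Minimal sketch against Ollama's local HTTP API. Ollama must already be running,
# and the model pulled first, e.g. with `ollama pull llama3`.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": "Which local model fits in 16 GB of VRAM?", "stream": False},
    timeout=300,
)
print(resp.json()["response"])
```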

[-] Knock_Knock_Lemmy_In@lemmy.world 2 points 2 months ago

> pick which model best fits your needs.

What need do I have to put in the effort of installing all this locally? Websites win in terms of convenience.

[-] utopiah@lemmy.world 2 points 2 months ago

I don't think I understand your point. Are you saying there is no benefit to running locally, and that websites or APIs are more convenient?

[-] Knock_Knock_Lemmy_In@lemmy.world 1 points 2 months ago

I already have Stable Diffusion on a local machine. I was trying to find motivation to install an LLM locally. You answered my question in a different response:

> use cases where customization helps while quality doesn't matter much due to scale, i.e. spam, then LLMs and related tools are amazing.

[-] morriscox@lemmy.world 2 points 2 months ago

I want to work on my stuff in peace and in private without worrying about a company grabbing my stuff and using it for themselves and to give/sell it to other outfits, including the government. "If you have nothing to hide..." is bullshit and needs to die.

[-] Knock_Knock_Lemmy_In@lemmy.world 1 points 2 months ago

Good point. Everything you feed into ChatGPT is stored for future reference.

[-] T156@lemmy.world 9 points 2 months ago

At the same time, the trouble with local LLMs is that they're very resource heavy. Your average household computer isn't going to be able to run one at a usable speed.

[-] floquant@lemmy.dbzer0.com 20 points 2 months ago

Which, you know, is fine. Maybe if people had an idea of how much power is required to run them, they would think twice before using a gigawatt to output a poem about farts, and perhaps even wonder how OpenAI can offer that for free. Btw, a 7B model should run fine on any PC with at least 16GB of RAM and a modern processor/GPU.
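To put rough numbers on that, here's a back-of-envelope sketch for a 7B model, counting weights only and ignoring the KV cache and runtime overhead, which is why 16GB of RAM is comfortably enough once the model is quantized:

```python
# Back-of-envelope memory estimate for a 7B-parameter model: weights only,
# ignoring the KV cache and runtime overhead, so real usage is somewhat higher.
params = 7e9
for label, bytes_per_param in [("fp16", 2), ("int8", 1), ("4-bit", 0.5)]:
    gib = params * bytes_per_param / 2**30
    print(f"{label:>5}: ~{gib:4.1f} GiB for the weights")
```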

[-] RmDebArc_5@sh.itjust.works 3 points 2 months ago

Phi-3 can run on pretty low specs (it needs about 4 GB of RAM) and has relatively good output.

[-] TriflingToad@lemmy.world 1 points 2 months ago

It's a lot slower than ChatGPT, but on my integrated-graphics i7 laptop it ran decently, definitely enough to be usable. Also there are different models to play around with; some are faster but worse, and some are smarter but slower.
