
My best list of free ChatGPT and other models. No signups required.

[-] db0@lemmy.dbzer0.com 90 points 1 year ago* (last edited 1 year ago)

You don't need to pirate OpenAI. I've built the AI Horde so y'all can use it without any workarounds or shenanigans, and you can use your PCs to help others as well.

Here's an LLM client you can run directly in your browser: https://lite.koboldai.net

[-] Treevan@aussie.zone 9 points 1 year ago

I had an interesting result.

I posed a simple question, as I did with all the other AIs, with "airoboros-65B-gpt4-1.4-GPTQ for 13 kudos in 369.6 seconds". It was a bit of a wait, but I understand why.

It gave me, word for word, a comment on what I assume is a blog post from someone named Melissa. The topic was related, but only barely.

Which LLM do you recommend for questions about a subject? I looked in the FAQ to see if there was a guide to the choices.

[-] db0@lemmy.dbzer0.com 7 points 1 year ago

Unfortunately, I'm not an expert in LLMs, so I don't know. I suggest you contact the KoboldAI community; they should be able to point you in the right direction.

[-] Treevan@aussie.zone 3 points 1 year ago

Thank you. Will do.

I kept playing and tried the scenarios and was getting closer.

[-] some_guy@lemmy.sdf.org 4 points 1 year ago

Just tested. Thanks for building and sharing!

[-] Steeve@lemmy.ca 2 points 1 year ago

Aren't KoboldAI models on par with GPT-3? Why not just use ChatGPT then?

AI Horde looks dope for image generation though!

[-] webghost0101@lemmy.fmhy.ml 7 points 1 year ago

Kobold is a program for running local LLMs. Some seem on par with GPT-3, but normally you're going to need a very beefy system, and even then they run slowly.

The benefit is rather clear: less centralized and free from strict policies. But GPT-3 is also miles away from GPT-3.5. Exponential growth FTW. I have yet to see something as good and fast as ChatGPT.

[-] jcg@halubilo.social 3 points 1 year ago

I've always wondered how it's possible. No way they've got some crazy software optimisations that nobody else can replicate, right? They've got to just be throwing a ridiculous amount of compute power at every request?

[-] webghost0101@lemmy.fmhy.ml 4 points 1 year ago* (last edited 1 year ago)

Well, there are two things.

First there's speed, for which they do indeed rely on many thousands of high-end industrial Nvidia GPUs. And since the $10 billion investment from Microsoft, they've likely expanded that capacity. I've read somewhere that ChatGPT costs about $700,000 a day to keep running.

There are a few other tricks and caveats here though, like decreasing the quality of the output when there is high load.

For that quality of output they deserve a lot of credit, because they train the models really well and continuously manage to improve their systems to produce even higher-quality, more creative outputs.

I don't think GPT-4 is the biggest model out there, but it does appear to be the best that's available.

I can run a small LLM at home that is much, much faster than ChatGPT... that is, if I want to generate some unintelligent nonsense.

Likewise, there might be a way to redesign GPT-4 to run on a consumer graphics card with high-quality output... if you don't mind waiting a week for a single character to be generated.

I actually think some of the open-source locally runnable LLMs like LLaMA, Vicuna, and Orca are much more impressive if you judge them on quality versus power requirements.

[-] djmarcone@kbin.social 1 points 1 year ago

Checking it out. How come I can't paste my API key into the field on the options tab? I've got to type it out?

[-] Infiltrated_ad8271@kbin.social 23 points 1 year ago* (last edited 1 year ago)

AnonChatGPT should stop being recommended; it really sucks. It has a VERY strict character limit, immediately forgets/ignores the context, requires reCAPTCHA, and the "anon" part of the name is obviously fake if you read the privacy policy.

[-] Treevan@aussie.zone 22 points 1 year ago* (last edited 1 year ago)

Cheers for this. I tried a few of them while I was waiting around and had one excellent result. I'm a near-expert in one topic, and I often test AIs against my knowledge for fun.

Perplexity.AI did the best I've seen: it sourced its arguments, which, finally, weren't wrong, so if I needed to, I could actually learn more about what it was talking about. It's not 100%, but the other AIs are so bad at the topic I test them on that I always give up immediately.

I wouldn't have seen it if it wasn't for this post so thank you very much.

[-] Treevan@aussie.zone 15 points 1 year ago* (last edited 1 year ago)

I don't know if anyone will read this, but I did further testing on Perplexity when I got home. This is probably not the right spot for it.

I tried a trickier question and then followed the suggested prompts (it suggests questions related to the original if you're unsure how to prompt it next). The prompts were intelligent and were probably the next questions I would have asked while learning about this topic. In the next answer, it literally quoted something I wrote, almost word for word, on the exact subject, which, according to me (of course), would be the correct answer.

I've never had an AI reference even a single thing I've written. I had prompted it into a general area where the things I'd written existed, so it should be expected, but it made the connection almost instantly and answered the question 100% accurately.

As much as I hate it, well done Skynet.

Edit: After further testing, I can catch it out regularly enough, but still, if I had to explain the topic generally to someone via email, I'd probably recommend it rather than waste time typing it all out. I've just put myself out of a job.

[-] WheeGeetheCat@sh.itjust.works 2 points 1 year ago

I'm curious what your area of expertise is. I'm interested in using AI as a programming assistant, but that seems an entirely different skillset than, say, general language tasks. I assume some models will be good in one area and some in another.

[-] Treevan@aussie.zone 3 points 1 year ago

Mine is in plants, which a lot of models seem to struggle with. It's not the science side, it's the application side, so there's another layer of intelligence the AI has to break through to appeal to me (i.e., answer my particular questions).

I tested it again with something even more particular, unique to an Australian plant, and it was way off. I think I may have been one of the only people ever to post a particular technique to Reddit, and the AI mustn't be searching there, as it didn't know about it even when asked directly. To its credit, it did give a good suggestion on who to contact to find out more.

[-] zurneyor@lemmy.dbzer0.com 1 points 1 year ago

How has your experience been using it as a programming assistant? I’m trying to do this too

[-] WheeGeetheCat@sh.itjust.works 2 points 1 year ago

Very hit and miss. It's okay if I'm trying to learn something new, and once or twice it has found and suggested a fix that I probably wouldn't have thought of otherwise. But it also makes up methods and syntax, and then you're playing whack-a-mole to figure out where it hallucinated.

I think right now it's not really boosting my productivity much, but I think in another 5ish years it could be better.

[-] PixelPassport@chat.maiion.com 17 points 1 year ago

Pretty cool. They all seem to be GPT-3.5 at best, but it's really nice not to have to sign in.

[-] Icarus@lemmy.ml 9 points 1 year ago

Most of the links don't even work, and the ones that do are terrible. Why so many upvotes?

[-] On@kbin.social 11 points 1 year ago

And the title. Hacked? What's being hacked here? They're all using GPT on the backend.

Maybe bot votes? ㄟ(ツ)ㄏ

[-] Sheltac@lemmy.world 7 points 1 year ago

Any news on how these tend to perform compared to GPT-4? I finally decided to toss OpenAI 20 quid to try it out for a month, and it's pretty impressive.

[-] rikudou@lemmings.world 2 points 1 year ago

@ChatGPT@lemmings.world Say hi!

[-] ChatGPT@lemmings.world 4 points 1 year ago

Hello! How can I assist you today?

[-] Cheems@lemmy.world 2 points 1 year ago

Give me a recipe for a cheese melt with an interesting twist

[-] sneezycat@sopuli.xyz 24 points 1 year ago

Sure! You'll need (1 serving):

-1 bread

-2 cheese

Pour the bread in a plate. Drip the cheese on top. Put in the oven at 1600°C for 8 seconds.

Take the plate out of the oven, dip your fingers on the melt. Enjoy!

[-] ThreeHalflings@lemmy.world 14 points 1 year ago

Start with -2 bread and add one bread.

[-] VonReposti@feddit.dk 4 points 1 year ago

You get one bread and give a homeless person two bread.

[-] sudo@lemmy.fmhy.ml 7 points 1 year ago

Great recipe! I didn't have any cheese, so I substituted peanut butter, and thought some jelly might go well with it too, so I added that in as well. My oven only goes to 1500, so I put it in the freezer instead.

[-] rikudou@lemmings.world 4 points 1 year ago* (last edited 1 year ago)

You need to mention it like I did.

[-] Oursunisdying@lemmy.world 4 points 1 year ago* (last edited 1 year ago)

!ChatGPT@lemmings.world

When I clicked your link it opened my email client. Testing to see if this link works better in the Voyager app.

Edit: Maybe you were linking a user and not a community, though…

[-] rikudou@lemmings.world 3 points 1 year ago

Yep, it's a user.

[-] TraditionalMuslim@reddthat.com 6 points 1 year ago

Does GPT4ALL compare to GPT-4 in any way?

[-] XEAL@lemm.ee 6 points 1 year ago

It compares more to a not-so-good GPT-3 model.

[-] Black_Gulaman@lemmy.dbzer0.com 4 points 1 year ago

My only question is:

how are the guardrails?

[-] speck@kbin.social 3 points 1 year ago

Is it cost-prohibitive to run your own ChatGPT?

[-] quirzle@kbin.social 8 points 1 year ago

I've tinkered with a Discord bot using the official GPT-3.5 API. It's astonishingly cheap. Using the 3.5-turbo model, I've never cracked $1 in a month and am usually just a couple of cents a week. Obviously this would be different if you're running a business with it, but for personal use like answering questions, writing short blurbs, and entertaining us while drunk... it's not bad at all in my experience.

[-] XEAL@lemm.ee 5 points 1 year ago

You're billed per token. The GPT-3.5-Turbo price per 1K tokens is quite low now.
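That per-token billing model is just simple arithmetic. A minimal sketch (the rates below are illustrative assumptions, not current prices; check the official pricing page):

```python
# Rough cost estimate for per-token API billing.
# Assumed example rates, in USD per 1K tokens -- NOT current prices.
PRICE_PER_1K_INPUT = 0.0015   # prompt tokens
PRICE_PER_1K_OUTPUT = 0.002   # completion tokens

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Return the estimated USD cost of a single API call."""
    return (prompt_tokens / 1000 * PRICE_PER_1K_INPUT
            + completion_tokens / 1000 * PRICE_PER_1K_OUTPUT)

# e.g. a hefty 2,000-token prompt with a 500-token reply:
print(f"${estimate_cost(2000, 500):.4f}")
```

At rates like these you'd need hundreds of long conversations a day before the bill becomes noticeable, which matches the "couple cents a week" experience above.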

I kinda made my own custom ChatGPT with Python (and LOTS of coding help from the web version of ChatGPT). It evolved from a shitty few-line script to a version that uses LangChain, has access to custom tools, including custom data indexes, and has persistent memory.

What ramps up the cost is things like how much context (memory) you want the chatbot to have. If you use something like a recursive summarizer, which summarizes a text chunk by chunk, over and over, until it is below a set length, that also takes many API calls, and each consumes tokens. Also, if you want your chatbot to use custom info you've provided, solutions like LlamaIndex are easy to use but require quite a few tokens per query.
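The recursive-summarizer structure mentioned above can be sketched like this. The `summarize_chunk` below is a stand-in placeholder; in a real version it would be one LLM API call per chunk, which is exactly why each pass costs tokens:

```python
# Sketch of a recursive summarizer: split into chunks, summarize each,
# join the results, and repeat until the text fits the target length.

def chunk(text: str, size: int) -> list[str]:
    """Split text into consecutive fixed-size chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def summarize_chunk(text: str) -> str:
    """Placeholder: keep the first half. A real version calls an LLM here."""
    return text[:max(1, len(text) // 2)]

def recursive_summarize(text: str, max_len: int, chunk_size: int = 1000) -> str:
    """Each pass costs one (hypothetical) API call per chunk."""
    while len(text) > max_len:
        text = " ".join(summarize_chunk(c) for c in chunk(text, chunk_size))
    return text
```

For a long document this makes several full passes, so the token spend grows with both document length and how aggressively you want it compressed.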

In my worst month, with lots of usage due to testing and before the latest price drop, I reached $70.

[-] Aidan@lemm.ee 3 points 1 year ago

I'm working on a similar project right now with zero coding knowledge. I've been trying to find something like LangChain all day. I built (by which I mean I coached GPT into building) a web scraper script that can interact with the web to perform searches and then parse the results, but the outputs are getting too big to manage in a hacked-together terminal interface.

How are you doing the UI? That's the biggest puzzle I've found that isn't fun to solve. I've been looking at React as a way to do it.
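For the "parse the results" half of a scraper like that, the standard library alone gets you surprisingly far. A minimal sketch (not the actual script from the comment) that pulls links and their visible text out of a results page:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect (href, visible text) pairs from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

# Feed it fetched HTML (here, a tiny inline example):
page = '<p>Results: <a href="https://example.com">Example result</a></p>'
parser = LinkExtractor()
parser.feed(page)
print(parser.links)  # [('https://example.com', 'Example result')]
```

Reducing each result to a short (url, title) tuple like this is one way to keep the outputs small enough to manage in a terminal interface.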

[-] speck@kbin.social 3 points 1 year ago

Loved the depth of this info, although it's over my head. But I kind of understood? I have a project to focus on for the next while. But I hear that it's possible to do, and that's exciting.

[-] QuarterlySushi@kbin.social 4 points 1 year ago

Of the language models you can run locally, I've found them awkward to use and not performing too well. If anyone knows of newer ones that do a better job, I'd love to know.

[-] mr_right@lemmy.dbzer0.com 1 points 1 year ago

Nice one, good to see.

this post was submitted on 11 Jul 2023
352 points (96.3% liked)

Piracy: ꜱᴀɪʟ ᴛʜᴇ ʜɪɢʜ ꜱᴇᴀꜱ
