submitted 2 months ago by EndOfLine@lemmy.world to c/asklemmy@lemmy.world

I have been using ChatGPT because it was the big name early on and I have never really looked into any alternatives. With the rapid growth of AI assisted services, I am curious to hear what others are using.

[-] BOFH666@lemmy.world 9 points 2 months ago

Asked several to write a c implementation of some basic networking stuff.

ChatGPT: needed to refine my input, got reasonable output. Complete answers, just compile and run.

Google: the output was just a few snippets, nothing to be used as-is.

MSFT: terrible output, and (no surprise here) the compiled code crashed with null pointer references etc. The worst answers ever.

For simple problems (programming low-level microcontrollers), my go-to will be ChatGPT every time.

Google should get its act together; Microsoft can exit the stage.

[-] Hawk@lemmynsfw.com 2 points 2 months ago

Phi3 is pretty good for the size of the model!

Also subs of the Microsoft libraries used to train models are quite good.

Oh, and Copilot: whether you like it or not, it's quite a technical achievement in terms of response time and accuracy.

[-] TootSweet@lemmy.world 7 points 2 months ago
[-] Susaga@sh.itjust.works 3 points 2 months ago* (last edited 2 months ago)

Yeah, this question is like being asked "what's your favourite STI". They're all unpleasant, so I'd rather not have any.

[-] tee900@lemmy.world 0 points 2 months ago

Why do you say that? ChatGPT has been incredibly useful for me.

[-] FatCat@lemmy.world -5 points 2 months ago

there are no good opinions you have

[-] Rhaedas@fedia.io 7 points 2 months ago

I'm sure many don't have the hardware to run models locally, but for most things a local model will probably work just as well as the full-size ones, plus you can modify them and experiment. Start with Ollama as the base to run them, and see what works best. I tend to primarily use the edited uncensored versions of Llama 3, like the Neural Daredevil variants.
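For anyone wanting to try this, a minimal Ollama quickstart might look like the following sketch (the model name is just an example; check what's available in the Ollama library first):

```shell
# Install Ollama (Linux; see ollama.com for macOS/Windows installers)
curl -fsSL https://ollama.com/install.sh | sh

# Pull a model and chat with it interactively
ollama pull llama3
ollama run llama3 "Explain what a null pointer dereference is."

# Ollama also exposes a local REST API on port 11434
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3", "prompt": "Hello", "stream": false}'
```

The REST API is handy if you want to script against the model instead of using the interactive prompt.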

But just remember at any model's base, even the biggest and best, they are at the core a predictor. This works great for some uses, not so well for others. Don't use a screwdriver for a hammer...at least not until they merge them to be able to do both well.
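The "predictor" point can be illustrated with a toy sketch: a character-level bigram model. This is a deliberately minimal stand-in, not how real LLMs work internally (they predict tokens with neural networks), but the core loop is the same: given context, emit the most likely next symbol.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each character, which characters follow it and how often."""
    follows = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        follows[a][b] += 1
    return follows

def predict_next(follows, ch):
    """Return the most frequent next character after ch, or None if unseen."""
    if ch not in follows:
        return None
    return follows[ch].most_common(1)[0][0]

model = train_bigram("the theory of the thing")
print(predict_next(model, "t"))  # 'h' — every 't' in the training text is followed by 'h'
```

Assistants built on LLMs do this at vastly larger scale, which is why they excel at producing plausible continuations but can't inherently guarantee correctness.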

[-] can@sh.itjust.works 1 points 2 months ago

I've tried to get this on my phone but always hit a roadblock.

[-] jelloeater85@lemmy.world 6 points 2 months ago

No one mentioned Phind or Perplexity, both are niiice.

[-] 0x01@lemmy.ml 4 points 2 months ago

Llama3 local is pretty good

[-] fubarx@lemmy.ml 2 points 2 months ago

I've been using ChatGPT, specialized models on Hugging Face, and a bunch of local ones via Ollama. A colleague who is deep into this says Claude is giving him the best results.

Thing is, it depends on the task. For coding, I've found they all suck. ChatGPT gets you up to a point, then puts out completely wrong stuff. Gemini, Microsoft, and CodeWhisperer put out half-baked rubbish. If you don't already know the domain, it will be frustrating finding the bugs.

For images, I've tried DALL-E for placeholder graphics. Problem is, if you change a single prompt element to refine the output, it will generate completely different images with no way to go back. Same with Adobe generators. Folks have recommended Stability for related images. Will be trying that next.

Most LLMs are just barely acceptable. Good for casual messing around, but I wouldn't bet the business on any of them. Once the novelty wears off, and the CFOs tally up the costs, my prediction is a lot of these are going away.

I only use the ones I self host

[-] DirigibleProtein@aussie.zone 2 points 2 months ago
[-] Hawk@lemmynsfw.com 1 points 2 months ago

Codestral and Yi:34b are pretty good.

[-] 0x30507DE@lemmy.today 1 points 2 months ago

I do a lot of incredibly specific VHDL and 45GS02 asm, so the answer is none.

Even if I didn't do obscure things with obscure languages, the answer'd still be none, because I'd rather spend a few hours learning what the code does and how to use it, instead of just hoping the output runs without knowing what it's trying to do or why.

[-] HubertManne@moist.catsweat.com 1 points 2 months ago

like any other technology whatever is standard or convenient. Not super wild about them in relation to how the rest of the internet and technology has been going.

this post was submitted on 03 Sep 2024
6 points (57.1% liked)
