this post was submitted on 14 May 2026
237 points (95.8% liked)

Technology

[–] demonsword@lemmy.world -4 points 2 days ago (3 children)

The “correct” way to use AI for coding (and anything really) is to ask for explanations / tutorials when you can’t find one online, then learn from that.

except the "explanation" will frequently be 100% "hallucinated" bullshit

[–] bilb@lemmy.ml 0 points 19 hours ago* (last edited 19 hours ago) (1 children)

For what it's worth, I've been working on (yet another) ActivityPub-based microblogging application, and LLMs have been enormously helpful and, as far as I can tell, correct. The assistant often cites the AP spec and its extensions, as well as specific implementations from existing major AP apps. It can show me expected outputs (what responses from my app should look like for different requests from other servers) and quickly give context for features like Mastodon's shared inbox. I'm not having it simply generate code, but I think I'm still moving way faster than I otherwise could. I don't recall it ever giving me incorrect information.
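For anyone curious what those expected outputs look like: here's a minimal sketch of a `Create` activity wrapping a `Note`, the shape ActivityPub servers deliver to each other's inboxes. The helper name and URLs are my own placeholders, and a real implementation needs much more (an `id`, a `published` timestamp, HTTP signatures on delivery):

```python
import json

def make_create_note(actor_url, note_text, to=None):
    """Build a minimal ActivityPub Create activity wrapping a Note.

    Field names follow the ActivityStreams 2.0 vocabulary; the actor URL
    is a hypothetical placeholder, not a real account.
    """
    recipients = to or ["https://www.w3.org/ns/activitystreams#Public"]
    note = {
        "type": "Note",
        "attributedTo": actor_url,
        "content": note_text,
        "to": recipients,
    }
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Create",
        "actor": actor_url,
        "object": note,
        "to": recipients,
    }

activity = make_create_note("https://example.social/users/alice", "Hello, fediverse!")
print(json.dumps(activity, indent=2))
```

This is exactly the kind of skeleton where having the model point back at the spec helps, since the vocabulary is easy to get subtly wrong.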

It's the first time I've used an LLM as a tool this way, and I'm pretty impressed with it. I'm using the assistant made available through Kagi.

[–] MangoCats@feddit.it 1 points 17 hours ago (1 children)

Often it cites the AP specs and its extensions

Tip: check those citations yourself before publishing with your name on the product. Yeah, they're usually correct, but do you only "usually" not want to be perceived as a lazy idiot?
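A cheap first pass at that check can even be mechanical. This is just a sketch (the excerpt and the matching rule are my own stand-ins, not from any real tool): flag a citation whenever the quoted text can't actually be found in the page it supposedly came from.

```python
def quote_supported(cited_quote, source_text):
    """Naive check: does the cited quote appear verbatim in the source,
    ignoring case and collapsing whitespace? Paraphrased citations will
    need human eyes either way."""
    norm = lambda s: " ".join(s.lower().split())
    return norm(cited_quote) in norm(source_text)

# Hypothetical excerpt standing in for a fetched spec page:
spec_excerpt = ("The sharedInbox endpoint is used for wide delivery "
                "of publicly addressed activities.")
print(quote_supported("sharedInbox endpoint", spec_excerpt))   # True: the quote is really there
print(quote_supported("inboxes are optional", spec_excerpt))   # False: a fabricated citation
```

It won't catch a citation that quotes the spec accurately but draws the wrong conclusion from it, so it's a filter, not a substitute for reading.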

[–] bilb@lemmy.ml 1 points 25 minutes ago

I get that. I wouldn't publish the code anywhere until an alpha is more or less ready and pretty well tested, and yes, I understand the importance of making sure it behaves in an expected, performant and pro-social manner with the existing compatible fediverse apps.

I'm not too worried about it, but thanks for your genuine concern about my reputation. ;) Since I'm the one writing the code, I'm more worried about the quality of that, if anything.

[–] takeda@lemmy.dbzer0.com 0 points 1 day ago (1 children)

People say the best way to see this is to ask the AI about a subject you're an expert in.

That's not always possible; I've had people tell me "but I'm not an expert at anything." Another way is to ask it about yourself. For example, if you have a Reddit account with some age on it: Gemini has a deal with Reddit and gets fed everything posted there. The first response might even look good, but keep the conversation going (it gets more and more ridiculous) and don't try to correct it; you can watch it make things up.

Since they're feeding it everything, Lemmy might work too.

[–] MangoCats@feddit.it 1 points 17 hours ago (1 children)

I've seen very mixed results depending on which model I'm using. The newer ones, since about November of 2025, have been getting significantly better - but some of the "free class" tools are still using older ones today.

Free Gemini gave me ridiculously bad advice today about how to get through a traffic jam. It also drew the crudest sketch imaginable for a prompt; the same prompt fed to ChatGPT yielded a really nice cartoon panel of basically everything in the prompt, with some nice, appropriate embellishments.

[–] FaceDeer@fedia.io 1 points 15 hours ago (1 children)

I've become rather disillusioned with Gemini's use of search tools lately. It's odd given that it's a Google model; you'd think Google would be at the top of the search engine game. Honestly, DeepSeek has been my go-to lately when I want an answer that's likely to be synthesized from a lot of web searches. I've had it search over a hundred different pages for a generic "how does this work?" sort of query. It didn't read them all, but it casts a wide net and lets me actually see the details. Gemini seems more willing to just tell me what it "thinks" the answer to a question is based off of its training data, which is not a particularly reliable thing for an LLM to do.

[–] MangoCats@feddit.it 2 points 15 hours ago

Gemini seems more willing to just tell me what it “thinks” the answer to a question is based off of its training data, which is not a particularly reliable thing for an LLM to do.

Yeah. I pay for Claude, my company pays even more for Cursor, so comparing them to free Gemini probably isn't fair.

Gemini is very useful for offhand queries while Claude is chewing on a bigger problem, but if it's something that needs complex analysis and/or extensive research, the tools that let you build up a folder full of files related to the task are vastly superior to chatbots. Gemini does have a Claude Code-style command line tool that does that kind of development in a folder; I didn't install it until last week. I gave it a coding problem to work on (look up realtime weather radar data from NOAA, present recent data on a map on a webpage), and it sort of succeeded, but with a poor user experience. Again, I'm in "free mode", which can do quite a bit on a day's allowance of tokens, but I don't feel like their paid modes would be particularly higher quality. If they are, they're doing themselves a tremendous disservice by demoing such substandard performance in free mode.

[–] UltraBlack@lemmy.world 1 points 1 day ago (2 children)

That's why I always ask it to cite sources. It's basically Google at this point, since Google is turning to shit and all the other search engines still aren't quite as good.

[–] ev1lyn@lemmy.world 1 points 1 day ago

Then why not ask just for the sources and read them yourself?

[–] frongt@lemmy.zip 0 points 1 day ago (1 children)

It could very easily use a completely different or hallucinated source.

But a lot of LLM products are now providing source links right in the response. I've found them useful, and hopefully they aren't produced just by feeding the text back in and asking for a link.

[–] SpaceNoodle@lemmy.world 1 points 20 hours ago

That's exactly how those links are produced.