[–] SleeplessCityLights@programming.dev 95 points 2 days ago (6 children)

I had to explain to three separate family members what it means for an AI to hallucinate. The look of terror on their faces afterwards is proof that people have no idea how "smart" an LLM chatbot really is. They have probably been using one at work for a year, assuming it was accurate.

[–] markovs_gun@lemmy.world 12 points 1 day ago (1 children)

I legitimately don't understand how someone can interact with an LLM for more than 30 minutes and come away thinking it's some kind of super intelligence, or that it can be trusted as a means of gaining knowledge without external verification. Do they just never consider the possibility that it might not be fully accurate, and never bother to test it? I asked ChatGPT all kinds of tough and ambiguous questions the day I got access and very quickly found inaccuracies, common misconceptions, and popular but ideologically motivated answers.

For example (I don't know if this is still the case), if you ask ChatGPT who wrote various books of the Bible, it will give not only the traditional view but specifically the evangelical Christian view on most versions of these questions. That makes sense, since evangelical writers are extremely prolific, but it's simply wrong to reply "Scholars generally believe that the Gospel of Mark was written by a companion of Peter named John Mark" when that view hasn't been favored in academic biblical studies for over 100 years, however traditional it may be. Similarly, asking it questions about early Islamic history gets you the religious views of Ash'ari Sunni Muslims rather than the general scholarly consensus.

[–] echodot@feddit.uk 4 points 1 day ago (1 children)

I mean, I've used AI to write my job-mandated end-of-year self-assessment report. I don't care about it; it's not like they'll give me a pay rise, so I'm not putting effort into it.

The AI says I've led a project related to Windows 11 updates. I haven't, but it looks accurate and no one else will be able to tell it's fake.

So I guess the reason is that they're using the AI to talk about subjects they can't fact-check, so it looks accurate to them.

[–] jtzl@lemmy.zip 1 points 1 day ago

They're really good.*

  • you just gotta know the material yourself so you can spot errors, and you gotta be very specific and take it one step at a time.

Personally, I think the term "AI" is an extreme misnomer. I've taken to calling ChatGPT "next-token prediction." The notion that it's intelligent is absurd. Like, is a dictionary good at words now???
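
For anyone curious what "next-token prediction" actually looks like, here's a minimal sketch using the small open GPT-2 model from Hugging Face as a stand-in (the model, prompt, and token count are just illustrative choices; ChatGPT's models are vastly bigger, but the mechanism is the same: score every possible next token, pick one, append it, repeat):

```python
# Minimal sketch of "next-token prediction": a toy greedy decoding loop.
# Uses GPT-2 via Hugging Face transformers as a small stand-in; ChatGPT's
# models are far larger, but they generate text the same way, one token at a time.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

with torch.no_grad():
    for _ in range(10):                      # generate 10 tokens, one at a time
        logits = model(input_ids).logits     # a score for every token in the vocabulary
        next_id = logits[0, -1].argmax()     # greedily pick the single most likely one
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Production chatbots sample from the probability distribution instead of always taking the top-scoring token, which is part of why the same question can come back with different answers.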

[–] hardcoreufo@lemmy.world 27 points 2 days ago (5 children)

Idk how anyone searches the internet anymore. Search engines all turn up garbage, so I ask an AI. Maybe one time out of 20 it turns up what I'm after better than a search engine would. The rest of the time it runs me in circles that don't work and wastes hours. So then I go back to the search engine and find what I need buried 20 pages deep.

[–] PixelPinecone@lemmy.today 3 points 1 day ago (1 children)

I pay for Kagi search. It’s amazing

[–] hardcoreufo@lemmy.world 1 points 6 hours ago

I do too. It's pretty good, but I feel it's not as good as search engines used to be, though through no fault of its own. I just think garbage sites have paid for SEO and clog up results no matter what.

[–] BarneyPiccolo@lemmy.today 9 points 2 days ago (2 children)

I usually skip the AI blurb because it's so inaccurate, and dig through the listings for the info I'm researching. If I go back and look at the AI blurb after that, I can tell where it pulled its various little factoids from, and occasionally it'll repeat some opinion or speculation as fact.

[–] vaultdweller013@sh.itjust.works 2 points 1 day ago (1 children)

At least fuck duck go is useful for video games specifically, but that one more or less just copy-pastes from the wiki, Reddit, or a forum. It shits the bed with EUV specifically, though.

[–] Typhoon@lemmy.ca 11 points 1 day ago (1 children)

fuck duck go

This is the one time in all of human history where autocorrecting "fuck" to "duck" would've been correct.

Worst part is I'm pretty sure it autocorrected duck to fuck because I've poisoned my phone's autocorrect with many a profanity.

Usually the blurb is pure opinion.

[–] MrScottyTay@sh.itjust.works 12 points 2 days ago (1 children)

It's fucking awful, isn't it. Some day soon, when I can be arsed, I'll have to give one of the paid search engines a go.

I'm currently on Qwant, but I've already noticed a degradation in its results since I started using it at the start of the year.

[–] Holytimes@sh.itjust.works 7 points 1 day ago* (last edited 1 day ago)

The paid options aren't any better. When the well is poisoned, it doesn't matter if your bucket is made of shitty rotting wood or the nicest golden vessel ever to have graced the hands of mankind.

You're getting lead poisoning either way. You just get to give away money for the privilege with one, while the other forces the poisoned water down your throat faster.

[–] ironhydroxide@sh.itjust.works 7 points 2 days ago

Agreed. And the search engines returning AI-generated pages masquerading as websites with real information is precisely why I spun up a SearXNG instance. It actually helps a lot.
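
If anyone wants to script against their own instance, here's a rough sketch of querying SearXNG from Python. It assumes the instance is at http://localhost:8080 and that the JSON output format is enabled in its settings.yml (search: formats: [html, json]); adjust both to your setup.

```python
# Rough sketch: query a self-hosted SearXNG instance from a script.
# Assumes the instance runs at http://localhost:8080 and that the JSON
# output format is enabled in settings.yml (search: formats: [html, json]).
import requests

SEARXNG_URL = "http://localhost:8080/search"   # adjust to your instance

def search(query: str, max_results: int = 5):
    resp = requests.get(
        SEARXNG_URL,
        params={"q": query, "format": "json"},
        timeout=10,
    )
    resp.raise_for_status()
    results = resp.json().get("results", [])
    return [(r.get("title"), r.get("url")) for r in results[:max_results]]

if __name__ == "__main__":
    for title, url in search("python requests timeout"):
        print(f"{title}\n  {url}")
```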

[–] SocialMediaRefugee@lemmy.world 4 points 1 day ago (1 children)

I've asked it for a solution to something and it gives me A. I tell it A doesn't work so it says "Of course!" and gives me B. Then I tell it B doesn't work and it gives me A...

[–] hardcoreufo@lemmy.world 1 points 2 hours ago

I feel like I go through the whole alphabet of options before giving up and rtfming.

I have a friend who constantly sends me videos that get her all riled up. Half the time I patiently explain to her why a video is likely AI or faked some other way. "Notice how it never says where it is taking place? Notice how they never give any specific names?" Fortunately she eventually agrees with me but I feel like I'm teaching critical thinking 101. I then think of the really stupid people out there who refuse to listen to reason.

[–] SocialMediaRefugee@lemmy.world 9 points 1 day ago (1 children)

The results I get from ChatGPT are pretty bad half the time. If I ask for simple code it's pretty good, but ask it about how something works? Nope. All I need to do is slightly rephrase the question and I can get a totally different answer.
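
That sensitivity to phrasing is easy to demonstrate for yourself. Here's a quick sketch, assuming the openai Python package and an API key in the environment (the model name and questions are just examples), that sends the same question worded two different ways and prints both answers:

```python
# Quick sketch: ask the same question phrased two ways and compare the answers.
# Assumes the `openai` package (v1+) and OPENAI_API_KEY set in the environment;
# the model name below is just an example.
from openai import OpenAI

client = OpenAI()

prompts = [
    "How does garbage collection work in Python?",
    "Explain Python's garbage collection mechanism.",
]

for prompt in prompts:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # even at temperature 0, different wording can change the answer
    )
    print(f"Q: {prompt}\nA: {resp.choices[0].message.content}\n{'-' * 40}")
```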

[–] MBech@feddit.dk 1 points 1 day ago

I mainly use it as a search engine, like: "Find me an article that explains how to change a light bulb" kinda shit.