this post was submitted on 14 Mar 2026
104 points (97.3% liked)

Technology

top 25 comments
[–] neclimdul@lemmy.world 6 points 4 hours ago (1 children)

A lot of times I feel like it's more than lazy; it's rude.

Either it's something I'm supposed to know, and you think I'm dumber than ChatGPT or too dumb to look it up myself.

Or it's something you're supposed to know, and you don't think I'm worth the time it takes to give me your opinion.

Either way, it feels like a fuck you.

[–] Psythik@lemmy.world 0 points 3 hours ago* (last edited 3 hours ago) (1 children)
[–] XeroxCool@lemmy.world 1 points 1 hour ago

Two dumb

To serious

[–] YetAnotherNerd@sopuli.xyz 55 points 9 hours ago (2 children)

I’m getting that more and more. “I asked ChatGPT and it said”. Dude, we work for the same company and I could have typed that in, and maybe I did. I wanted your experience with it, that’s why I asked you.

Make sure they know they just lost input rights the next time. No, I don't ask Harry; he just quoted GPT last time, and I'd already asked it this time, so there was no reason to involve him. Nothing is worse for a lead than people not wanting them to lead because they've abdicated the job to spicy autocorrect.

[–] AliasAKA@lemmy.world 2 points 4 hours ago

I think this is the way. After enough rounds of "[coworker] wasn't asked because they only respond with LLMs, so I just ask the LLM directly. I'm not sure what [coworker]'s expertise is anymore, so I just don't consult them," I suspect the coworker may in fact stop responding with LLMs.

[–] Zos_Kia@jlai.lu 12 points 8 hours ago (1 children)

Dude, we work for the same company and I could have typed that in, and maybe I did. I wanted your experience with it, that’s why I asked you.

To me it's like sending a "let me google that for you" link to answer a question. It's just bad form. I don't want your whole reasoning trace, man; I just want to know what you understand of it, and maybe you'll catch some detail I'm missing. It's simple: I won't read LLM output. My colleagues know it, and I get shit for it, but no, I am not digesting this material for you. Give me a three-bullet-point version in your own words. The point is not just the data exchange; it's also to make sure you are aware of the answer and we have a common truth.

Or failing that, just give me the fucking prompt and at least i'll know if you understand the question.

[–] ulterno@programming.dev 1 points 5 hours ago

Or failing that, just give me the fucking prompt and at least I’ll know if you understand the question.

This one's really nice. I should make this my go to response to anyone doing that.

[–] d00ery@lemmy.world 11 points 7 hours ago* (last edited 7 hours ago) (3 children)

Someone literally copy and pasted a whole ChatGPT comment in an email reply to some questions I'd asked them. I was somewhat insulted.

[–] NekoKoneko@lemmy.world 12 points 4 hours ago

You're right to feel insulted. LLMs are verbose and unreliable often enough that you have to check any work that comes out (or be negligent).

So what's usually happening is that someone is saving their time by spending yours. They saved the time normally needed to write a thoughtful reply by shifting the time and cognitive cost of reading and verifying onto you, with AI as an excuse (often not without condescension, a kind of "virtue signaling" driven by C-suite AI boosting). The slop output looks like "work product," but it is neither: it took no work, and it is a facade of a "product" because it's unverified.

They are being selfish, and it is objectively an insulting act.

[–] Armok_the_bunny@lemmy.world 0 points 3 hours ago

Put them on a list where any and every email they send you gets fed into GPT and replied to without you ever reading it; then, to make sure they know, explain what's happening in your signature.

[–] EncryptKeeper@lemmy.world 3 points 6 hours ago (1 children)

I got this response from a 70+ Catholic Priest. Quite literally nothing in this world is sacred or real anymore.

[–] ulterno@programming.dev 4 points 5 hours ago

Considering that despite going over level 70 he stuck with Catholic Priest instead of Saint, Warlock, or Archmage, it should already have you questioning his decision-making ability.

[–] RegularJoe@lemmy.world 9 points 10 hours ago

ChatGPT isn’t on the team.

Except that when someone pastes “ChatGPT thinks that {wall of AI-generated text}”

That person put ChatGPT on the team. And if there was no human input, the competition is free to use it and mock it word for word. Use fear, uncertainty, and doubt to convince your team: if it is published, anyone can use it, including your competition.

The U.S. Copyright Office’s January 2025 report on AI and copyrightability reaffirms the longstanding principle that copyright protection is reserved for works of human authorship. Outputs created entirely by generative artificial intelligence (AI), with no human creative input, are not eligible for copyright protection.

https://natlawreview.com/article/copyright-offices-latest-guidance-ai-and-copyrightability

[–] jbloggs777@discuss.tchncs.de 0 points 9 hours ago (2 children)

Sure... copy & paste is copy & paste.

However, LLMs can help to formulate a scattered braindump of thoughts and opinions into a coherent argument / position, fact check claims, and help to highlight faulty thinking.

I am happy if someone uses AI first to come up with a coherent message, bug report, or question.

I am annoyed if it's ill-researched/understood nonsense, AI assisted or not.

Within my company, I am contributing to an AI-tailored knowledge base, so that people (and AI) can efficiently learn just-in-time.

[–] magnetosphere@fedia.io 11 points 7 hours ago (3 children)

Until they solve the AI hallucination problem, I’ll never be able to trust it.

[–] frongt@lemmy.zip 2 points 6 hours ago (2 children)

It's a feature of text prediction, not a bug. They could fix it, but that would mean drastically increasing the size of the context of each piece of information (no idea what it's called).

[–] Truscape@lemmy.blahaj.zone 3 points 6 hours ago* (last edited 6 hours ago)

I believe it's just complexity and token/compute usage.

You end up chasing diminishing returns as well (100% or even 95% accuracy is just not possible for certain areas of study, especially for niche topics).

It's also 100% unfixable as a premise for the technology. I can enjoy an upscaling algorithm for my retro games to look more detailed at the cost of an odd artifact, but I sure as shit am not taking that risk for information gathering and general study.
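The "feature of text prediction" point in the comments above can be made concrete with a toy sketch (this is not any real model's code, and the vocabulary and scores are invented): a sampler that draws the next token from a softmax distribution will, with non-zero probability, emit a plausible-but-wrong continuation no matter how strongly the model favors the right one.

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Convert raw scores into a probability distribution."""
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Toy vocabulary: one "correct" token and two plausible-but-wrong ones.
vocab = ["Paris", "Lyon", "Berlin"]
logits = [4.0, 1.0, 0.5]  # the model scores "Paris" highest

probs = softmax(logits)

# Every token keeps a non-zero probability, so over many draws the
# wrong answers are occasionally emitted -- sampling can't rule them
# out, only make them rarer.
random.seed(0)
samples = [random.choices(vocab, weights=probs)[0] for _ in range(1000)]
wrong = sum(1 for s in samples if s != "Paris")
print(f"wrong continuations: {wrong}/1000")
```

Lowering the temperature sharpens the distribution and makes wrong draws rarer, but as long as sampling is used, the tail never reaches exactly zero.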

[–] magnetosphere@fedia.io 1 points 6 hours ago

I’m not knowledgeable enough to dispute your point. To the end user, though, the result is equally unreliable.

[–] ulterno@programming.dev 1 points 5 hours ago (1 children)

That doesn't seem like a solvable thingy.
People tend to make stuff up, too. The difference being that the bluff is revealed in non-verbal communication.

[–] magnetosphere@fedia.io -1 points 5 hours ago (1 children)

Yeah, but we’ve known that about people since forever. Computers are expected to be reliable.

If hallucinations aren’t a solvable problem, then either AI is impossible, or we’re going about it the wrong way.

[–] ulterno@programming.dev 0 points 3 hours ago

AI is pretty much possible; we are just thinking about it the wrong way.

We are expecting AI to have the three bests of both worlds:

  • High I/O ability: we get that from computers.
  • Determinism and correctness: computers have always had a high level of determinism, but never correctness, because a computer does not know what is correct. (This boils down to the question someone once put to a computing pioneer: "If I enter the wrong numbers, will I still get the correct answer?")
  • Intelligence and thought: intelligence is a perception. AI will always have a lower depth of thought than us as long as it is dependent upon us.

So we only get one best of the computer world. In exchange for some of the human world, we have to take on one worst of the computer world: we lose determinism, because we rely on the model being fuzzy at a higher level.

Of course, I don't mean "determinism" in the exact and full sense. The LLM is still built on top of a computer, so for the same internal saved state and the same external input (including any randomizing functions that might be used), the output will still be the same. But you can't get the kind of logical determinism you expect from normal computer operations.
A dumbed-down example to get my thought across: you can use any of a + b, ADD(A,B), or SUM(A:B) and still get the same result.
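The distinction drawn above can be sketched in a few lines, with entirely made-up names: mechanically, a seeded sampler is perfectly reproducible (same saved state plus same input gives the same output), but it offers nothing like the logical determinism of a + b versus ADD(A,B), where different but equivalent formulations are guaranteed to agree.

```python
import random

def sample_reply(prompt: str, seed: int) -> str:
    """Toy stand-in for an LLM: pseudo-randomly pick a canned reply.

    Seeding the RNG from (seed, prompt) makes the function mechanically
    deterministic: identical state and input always reproduce the output.
    """
    rng = random.Random(f"{seed}:{prompt}")
    return rng.choice(["yes", "no", "maybe", "it depends"])

# Mechanical determinism: same state + same input -> same output, every run.
a = sample_reply("Is the deploy safe?", seed=1234)
b = sample_reply("Is the deploy safe?", seed=1234)
assert a == b

# But there is no logical determinism: a trivially rephrased input may land
# anywhere, whereas equivalent arithmetic formulations must agree.
def ADD(x, y):
    return x + y

assert ADD(2, 3) == 2 + 3  # different spellings, guaranteed same result
print(a, sample_reply("Is the deploy safe??", seed=1234))
```

The two answers printed at the end may or may not match; nothing in the system constrains them to, which is exactly the "higher level of fuzzy" being described.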

[–] jbloggs777@discuss.tchncs.de -4 points 6 hours ago

Nobody says to blindly trust it...

[–] prex@aussie.zone 4 points 7 hours ago (1 children)

...fact check claims

Risky use case. Besides, why bother when you have to fact-check the fact checker?

[–] jbloggs777@discuss.tchncs.de 0 points 6 hours ago* (last edited 6 hours ago)

It is about respecting everyone's time...

For example: if an executive claims "We don't have any solution to X in the company" in an email as justification for investment in a vendor, it can cost other people hours as they dig into it. But if an AI had fact-checked it first by searching code repos, wikis, and tickets, and found it wasn't true, then maybe that email wouldn't have been sent at all, or it would have acknowledged the existing product and led to a crisper discussion.

AI responses often only need a quick sniff by a human (eg. click the provided link to confirm)... whereas BS can derail your day.

We should share our knowledge and intelligence with AIs and people alike, and not ignorance. Use the tools at our disposal to avoid wasting others' valuable time, and encourage others to do the same.
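The pre-send check described in the comment above could be as simple as a keyword scan over internal docs before a "we have no solution to X" claim goes out. A minimal sketch, where the function name, paths, and claim are all invented for illustration:

```python
from pathlib import Path

def find_prior_art(claim_keywords, roots):
    """Naive pre-send check: scan internal docs for keywords before
    asserting that no internal solution exists.

    Returns the paths of Markdown files containing every keyword.
    """
    hits = []
    for root in roots:
        for path in sorted(Path(root).rglob("*.md")):
            text = path.read_text(errors="ignore").lower()
            if all(kw.lower() in text for kw in claim_keywords):
                hits.append(str(path))
    return hits

# Hypothetical usage: check before claiming there is no internal SSO solution.
# hits = find_prior_art(["single sign-on"], ["wiki/", "repos/"])
# if hits:
#     print("Existing work found:", hits)
```

A real version would search code hosting and ticket APIs rather than a local directory, but even this level of check is the "quick sniff" being argued for: cheap for the sender, and it spares everyone downstream.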