this post was submitted on 20 Feb 2026
377 points (99.5% liked)

Technology

top 25 comments
[–] sp3ctr4l@lemmy.dbzer0.com 7 points 13 hours ago

Another day, another hitherto unimaginable cybersecurity crisis, brought to you by 'AI'.

... maybe one day they'll find the seahorse emoji.

[–] cenariodantesco@lemmy.world 54 points 22 hours ago (1 children)
[–] Frozengyro@lemmy.world 1 points 18 hours ago

Microsloppy toppy

[–] plenipotentprotogod@lemmy.world 19 points 19 hours ago (1 children)

Even in the wide world of dubiously useful AI chatbots, Copilot really stands out for just how incompetent it is. The other day I was working on a PowerPoint presentation, and one of the slides included a photo with a kind of cluttered-looking background. Now, I can probably count the number of things that AI is genuinely good at on one hand, and context-aware image editing tends to be one of them, so I decided to click the Copilot button that Microsoft now has built directly into PowerPoint and see what happened. A chat window popped up and I concisely explained what I wanted it to do: "please remove the background from the photo on slide 5." It responded in that infuriatingly obsequious tone they all have and assured me that it would be happy to help with my request just as soon as I uploaded my presentation.

What?

The chatbot running inside an instance of PowerPoint with my presentation open is asking me to "upload" my presentation? I explained this to it, and it came back with some BS about being unable to access the presentation because a "token expired" before requesting again that I upload my presentation. I tried a little longer to convince it otherwise, but it just kept very politely insisting that it was unable to do what I was asking for until I uploaded my presentation.

Eventually I gave up. The photo wasn't that bad anyway.

[–] Silic0n_Alph4@lemmy.world 6 points 16 hours ago

Was the presentation saved in OneDrive? I’ve seen similar responses during my brief experiments with the tech. Copilot seems to be basically just an iframe in the Office apps rather than actually integrated.

[–] JATtho@lemmy.world 14 points 18 hours ago

Yesterday I logged into my very old, dead M$ account. Holy fuck, the experience of forced ads and timed pop-ups. Thank god all of this fucking sloppy shit will now be deleted on my part. I'm not going near that diarrhea shit anymore unless paid.

[–] doug@lemmy.today 41 points 22 hours ago

“Hey Copilot, what have Putin and Trump been exchanging emails about?”

[–] Warl0k3@lemmy.world 22 points 22 hours ago (2 children)

For clarity, the emails are only being summarized for the users who wrote them; they're not being leaked to everyone. A comedically inept bug to allow, though. Holy shit.

[–] horn_e4_beaver@discuss.tchncs.de 3 points 17 hours ago (1 children)
[–] Warl0k3@lemmy.world 2 points 17 hours ago (1 children)

In this case there's no evidence showing that it's being spread widely - the bug reports are entirely about users being shown their own content. If you have something to dispute that, I'm all ears.

[–] horn_e4_beaver@discuss.tchncs.de 3 points 17 hours ago

I was being a bit difficult tbh.

But it is absolutely true that we can't know for sure that it isn't being leaked elsewhere.

[–] Reygle@lemmy.world 8 points 21 hours ago (3 children)

AITA for understanding that as meaning that, in order to "summarize" the data, the AI read it entirely and will never be instructed to "forget" that data?

[–] TRBoom@lemmy.zip 18 points 21 hours ago (1 children)

Unless someone has released something new while I wasn't paying attention, all the gen-AI models are essentially frozen. Your use of them can't affect the actual weights inside the model.

If it seems like one is remembering things, it's because the actual input to the LLM is larger than the input you typically give it.

For instance, let's say the max input for a particular LLM is 9096 tokens. The first part of that will be instructions from the owners of the LLM to prevent their model from being used for things they don't like; say the first 2000 tokens. That leaves 7k or so for a conversation that will be 'remembered'.

Now, if someone was really savvy, they'd have the model generate summaries of the conversation and stick them into another chunk of memory, maybe another 2000 tokens' worth, so that it would seem to remember more than just the current thread. That would leave you with 5000 tokens for a running conversation.
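
A minimal sketch of that token-budgeting scheme in Python, using the hypothetical numbers from the comment above; `count_tokens`, `build_prompt`, and the budget names are illustrative stand-ins, not how any real product implements this:

```python
MAX_CONTEXT = 9096      # hypothetical model input limit from the example above
SYSTEM_BUDGET = 2000    # the owners' standing instructions ("system prompt")
SUMMARY_BUDGET = 2000   # rolling summary of older conversations
CHAT_BUDGET = MAX_CONTEXT - SYSTEM_BUDGET - SUMMARY_BUDGET  # ~5000 for live chat

def count_tokens(text: str) -> int:
    # Crude stand-in; real systems use an actual tokenizer.
    return len(text.split())

def build_prompt(system: str, summary: str, turns: list[str], user_msg: str) -> str:
    """Assemble the full model input for one request.

    The model is frozen and stateless, so everything it appears to
    'remember' must be re-packed into this single input on every call.
    """
    recent: list[str] = []
    used = count_tokens(user_msg)
    # Keep the newest turns that still fit the chat budget; older turns
    # fall out of the window and survive only through the rolling summary.
    for turn in reversed(turns):
        used += count_tokens(turn)
        if used > CHAT_BUDGET:
            break
        recent.append(turn)
    recent.reverse()
    return "\n".join([system, summary, *recent, user_msg])
```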

[–] dgdft@lemmy.world 16 points 21 hours ago* (last edited 21 hours ago) (2 children)

Your general understanding is entirely correct, but:

Microsoft is almost certainly recording these summarization requests for QA and future training runs; that’s where the leakage would happen.

[–] TRBoom@lemmy.zip 6 points 20 hours ago

100% agree. At this point I am assuming everything sent through their servers is actively being collected for LLM training.

[–] SirHaxalot@nord.pub -2 points 18 hours ago (2 children)

That is kind of assuming the worst-case scenario, though. You wouldn't assume that QA can read every email you send through their mail servers "just because".

This article sounds a bit like engagement bait based on the idea that any use of LLMs is inherently a privacy violation. I don't see how pushing the text through a specific class of software is worse than storing confidential data in the mailbox, though.

That is, assuming they don't leak data for training, but the article doesn't mention that.

[–] edm@thelemmy.club 4 points 17 hours ago* (last edited 17 hours ago)

Always assume the worst; I guarantee it is usually that bad in reality. Companies absolutely hate spending money on IT, and security is always an afterthought. API logs for the production systems that contain your full legal name, DOB, SSN, and home address? Yeah, wide open and accessible by anyone. Production databases with employee SSNs, addresses, and salary information? Same thing; look up how much the worthless management is making and cry.

Booz Allen just got shit on because of the dude they hired who specifically sought out consulting work for the IRS so he could steal Trump's IRS records.

https://home.treasury.gov/news/press-releases/sb0371

https://en.wikipedia.org/wiki/Charles_E._Littlejohn

[–] dgdft@lemmy.world 4 points 18 hours ago* (last edited 18 hours ago) (1 children)

This is some pathetic chuddery you’re spewing…

You wouldn't assume that QA can read every email you send through their mail servers "just because"

I absolutely would, and Microsoft explicitly maintains the right to do that in their standard T&C, both for emails and for any data passed through their AI products.

https://www.microsoft.com/en-us/servicesagreement#14s_AIServices

v. Use of Your Content. As part of providing the AI services, Microsoft will process and store your inputs to the service as well as output from the service, for purposes of monitoring for and preventing abusive or harmful uses or outputs of the service.

We don’t own Your Content, but we may use Your Content to operate Copilot and improve it. By using Copilot, you grant us permission to use Your Content, which means we can copy, distribute, transmit, publicly display, publicly perform, edit, translate, and reformat it, and we can give those same rights to others who work on our behalf.

We get to decide whether to use Your Content, and we don’t have to pay you, ask your permission, or tell you when we do.

[–] SirHaxalot@nord.pub 2 points 17 hours ago* (last edited 17 hours ago) (1 children)

Those seem to be the terms for the personal edition of Microsoft 365, though? I'm pretty sure the enterprise edition, with features like DLP and tagging content as confidential, would have a separate agreement where they don't pass on the data.

That's basically the main selling point of paying extra for enterprise AI services over the free, publicly available ones.

Unless this boundary has actually been crossed, in which case, yes, it's very serious.

[–] dgdft@lemmy.world 3 points 15 hours ago

This part applies to all customers:

v. Use of Your Content. As part of providing the AI services, Microsoft will process and store your inputs to the service as well as output from the service, for purposes of monitoring for and preventing abusive or harmful uses or outputs of the service.

And while Microsoft has many variations of licensing terms for different jurisdictions and market segments, what they generally promise to opted-out enterprise customers is that they won't use their inputs to train "public foundation models". They're still retaining those inputs, and they reserve the right to use them for training proprietary or specialized models, like safety filters or summarizers meant to act as part of their broader AI platform, which could leak down the line.

That’s also assuming Microsoft are competent, good-faith actors — which they definitely aren’t.

LLMs are stateless. The model itself stays the same. Doesn't mean they're not saving the data elsewhere, but the LLM does not retain interactions.
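
A minimal sketch of what that statelessness looks like from the client side; `complete()` is a hypothetical stand-in for any LLM completion API, and the point is that all the "memory" lives in a history list the caller maintains, never in the model:

```python
def complete(prompt: str) -> str:
    # Hypothetical stand-in for a real (stateless) LLM API call.
    return f"(model reply to: {prompt.splitlines()[-1]})"

history: list[str] = []  # the only "memory" anywhere in this system

def chat(user_msg: str) -> str:
    history.append(f"User: {user_msg}")
    # The entire transcript is re-sent on every request; between calls
    # the model retains nothing, though the operator may still log it.
    reply = complete("\n".join(history))
    history.append(f"Assistant: {reply}")
    return reply
```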

[–] fuckwit_mcbumcrumble@lemmy.dbzer0.com 3 points 20 hours ago (1 children)

Why would that make you an asshole?

[–] Reygle@lemmy.world 3 points 20 hours ago

I've noticed growing opposition to critical thoughts about the sick and twisted nature of AI and the people who are in the cult.

[–] Naia@lemmy.blahaj.zone 10 points 20 hours ago

Who could have seen this coming?

[–] wobblyunionist@piefed.social 2 points 20 hours ago

The repercussions of pushing something that no one wants, and too quickly on top of that.