this post was submitted on 20 Feb 2026
380 points (99.5% liked)
Technology
This is some pathetic chuddery you’re spewing…
I absolutely would, and Microsoft explicitly reserves the right to do that in their standard T&C, both for emails and for any data passed through their AI products.
https://www.microsoft.com/en-us/servicesagreement#14s_AIServices
Those seem to be the terms for the personal edition of Microsoft 365, though. I'm pretty sure the enterprise edition, which has features like DLP and tagging content as confidential, has a separate agreement under which they don't pass on the data.
That's basically the main selling point of paying extra for enterprise AI services over the free, publicly available ones.
Unless this boundary has actually been crossed, in which case, yes, it's very serious.
This part applies to all customers:
And while Microsoft has many variations of licensing terms for different jurisdictions and market segments, what they generally promise opted-out enterprise customers is that they won't use their inputs to train "public foundation models". They still retain those inputs, and they reserve the right to use them for training proprietary or specialized models, like safety filters or summarizers meant to act as part of their broader AI platform, through which the data could leak down the line.
That’s also assuming Microsoft are competent, good-faith actors — which they definitely aren’t.