submitted 1 year ago by ijeff to c/technology@lemmy.world
[-] Fredselfish@lemmy.world 97 points 1 year ago

Wow, pocket change. Why are fines always so little? This is like fining the average person 3 cents. It won't stop them from just doing it again.

It needs to be a large percentage of their gross wealth. Musk is one of the richest men in the world; fine him billions.

[-] Stovetop@lemmy.world 37 points 1 year ago

Elon is just going to lay off one more engineer to make up the difference and then some.

[-] McJonalds@lemmy.world 9 points 1 year ago

cool. let's see his company crumble under his poor choices

[-] Edgelord_Of_Tomorrow@lemmy.world 21 points 1 year ago

The fines start small, it's the same in the EU. Then they get bigger until you're being threatened with 40% of worldwide revenue.

[-] ram@bookwormstory.social 43 points 1 year ago

Wow that's over 50% of the platform's current value!

[-] Pxtl@lemmy.ca 23 points 1 year ago

Remember when all the Musk fanboys were claiming that Musk cleaned up the CSAM and anybody who opposed him was obviously a pedophile? Pepperidge Farm remembers.

[-] jaybone@lemmy.world 1 points 1 year ago

He saved our children from the cave rescue menace.

[-] iamtheplatypus@lemmy.world 9 points 1 year ago

Musk won’t pay — like all other fines (invoices, rent), he commands his CEO to ignore them. He’ll say it’s “too high.”

[-] autotldr@lemmings.world 6 points 1 year ago

This is the best summary I could come up with:


SYDNEY, Oct 16 (Reuters) - An Australian regulator has fined Elon Musk's social media platform X A$610,500 ($386,000) for failing to cooperate with a probe into anti-child abuse practices, a blow to a company that has struggled to keep advertisers amid complaints it is going soft on moderating content.

Though small compared to the $44 billion Musk paid for the website in October 2022, the fine is a reputational hit for a company that has seen a continuous revenue decline as advertisers cut spending on a platform that has stopped most content moderation and reinstated thousands of banned accounts.

Most recently the EU said it was investigating X for potential violation of its new tech rules after the platform was accused of failing to rein in disinformation in relation to Hamas's attack on Israel.

"If you've got answers to questions, if you're actually putting people, processes and technology in place to tackle illegal content at scale, and globally, and if it's your stated priority, it's pretty easy to say," Commissioner Julie Inman Grant said in an interview.

Under Australian laws that took effect in 2021, the regulator can compel internet companies to give information about their online safety practices or face a fine.

Inman Grant said the commission also issued a warning to Alphabet's (GOOGL.O) Google for noncompliance with its request for information about handling of child abuse content, calling the search engine giant's responses to some questions "generic".


The original article contains 625 words, the summary contains 239 words. Saved 62%. I'm a bot and I'm open source!

[-] blazera@kbin.social 3 points 1 year ago

Is the fediverse doing anything better?

[-] squiblet@kbin.social 17 points 1 year ago* (last edited 1 year ago)

The Fediverse is a bunch of independent websites potentially connected by compatible software, not one entity, so there's not really a basis for comparison. You could ask about individual instances. But also, this is about "failing to cooperate with a probe into anti-child abuse practices", not hosting or failing to moderate material. Australian law says the regulator can ask sites about their policies and they have to at least respond.

[-] jaybone@lemmy.world 4 points 1 year ago

Does responding with the poop emoji count?

[-] blazera@kbin.social 1 points 1 year ago

The article has their response. Given the warning to Google as well, apparently the responses also have to be good enough for the regulator.

[-] squiblet@kbin.social 6 points 1 year ago

They said

X's noncompliance was more serious, the regulator said, including failure to answer questions about how long it took to respond to reports of child abuse, steps it took to detect child abuse in livestreams and its numbers of content moderation, safety and public policy staff.

So yes, all the questions need to be at least addressed and probably saying “we don’t do that because Elron doesn’t care about it” wouldn’t suffice either.

[-] blazera@kbin.social -2 points 1 year ago

Cool, see my first comment again

[-] squiblet@kbin.social 1 points 1 year ago

You mean, is the Fediverse doing any better? Why would I need to read that again?

[-] blazera@kbin.social -3 points 1 year ago

Because we've gone in a circle. I asked whether the site we are on right now is doing anything better with regard to this problematic material, since folks seem to care about Twitter's failure to address it. You responded that it's not about their lack of addressing the material, but their lack of a response to the regulatory inquiry. I pointed out that they did respond, and your response is that they actually need a good answer for how they are addressing the material. Which is the same premise as the article and what my first comment was about. It's hypocrisy, because the standard isn't being applied to the fediverse: no one is up in arms about our lack of automatic detection of problematic material or surveillance of private messaging. Because we care about privacy when we're not being blinded by well-intentioned Musk hate.

[-] squiblet@kbin.social 4 points 1 year ago* (last edited 1 year ago)

I posted from the article that they didn't respond to several questions:

X's noncompliance was more serious, the regulator said, including failure to answer questions about how long it took to respond to reports of child abuse, steps it took to detect child abuse in livestreams and its numbers of content moderation, safety and public policy staff.

I speculated that probably they also need adequate responses, but that's not what the article or the fine is about.

If one of the individual sites in the Fediverse were asked by Australian regulators, I bet they'd respond fully. It's not quite the same situation as Twitter, either: none of these sites is large enough to require many staff members, and they don't have their own live-streaming platform.

[-] doctorn@r.nf 7 points 1 year ago* (last edited 1 year ago)

Since there is no hierarchical top general moderator/admin, and every instance is supervised by its respective owners, responsibility for safety is technically forwarded to individual instance admins as far as their instance goes. Or that's what I make of it, at least; anyone feel free to correct me if I'm wrong. Also, the above conclusion doesn't account for any possible future law stating otherwise (decision-making entities have weird, unpredictable logic... 😅)

As for Mastodon itself, though, it could use some upgrades to its user management and reporting features: an option to automatically react (e.g. a temporary ban until reviewed) to certain categories of reports (like child abuse and extreme/shocking violence), so that anyone reported for those things can't simply continue until an admin sees and processes the report. Reports are also definitely not visible enough yet.
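The automation being suggested could be sketched roughly like this. To be clear, this is a hypothetical illustration, not anything Mastodon actually implements: the category names, the `Report` type, and the `triage` function are all invented for the example.

```python
from dataclasses import dataclass

# Hypothetical report categories that warrant an instant, reversible
# reaction before an admin has reviewed anything.
INSTANT_ACTION_CATEGORIES = {"child_abuse", "extreme_violence"}

@dataclass
class Report:
    reported_user: str
    category: str
    reviewed: bool = False

def triage(report: Report, suspended: set[str]) -> str:
    """Temp-suspend on high-severity reports; otherwise queue for admin review."""
    if report.category in INSTANT_ACTION_CATEGORIES and not report.reviewed:
        # Reversible action: the suspension is lifted (or made permanent)
        # when an admin actually processes the report.
        suspended.add(report.reported_user)
        return "temp_suspended_pending_review"
    return "queued_for_admin"
```

The point of the design is that the automatic action is temporary and reversible, so a false report causes at most a short suspension rather than a permanent ban without human review.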

[-] blazera@kbin.social 3 points 1 year ago

And things like automatic detection and direct message surveillance like these regulators are asking for?

[-] doctorn@r.nf 1 points 1 year ago

Well, if those become necessary, I'll just have to toss Mastodon, along with anything too well known, into the bin of government-ruined software and start using hidden services. I will never willfully comply with spyware, not even (read: especially not) government-approved spyware.

I have no idea if Mastodon has any plans to add those to the instance software, though... It probably will if lawfully obligated, I suppose, but I still sincerely hope not (just as I sincerely hope this proposal gets dismissed for the obvious privacy laws it contradicts and the vulnerabilities that backdoors create).

[-] abhibeckert@lemmy.world 7 points 1 year ago* (last edited 1 year ago)

"The Fediverse" is about 13,000 separate services that are each individually responsible for illegal content on their systems. Some probably aren't doing a good enough job, but most are, and the ones that fail have mostly been defederated.

And why wouldn't they? Many hands make light work, and the fediverse has tens of thousands of moderators to deal with far fewer posts than the X network. Twitter once had a decent moderation team, but Musk has gutted it.

this post was submitted on 15 Oct 2023
401 points (96.1% liked)
