submitted 4 months ago by ooli@lemmy.world to c/world@lemmy.world
[-] arc@lemm.ee 13 points 4 months ago* (last edited 4 months ago)

I've disabled personalised ads on YouTube and I see this sort of shit all the time. I've given up reporting them because 90% of the time the report is rejected. I don't even understand the rationale for rejecting it, because it's as obvious a scam as a scam can be - AI impersonation, fake endorsement, illegal advertising category. It's a scam, YouTube.

I don't even get why these ads appear in the first place. YouTube has transcription and voice / music recognition capabilities. How hard would it be to flag a suspicious ad and require a human to review it? Or search for duplicates under other burner accounts and zap them at the same time? Or have some kind of randomized audit based on trust, where new accounts get reviewed more frequently by experienced reviewers?
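The mechanisms suggested above (duplicate detection across burner accounts, plus trust-based random audits) could be sketched roughly like this. Everything here is illustrative - the function names, thresholds, and audit rates are made up, not anything YouTube actually runs:

```python
import hashlib
import random

# Illustrative sketch of the ideas above: fingerprint ad transcripts to catch
# the same ad re-uploaded under other burner accounts, and audit newer
# (less trusted) accounts far more often than established ones.

seen_fingerprints = {}  # fingerprint -> list of advertiser account ids


def fingerprint(transcript: str) -> str:
    """Normalize whitespace and case, then hash, so trivial re-uploads match."""
    normalized = " ".join(transcript.lower().split())
    return hashlib.sha256(normalized.encode()).hexdigest()


def flag_duplicates(account_id: str, transcript: str) -> list[str]:
    """Record this ad and return any other accounts that ran an identical one."""
    fp = fingerprint(transcript)
    dupes = seen_fingerprints.setdefault(fp, [])
    others = [a for a in dupes if a != account_id]
    dupes.append(account_id)
    return others


def audit_probability(account_age_days: int) -> float:
    """New accounts get human review most of the time; old ones rarely."""
    if account_age_days < 30:
        return 0.9
    if account_age_days < 365:
        return 0.3
    return 0.05


def needs_human_review(account_id: str, account_age_days: int,
                       transcript: str) -> bool:
    if flag_duplicates(account_id, transcript):
        return True  # same ad already seen under another account
    return random.random() < audit_probability(account_age_days)
```

A real system would obviously need fuzzier matching than an exact hash (scammers vary wording), but even this cheap version would catch verbatim re-uploads from fresh burner accounts.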

[-] r00ty@kbin.life 12 points 4 months ago

No no. This kind of automated "protection" is only used against their users, who are their product. Not the advertisers, who are their customer!

[-] arc@lemm.ee 1 points 4 months ago

There are other considerations here though. Google suffers reputational harm if users become victims through their platform. It becomes news, it creates distrust in users, and it generates friction with regulators and law enforcement. Users may be trained to be ad-averse or install ad blockers. In addition, these ads generate reports which cost time to process even if the complaints are rejected.

At the end of the day these scammers are not high profile advertisers and they're not valuable. They're burner accounts that pay cents to deliver their ads. They're ephemeral, get zapped, reappear and constantly waste time and resources. Given that YouTube can easily transcribe content and watermark it, it makes no sense to me that they wouldn't put some triggers in, e.g. if a new advertiser places an ad that mentions "Elon Musk", "Quantum AI" or other such markers, flag it for review.
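The trigger described above is cheap to sketch. The marker list, the "new advertiser" flag, and the function name are all hypothetical, assumed for illustration:

```python
# Illustrative sketch of the suggested trigger: run a new advertiser's
# transcript past a watchlist of known scam markers and queue any hit
# for human review. The watchlist below is made up for the example.

SCAM_MARKERS = ["elon musk", "quantum ai", "guaranteed returns"]


def flag_for_review(transcript: str, advertiser_is_new: bool) -> bool:
    """Flag a new advertiser's ad if its transcript hits any marker."""
    if not advertiser_is_new:
        return False
    text = transcript.lower()
    return any(marker in text for marker in SCAM_MARKERS)
```

Since YouTube already has the transcripts, a substring scan like this costs essentially nothing per ad; the only real expense is the human reviewer on the other end of the flag.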

[-] CileTheSane@lemmy.ca 4 points 4 months ago

How hard would it be to flag a suspicious ad and require a human to review it?

Hard? No. But then humans would have to be paid, which would slow the growth of the dragon hoard.

Better to have a computer analyze the ad that another computer thinks looks real.

[-] arc@lemm.ee 1 points 4 months ago

They have to have a human respond to each and every complaint about that ad. Seems more sensible to automate and flag suspicious ads before the complaints happen.

[-] LeroyJenkins@lemmy.world 3 points 4 months ago

they ain't gonna stop their customers from paying them more money

this post was submitted on 12 May 2024
397 points (97.6% liked)