For years, when Meta launched new features for Instagram, WhatsApp and Facebook, teams of reviewers evaluated possible risks: Could it violate users' privacy? Could it cause harm to minors? Could it worsen the spread of misleading or toxic content?
Until recently, what are known inside Meta as privacy and integrity reviews were conducted almost entirely by human evaluators.
Really? Humans? Maybe even qualified humans? Huh! Never would've thought that.
Set your timers. We're going to hear about an unethical decision made by this system in 5, 4, 3, ...
I would think the same so-called AI that told us to regularly eat rocks, that thinks it's still 2024, or that "hallucinates" in other ways would make conquering our planet harder, particularly if these aliens are unaware of the concept of deception.