Of course, organisations that directly act to facilitate these effects are acting against the interests of humankind. Even so, they can only do so because the public gives them permission.
"AI" (LLMs, CNNs, deep learning, and all the hype around them) supplanting much genuine thought; fuel being burned, polluting the atmosphere and heating the world; privacy being encroached upon by invasive tracking; the censorship and sterilisation of the internet. I argue that these things happen because individual consumers (that is, the vast majority of people) permit them to happen. Companies generally act to maximise profit, but they can only earn revenue if people are buying. And when a person buys a product, they vote with their wallet: they declare that they are OK with that product and how it was made.
It is usually easiest to go with the flow. To buy cage eggs because they are $3 cheaper than free range. It just makes economic sense. I argue that we should never do this. We must only ever buy the option that fits our ethical minimum, even at a higher cost.
One might object that many people can't afford the "ethical" product and will need to fall back on the cheaper option. I argue that they should go without. I have recently been in a position where I couldn't afford the usual weekly shop. I could have saved good money by buying cage eggs. Still: either I buy free range, or I go without and substitute something else.
There is always an alternative. Don't like Facebook? Start a blog. Don't want those new AI features in your preferred app? Uninstall it and use one that fits your needs; if it comes down to it, learn how to make your own. Want to slow climate change? Be conscious of your energy use and burn less fuel: swap your car for a bike, or use public transport if you can. EVs can be good, too.
Yes, it will be inconvenient. It might be painful. But isn't that worth it, to prove that your own principles mean something? That you are more than a consumer?
Most of the people who can read this always have a choice. Having lived through the last few decades, we have learned the effects of complacency. We now know better, and must choose better.
This is basically another formulation of Immanuel Kant's categorical imperative, sometimes phrased: "Act only according to that maxim whereby you can at the same time will that it should become a universal law."
Which is itself basically another version of the Golden Rule: do unto others as you would have them do unto you.
I've heard advocates for the Platinum Rule as well: Do unto others as they would have you do unto them.
My point is that even the great thinkers of history, in conversation with each other over the millennia, have not gotten past the problem that morality is fundamentally subjective, yet some lust for justice keeps us arguing in favour of some version of objective, universal morals.
One of the more helpful tools I've found is John Rawls's veil of ignorance, also called the original position argument. Basically: if you were redesigning society around your new rules, but had no idea which position you would hold in the new society (in the thought experiment, your position is randomly assigned or impossible to predict), would you consider your rules successful and your society valid?
This tool allows for a more objective evaluation of many subjective points of view: it forces you to weigh every position in society at once, as if averaging across them.
All of these tools fail in a particular way, though: they individualise the search for ideal behaviour. They ask: what morals would be best in a perfectly designed society, of which I will be the architect? Perhaps no individual is capable of devising a universal system of behaviour.
Locked into their subjective experience of the world, how could any individual operating within such a system gain the conceptual distance necessary to redesign the whole? Rather, we are all shaped by society even as we attempt to shape it, aided and resisted by our resources and allies, in a chaotic and turbulent system that we cannot exist outside of. Even with a plan for a universal morality, how could you possibly implement it without contradiction?