I have no idea how alignment works. I can well imagine that it might make sense to train artificial intelligence using values that have been determined through democratic consultation with large population groups. But let's assume that the AI is already perfectly aligned. Could it be that it knows much better than humans themselves what is good for humans? If you ask a small child what it wants, it might say: a mountain of chocolate. But the parents would know very well that this mountain of chocolate is not good for the child. Another question, of course, is under what circumstances one could trust such an AI.
This question is also relevant to how we imagine an ideal future with artificial general intelligence/artificial super intelligence. If machines are eventually able to make much more thoughtful decisions than humans, would it still make sense to involve humans democratically in the decision-making process (assuming that AI would act in the interests of humanity)?
Perhaps a hybrid solution would be appropriate: several chat groups, some public and others used only by closed groups within the community. I would favor such a decentralized approach, where there is no central chat group for all participants but people instead organize themselves into smaller groups.
A related question is whether chat groups that deal with company matters should be public and readable by everyone. On the one hand, transparency is good because this company is supposed to belong to everyone in the world, and therefore everyone should have the opportunity to follow the company's affairs. On the other hand, those who participate in a public chat group expose themselves. Sometimes it is better for people to have a protected environment in which they can express themselves freely without strangers being able to access what they have written.
It would probably be necessary to make certain information within the company accessible only to a small group of people who undertake to treat this knowledge confidentially. Perhaps it would make sense to have these people elected by the community. However, it must of course be ensured that these people are trustworthy.
Okay, I understand. You mean that with electronic elections, you can't have both anonymity and verifiability. I agree. Either you hold elections that are anonymous but could theoretically be manipulated, or you publish who voted for what, making the result verifiable. The trade-off between anonymity and verifiability is not an easy one. Here, we can discuss whether elections should be secret: https://lemmy.ml/post/38737498
One possibility would be to organize events at different locations around the world where digital identification codes are handed out to all participants, enabling them to vote online during the next time window (e.g., six months). If the codes are handed out at the same time everywhere in the world, no one can collect more than one code per time slot.
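The one-code-per-time-slot idea could be sketched roughly like this (a minimal Python illustration; the class, the code format, and the slot label are all hypothetical, not part of any existing system):

```python
# Minimal sketch of the one-code-per-time-slot idea.
# All names and the code format are hypothetical illustrations.
import secrets

class CodeRegistry:
    """Issues one anonymous voting code per time slot and tracks redemption."""

    def __init__(self, slot_id):
        self.slot_id = slot_id   # hypothetical slot label, e.g. "2025-H1"
        self.issued = set()      # codes handed out at in-person events
        self.redeemed = set()    # codes already used to vote

    def issue_code(self):
        # A code is generated at the event; the person holding it stays anonymous.
        code = secrets.token_urlsafe(16)
        self.issued.add(code)
        return code

    def cast_vote(self, code, choice, tally):
        # Reject unknown or already-used codes: one code, one vote per slot.
        if code not in self.issued or code in self.redeemed:
            return False
        self.redeemed.add(code)
        tally[choice] = tally.get(choice, 0) + 1
        return True

registry = CodeRegistry("2025-H1")
tally = {}
c1 = registry.issue_code()
registry.cast_vote(c1, "option A", tally)  # accepted
registry.cast_vote(c1, "option B", tally)  # rejected: code already redeemed
```

The point of the sketch is only that simultaneous in-person distribution bounds each person to one code, and the registry enforces one vote per code.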
Decidim is a digital open-source platform for citizen participation that can also be used for online voting: https://decidim.org/
That's right. It is probably necessary to create a database containing identification details for all participants. If such a database exists, it should either contain only information that cannot be used for identity theft, or it should be managed by a trustworthy authority that ensures this data is not misused.
That's true. When holding public elections, one should simultaneously try to create an environment in which voters with unpopular opinions are not suppressed.
The problem with secret ballots is that they are much easier to manipulate. If it is public who voted for what, votes cannot easily be fabricated without it being noticed. Especially with online elections, which are easy to manipulate, I think transparency in the voting process is the best protection against fraud.
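The verifiability argument can be made concrete with a small sketch: if every (voter, choice) pair is published, anyone can recompute the tally themselves and spot fabricated or duplicated votes. This is only an illustration of the principle, not a real voting protocol, and all names are made up:

```python
# Sketch of a fully public ballot: every (voter, choice) pair is published,
# so anyone can recompute the result and detect fabricated or altered votes.
# Voter names and options are illustrative only.
from collections import Counter

public_ledger = [
    ("alice", "option A"),
    ("bob",   "option B"),
    ("carol", "option A"),
]

def verify_tally(ledger, announced):
    """Anyone can rerun this check against the published ledger."""
    voters = [v for v, _ in ledger]
    if len(voters) != len(set(voters)):
        return False  # a voter appearing twice would be visible fraud
    return Counter(choice for _, choice in ledger) == announced

print(verify_tally(public_ledger, Counter({"option A": 2, "option B": 1})))  # True
```

The price of this verifiability is exactly the loss of anonymity discussed above: the ledger only works as a fraud check because each vote is attributable to a named voter.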
Yes, I agree with you. I would also like to see the process of AI alignment be a democratic process that is regularly adjusted to reflect people's values.