[-] inspxtr@lemmy.world 16 points 1 month ago

tell me more about the “almost” part …

[-] inspxtr@lemmy.world 14 points 7 months ago

These are mostly, if not all, positive. I’d be curious whether you can somehow make it list a mixture of positive and non-positive things before 2021, and then see where it goes from there.

[-] inspxtr@lemmy.world 15 points 8 months ago

Based on the Privacy Guides suggestions page itself, SimpleX Chat would be the next in line to try.

Briar is Android-only, AFAIK. Matrix/Element does offer E2EE chat/video, but according to the page it’s not recommended for long-term sensitive use.

Regardless, given the current climate against encryption, any app that stays will face the same conundrum Signal does about leaving or abiding by the law. The ones that abide may need more scrutiny, of course.

[-] inspxtr@lemmy.world 17 points 10 months ago

I’m out of the loop here. I thought Cantonese was widely spoken in China (and in other parts of the world with Chinese immigrants/descendants). So even in China (e.g. Guangdong), is Cantonese really used that little?

[-] inspxtr@lemmy.world 16 points 10 months ago

While I agree it has become common knowledge that they’re unreliable, this adds to the growing list of examples that corporations, big organizations, and governments can draw on, either to abstain from using them, or at least to understand these various cases and their nuances well enough to know how to integrate them.

Why? I think partly because many of these organizations are racing to adopt them, whether to cut costs or chase the hype, or are too slow to regulate them, and there are (or could be) very good uses that justify adoption in the first place.

I don’t think a blanket notion of “don’t trust them” is good enough. I think we need multiple examples of the good, the bad, and the questionable across different domains to inform the people in charge, the people using these systems, and the people who might be affected by their use.

It’s a bit like the recent DEF CON event on exploiting LLMs: it’s not enough to have some intuition about their harms; the people at the event aimed to demonstrate the extremes of such harms, AFAIK. These efforts can help developers and researchers mitigate them, as well as show concretely, to anyone trying to adopt these systems, how harmful they could be.

Regulators also need these examples in specific domains so they can craft informed policies, sometimes by building on or amending policies that already exist in those domains.

[-] inspxtr@lemmy.world 14 points 10 months ago* (last edited 10 months ago)

I have never hosted a bridge before so I may get things wrong. Please correct me where I do.

I assume this channel may be public, so any privacy concern needs to take that into account.

In terms of implementation, I was thinking there would be one channel hosted on Discord and one room on Matrix, bridged together.

The benefit is that Matrix users and Discord users can participate in the same conversation without having to create an account on the other service. That way, Matrix users don’t have to create a Discord account or download the Discord app, which is a good outcome for privacy.
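For illustration, most bridge setups boil down to running a bridge service with credentials on both networks and pairing one room with one channel. Here’s a hypothetical config sketch; the field names are purely illustrative and don’t come from any specific bridge project’s actual schema:

```yaml
# Hypothetical bridge config (illustrative field names, not a real bridge's schema).
# The bridge runs as its own service, authenticated on both networks,
# and relays messages between each paired room/channel.
matrix:
  homeserver: https://matrix.example.org
  access_token: "<matrix bot access token>"   # bot account invited to the Matrix room
discord:
  bot_token: "<discord bot token>"            # bot added by the Discord server owner
pairs:
  - matrix_room: "!roomid:example.org"        # Matrix room to bridge
    discord_channel: "123456789012345678"     # Discord channel ID
```

In practice this means both sides need admin consent: someone has to invite the bridge bot into the Matrix room, and the Discord server owner has to add the bot and grant it access to the channel.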

Edit: I have a faint memory that this is possible if the owners of the channel set it up on both ends. But I can’t find what the bridge is called; maybe it’s a separate service from Discord? Or I may have misunderstood things.

[-] inspxtr@lemmy.world 15 points 11 months ago

Has anyone made a data request, especially under GDPR, to confirm this?

[-] inspxtr@lemmy.world 19 points 11 months ago

This reminds me of an artistic experiment I heard of from Lauren McCarthy, in which she went on a date while streaming the whole thing for people to watch, and viewers could even tell her what to say and do. I believe the two ended up marrying. I might have butchered the description, as I can’t remember the exact details. I wonder what folks think of the comparison with this.

[-] inspxtr@lemmy.world 15 points 11 months ago

lol there really is an xkcd for everything!

[-] inspxtr@lemmy.world 18 points 11 months ago* (last edited 11 months ago)

As much as I would really like that, it’s a catch-all statement that isn’t realistic. Unfortunately, Google has its claws in enterprises, universities, and organizations all over the world, across many domains.

I don’t believe “stop using it” is good enough, since only a very small minority realistically could.

This needs to be paired with proper legislation, as others have said, with the EU as an example.

If you have friends or family who are Google employees, please raise this issue with them. Change needs to come from the inside too, in addition to top-down pressure from regulation.

[-] inspxtr@lemmy.world 17 points 11 months ago* (last edited 11 months ago)

I believe that with humans, the limits on our capacity to know, create, and learn, and the limited contexts in which we apply that knowledge and skill, may actually be better for creativity and relatability; knowing everything isn’t always optimal, especially when it comes to subjective experience. Such limitations may also protect creators from certain copyright claims: one idea can occur to many independent creators and be implemented in ways that are broadly similar or vastly different. And usually we, as humans, develop a work ethic of attributing the inspirations for our work. There are others who steal ideas without attribution, but that’s where the law comes in to settle it.

On the side of tech companies using creators’ work for training, ~~AI~~ gen tech learns at a vastly different scale, slurping up their work without attributing them. If we’re talking about the mechanics of creativity, ~~AI~~ gen tech seems to have been given a huge advantage already. Plus, artists and creators learn and make their work with some context, and sometimes with meaning. Commercial work aside, I’m not entirely sure the products ~~AI~~ gen tech creates carry such specificity. Maybe they do, with some interpretation?

Anyway, I think the larger debate here is about compensation and attribution. How is it fair for big companies with a lot of money to take creators’ work, with little or no payment or attribution, and then use these technologies to make more money?

EDIT: replace AI with gen(erative) tech

[-] inspxtr@lemmy.world 15 points 11 months ago

I’ve been seeing quite a few older posts from 2–3 years ago popping up lately. I wonder why.

