this post was submitted on 13 Jul 2023
31 points (100.0% liked)

Technology


I know a lot of people want to interpret copyright law so that allowing a machine to learn concepts from a copyrighted work counts as copyright infringement, but what people need to consider is that all this will do is keep AI out of the hands of regular people and place it specifically in the hands of people and organizations wealthy and powerful enough to train it for their own use.

If this isn't actually what you want, then what's your game plan for placing copyright restrictions on AI training that will actually work? Have you considered how it's likely to play out? Are you going to be able to stop Elon Musk, Mark Zuckerberg, and the NSA from training an AI on whatever they want and using it to push propaganda on the public? As far as I can tell, all that copyright restrictions will accomplish is to concentrate the power of AI (which we're only beginning to explore) in the hands of the sorts of people who are the least likely to want to do anything good with it.

I know I'm posting this in a hostile space, and I'm sure a lot of people here disagree with my opinion on how copyright should (and should not) apply to AI training, and that's fine (the jury is literally still out on that). What I'm interested in is what your end game is. How do you expect things to actually work out if you get the laws that you want? I would personally argue that an outcome where Mark Zuckerberg gets AI and the rest of us don't is the absolute worst possibility.

[–] PabloDiscobar@kbin.social 2 points 2 years ago (15 children)

You have not clearly defined the danger. You just said "AI is here". Well, lawyers are here too, and they have the law on their side. AI also threatens their business model, so they will probably show no mercy and work full time on the subject.

Wealthy and powerful corporations fear the law above anything else. A single parliament can shut down their activity better than anyone else on the planet.

Maybe you're speaking from the point of view of a corrupt country like the USA, but the EU parliament, which BTW doesn't host any GAFAM, is totally ready to strike hard at businesses founded on AI.

See, people don't want to lose their jobs to a robot, and they will fight for them. This creates a major threat to AI: people destroying data centers. They will do it. Their interests will converge with the interests of people who care about global warming. Don't take AI as something inevitable. An AI depends heavily on resources, generates unemployment and pollution, and offers questionable value.

An AI requires:

Energy
Water
High tech hardware
Network
Security
Stability
Investments

It's like a nuclear power plant, but more fragile. If an activist group takes down a data center hosting an AI, who will blame them? The jury will take turns high-fiving them.

Wow, you have this all planned out, don't you?

If that's what Europe is like, they'll build their data centers somewhere else. Like the corrupt USA. Again, you'll be taking away your access to AI, not theirs.
