786
submitted 9 months ago* (last edited 9 months ago) by gedaliyah@lemmy.world to c/technology@lemmy.world

They frame it as though it's for user content; more likely it's to train AI. In fact it gives them the right to do almost anything they want - up to (but not including) stealing the content outright.

[-] mods_are_assholes@lemmy.world 8 points 9 months ago

Most of those laws are unenforceable, and some are even undetectable.

Your ideology is getting in the way of objective fact.

[-] xor@infosec.pub 3 points 9 months ago

1 & 2 are... #3 is impossible, though...

[-] nintendiator@feddit.cl 1 points 9 months ago

Are you kidding? #3 is the second most feasible one of that set; it's just a matter of setting up Reproducible / Deterministic Builds.

If you can't replicate a result with control of the software version + the input data + the randomness seed, then "something else is going on".
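A toy sketch of that claim: with the code version, inputs, and seed all pinned, a (trivially simplified, hypothetical) training loop is a pure function of its inputs and reproduces exactly:

```python
import random

def train_toy_model(data, seed):
    # Tiny stand-in for a training run: seeded shuffling and
    # seeded "weight updates" make the outcome depend only on
    # (code version, input data, randomness seed).
    rng = random.Random(seed)
    weights = [0.0] * 4
    for _ in range(100):
        sample = rng.choice(data)        # seeded data sampling
        i = rng.randrange(len(weights))  # seeded update target
        weights[i] += 0.01 * sample
    return [round(w, 6) for w in weights]

data = [1.0, -0.5, 2.0, 0.25]
run_a = train_toy_model(data, seed=42)
run_b = train_toy_model(data, seed=42)
assert run_a == run_b  # same inputs + same seed -> identical result
```

Real training pipelines add many more sources of variation (parallelism, hardware, library versions), which is exactly what the replies below dispute.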

[-] xor@infosec.pub 2 points 9 months ago

deterministic builds?
the "builds" in ai are thousands of hours of supercomputers randomly mutating and evolving a gigantic neural network...
the inner workings of such are very much a black box.

to try to save that in a perfectly reproducible way is completely unreasonable, and simply will never happen.
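for example, floating-point addition isn't even associative, so hardware that sums the same numbers in a different order can get a different answer - one of several sources of nondeterminism in large parallel training runs (toy illustration):

```python
# Floating-point addition is not associative: a parallel
# reduction that adds the same values in a different order
# than a sequential one can round differently.
vals = [1e16, 1.0, -1e16, 1.0]

# Sequential left-to-right: the 1.0 is absorbed into 1e16.
left_to_right = sum(vals)                             # -> 1.0

# Pairwise (as a parallel reduction might group them):
pairwise = (vals[0] + vals[2]) + (vals[1] + vals[3])  # -> 2.0

assert left_to_right != pairwise
```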

you could require all of the input data to be documented and saved, but people would lie, and you're talking about a very large amount of data being saved for however long... also not really reasonable...

and you also have to understand that there are a lot of countries in the world, computers are all connected on the internet, and ai will just run in other countries - and illegal systems would still run in whatever country is dumb enough to try to put completely unreasonable and expensive extra requirements like that on it.

there's a whole field of study trying to reverse engineer neural networks after they're created... i.e. it's a black box even to the people that make it

[-] mods_are_assholes@lemmy.world -3 points 9 months ago

The only way to make a clear-text LLM is to convert most of the hard storage humanity produces over the next ten years into storage for it, and we'd need about 1/4 the processing power of bitcoin mining to have it run at ChatGPT speeds.

That said, blackbox self-modifying AIs will be the models that win the usefulness wars, and if one country outlaws them, the only result is that it will have no defense against countries that don't feel the need to comply.

[-] xor@infosec.pub -1 points 9 months ago

so, your first paragraph isn't true. but i'll point out that bitcoin is now mined entirely with ASIC chips, which can only hash bitcoin transactions... they can't compute anything else, so it's not really comparable...

the second part i do agree with, except for "self-modifying"... although that doesn't seem too far away...

[-] mods_are_assholes@lemmy.world -1 points 9 months ago

You really don't understand how LLM data blobs are created, do you? Nor how ridiculously compressed they are?
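For scale, a rough back-of-envelope with assumed round numbers (a hypothetical 70B-parameter model and ~10T-token corpus, not measured figures for any real system):

```python
# Assumed round numbers, purely illustrative:
params = 70e9            # hypothetical 70B-parameter model
bytes_per_param = 2      # 16-bit (fp16/bf16) storage
model_bytes = params * bytes_per_param   # 1.4e11 bytes = 140 GB

tokens = 10e12           # hypothetical ~10T training tokens
bytes_per_token = 4      # ~4 bytes of raw text per token
corpus_bytes = tokens * bytes_per_token  # 4e13 bytes = 40 TB

# Under these assumptions the weights are ~286x smaller
# than the text they were trained on.
ratio = corpus_bytes / model_bytes
```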

[-] xor@infosec.pub -1 points 9 months ago

what does that have to do with anything?

this post was submitted on 16 Feb 2024