this post was submitted on 26 Dec 2025
31 points (97.0% liked)

Technology

How will the upcoming AI legislation around the world, like voice cloning prevention and disclosure requirements for technical details of models, affect open source or self-hosted models?

top 17 comments
[–] riskable@programming.dev 10 points 3 days ago (4 children)

How do you implement voice cloning prevention? Human voices aren't that unique. Also, AI voice cloning isn't perfect. So... at what threshold is a voice considered "cloned" from a legal perspective?

I mean, people couldn't tell the difference between Scarlett Johansson and OpenAI's "Sky" voice, which was not cloned.
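For context on why a legal threshold is slippery: speaker-verification systems typically reduce a voice to an embedding vector and compare cosine similarity against a tuned cutoff. The sketch below is purely illustrative; the 4-dimensional embeddings and the 0.75 threshold are invented values, not from any real system.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

THRESHOLD = 0.75  # arbitrary; real systems tune this on labeled data

def same_speaker(emb_a, emb_b, threshold=THRESHOLD):
    """Declare two voices 'the same' if similarity clears the cutoff."""
    return cosine_similarity(emb_a, emb_b) >= threshold

# Hypothetical embeddings (real ones have hundreds of dimensions)
original = [0.9, 0.1, 0.4, 0.2]
clone    = [0.88, 0.12, 0.41, 0.19]  # nearly identical vector
stranger = [0.1, 0.9, 0.2, 0.7]      # different speaker
```

Wherever the threshold lands, some natural sound-alikes fall above it and some imperfect clones fall below it, which is exactly the ambiguity the comment is pointing at.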

[–] Kissaki@beehaw.org 1 points 1 day ago* (last edited 1 day ago)

If you're selling or publishing a voice in a way that impersonates another person without their consent, that may be identifiable and prosecutable. "Generate with X's voice." "Talk to X." Etc. Exact wording is not necessary if intent is evident from pictures or evasive descriptions making an obvious implication.

If prosecution can find evidence of cloning/training, that can also serve as a basis.

In these ways it doesn't have to be about the similarity or quality of the produced voice at all.

[–] kayzeekayzee@lemmy.blahaj.zone 6 points 3 days ago* (last edited 3 days ago)

I think the main idea is to codify the act as illegal, so if it's discovered that someone used voice cloning (for, like, a telephone scam or something), then they can be charged for that too. But yeah, it might be hard to prove without a lot of evidence.

[–] lowspeedchase@lemmy.dbzer0.com 1 points 3 days ago (2 children)

Human voices aren’t that unique.

Duuuuuuuddeeeeee lol... come on now.

[–] Powderhorn@beehaw.org 4 points 3 days ago

My ex had an uncle with exactly my voice. Cadence, accent, inflection ... it was uncanny.

[–] riskable@programming.dev 1 points 3 days ago

From the perspective of human perception, people's voices are only unique to about one in a few thousand. There are a few outliers with much more distinctive voices, but believe it or not, there are a lot of people walking around on this earth who sound just like Morgan Freeman, James Earl Jones, and other voices people think are super unique.

I view an anti-cloning law as too risky: it sounds exactly like the type of thing that would prevent Grandma from cloning her own voice before going in for surgery, because it just so happens to sound a lot like a famous person's.
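The "one in a few thousand" figure implies sound-alikes are common at scale. A back-of-envelope sketch, assuming (purely for illustration, not a measured figure) that a voice is perceptually distinct only to about 1 in 5,000 listeners:

```python
# Back-of-envelope: if a voice is perceptually distinct only to about
# 1 in K people, how many sound-alikes exist in a population of N?
# K = 5_000 is an illustrative assumption, not a measured figure.
K = 5_000
N = 8_000_000_000  # rough world population

sound_alikes = N // K  # expected sound-alikes worldwide
print(sound_alikes)
```

Under that assumption, over a million people would plausibly trip a "sounds like X" test for any given voice, which is the Grandma problem in a nutshell.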

[–] BCsven@lemmy.ca 1 points 3 days ago (2 children)

One country was already setting up copyright on your voice so AI services can be served takedown notices. Voices are quite unique; it's how my bank verifies who I am. If somebody clones my voice via AI, it could fool that login system.

[–] riskable@programming.dev 4 points 3 days ago (1 children)

I work for a huge bank and we tested voice recognition technology: even under the best circumstances (high quality microphone with no ambient noise in a sound booth), it was far, far too easy to copy someone else's voice by simply playing back a sliced-up recording, à la Sneakers (the movie). We ruled it out as an option over a decade ago.

The problem was fundamental and had nothing to do with the quality of the technology. If your bank is using your voice as a unique identifier they had better be using something else in addition to it! Because it's super insecure.
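The splice-and-replay attack described above is simple enough to sketch. Everything here is hypothetical: invented clip filenames and an invented challenge phrase, just to show why word-level recordings defeat a spoken-phrase check.

```python
# Hedged sketch of the splice-and-replay attack: harvest word-level
# clips of the victim, then concatenate them to answer the bank's
# spoken challenge. All filenames and phrases are invented.
recorded_clips = {
    "my": "clip_017.wav",
    "voice": "clip_042.wav",
    "is": "clip_003.wav",
    "password": "clip_108.wav",
}

def splice_response(challenge_phrase, clips):
    """Return the ordered clip files needed to 'speak' the phrase,
    or None if some word was never captured."""
    words = challenge_phrase.lower().split()
    if not all(w in clips for w in words):
        return None
    return [clips[w] for w in words]

print(splice_response("My voice is password", recorded_clips))
```

The fundamental problem the comment identifies is that a voice is a replayable credential: unlike a password hash or hardware token challenge, the "secret" is broadcast every time the victim speaks.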

[–] BCsven@lemmy.ca 1 points 3 days ago

There are other criteria like account number, etc. But for the voice part they ask you specific questions live. But I get it. That's why I have a hardware key for platforms that support it.

[–] krooklochurm@lemmy.ca 1 points 3 days ago* (last edited 3 days ago) (1 children)

Your bank is run by fucking morons if they've allowed voice verification at any point after ~2 years ago.

It's a kind of profound, logarithmic stupidity that increases exponentially every day as voice cloning technology gets better and better.

They are fucking stupid and don't give one minuscule fuck about the security of your account.

[–] BCsven@lemmy.ca 1 points 2 days ago

Well yes, they are a bank. Lol. I moved my regular accounts to a credit union because I was sick of the bank's problems. But I still have a disability retirement fund with the bank because it's a special government account for my child.

[–] Ilixtze@lemmy.ml 4 points 3 days ago (1 children)

I would suggest that AI regulation should affect ALL models. No one should be exempt. I would also love AI regulation that makes it mandatory to tag AI-generated or AI-assisted content as such.

[–] riskable@programming.dev 1 points 3 days ago (1 children)

Before any of that can happen we need some non-ambiguous definitions of what "AI" is.

[–] Ilixtze@lemmy.ml 1 points 1 day ago

Jordan Peterson: "But define AI first"

[–] spit_evil_olive_tips@beehaw.org 3 points 3 days ago (1 children)

upcoming AI legislations around the world

this is so broad that it is impossible to answer.

if you can point to an individual piece of legislation and its actual text (in other words, not just a politician saying "we should regulate such-and-such" but actually writing out the proposed law) then it would be possible to read the text and at least try to figure it out.

[–] ryujin470@fedia.io 1 points 2 days ago

@spit_evil_olive_tips@beehaw.org One example is the EU AI Act. Its requirements for open source models are very lenient, only requiring a summary of training data and disclosure of training details like the computational power used. Proprietary models, on the other hand, are required to implement content filtering, etc.

[–] vegeta@lemmy.dbzer0.com 2 points 3 days ago

That's the neat part, it won't