submitted 1 week ago by neme@lemm.ee to c/opensource@programming.dev

I know people are gonna freak out about the AI part in this.

But as a person with hearing difficulties this would be revolutionary. So much shit I usually just can't watch because OpenSubtitles doesn't have any subtitles for it.

[-] kautau@lemmy.world 89 points 1 week ago* (last edited 1 week ago)

The most important part is that it’s a local ~~LLM~~ model running on your machine. The problem with AI is less about LLMs themselves, and more about their control and application by unethical companies and governments in a world driven by profit and power. And it’s none of those things, it’s just some open source code running on your device. So that’s cool and good.

[-] technomad@slrpnk.net 35 points 1 week ago

Also the incessant amounts of power/energy that they consume.

[-] jsomae@lemmy.ml 19 points 1 week ago

Running an LLM locally takes less power than playing a video game.

[-] vividspecter@lemm.ee 10 points 1 week ago

The training of the models themselves also consumes a lot of power.

[-] jonjuan@programming.dev 0 points 2 days ago

They are using open source models that have already been trained, so no extra energy is going into training.

[-] vividspecter@lemm.ee 1 points 1 day ago

Of course, I mean the training of the original models that the feature depends on. That cost isn't caused by VLC itself.

[-] mormund@feddit.org 38 points 1 week ago

Yeah, transcription is one of the only good uses for LLMs imo. Of course they can still produce nonsense, but bad subtitles are better than none at all.

[-] hushable@lemmy.world 19 points 1 week ago

Indeed, YouTube has had auto-generated subtitles for a while now, and they are far from perfect, yet I still find them useful.

[-] TheImpressiveX@lemm.ee 74 points 1 week ago

Et tu, Brute?

VLC automatic subtitles generation and translation based on local and open source AI models running on your machine working offline, and supporting numerous languages!

Oh, so it's basically like YouTube's auto-generated subtitles. Never mind.

[-] neme@lemm.ee 57 points 1 week ago

Hopefully better than YouTube's; those are often pretty bad, especially for non-English videos.

They are terrible.

[-] moosetwin@lemmy.dbzer0.com 16 points 1 week ago

YouTube's removal of community captions was the first time I really started to hate YouTube's management; they removed an accessibility feature for no good reason, making my experience significantly worse. I still haven't found a replacement for it (at least, one that actually works).

[-] moosetwin@lemmy.dbzer0.com 17 points 1 week ago

and if you are forced to use the auto-generated ones remember no [__] swearing either! as we all know disabled people are small children who need to be coddled!

[-] wazzupdog@lemmy.blahaj.zone 15 points 1 week ago

They're awful for English videos too, IMO. For anyone with any kind of accent (read: literally anyone whose accent differs from the team that developed the auto-captioning), it makes egregious errors; it's exceptionally bad with Australian, New Zealand, English, Irish, Scottish, Southern US, and North Eastern US accents. In my experience it's nigh unusable.

[-] Swedneck@discuss.tchncs.de 1 points 4 days ago

ELEVUHN
ELEVUHN

[-] MoSal@lemm.ee 8 points 1 week ago

I've been working on something similar-ish on and off.

There are three (good) solutions involving open-source models that I came across:

  • KenLM/STT
  • DeepSpeech
  • Vosk

Vosk has the best models, but they are large. You can't use the gigaspeech model, for example (which is useful even with non-US English), to live-generate subs on many devices because of the memory requirements. So my guess would be that whatever VLC provides will probably suck to an extent, because it will have to be fast/lightweight enough to run everywhere.

What also sets vosk-api apart is that you can ask it to provide multiple alternatives (10 is usually used).

One core idea in my tool is to combine all alternatives into one text. So suppose the model predicts text to be either "... still he ..." or "... silly ...". My tool can give you "... (still he|silly) ..." instead of 50/50 chancing it.
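
A minimal sketch of that merging idea in Python, assuming two alternative transcripts are already in hand (e.g. from vosk-api with `SetMaxAlternatives`); `merge_alternatives` is a hypothetical helper for illustration, not the actual tool:

```python
import difflib

def merge_alternatives(a: str, b: str) -> str:
    """Merge two transcript alternatives into one string, marking
    divergent spans as "(left|right)" instead of picking one."""
    aw, bw = a.split(), b.split()
    out = []
    # Word-level alignment of the two alternatives
    for op, i1, i2, j1, j2 in difflib.SequenceMatcher(None, aw, bw).get_opcodes():
        if op == "equal":
            out.extend(aw[i1:i2])
        else:
            out.append(f"({' '.join(aw[i1:i2])}|{' '.join(bw[j1:j2])})")
    return " ".join(out)

print(merge_alternatives("still he went home", "silly went home"))
# -> (still he|silly) went home
```

Folding in all ten alternatives would mean applying the merge pairwise, or doing a proper multi-sequence alignment.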

[-] GenderNeutralBro@lemmy.sdf.org 15 points 1 week ago

In my experiments, the Whisper models I can run locally are comparable to YouTube's — which is to say, not production-quality but certainly better than nothing.

I've also had some success cleaning up the output with a modest LLM. I suspect the VLC folks could do a good job with this, though I'm put off by the mention of cloud services. Depends on how they implement it.

[-] Evil_Shrubbery@lemm.ee 49 points 1 week ago

All hail the peak humanity levels of VLC devs.

FOSS FTW

[-] cupcakezealot@lemmy.blahaj.zone 40 points 1 week ago

accessibility is honestly the first good use of ai. i hope they can find a way to make them better than youtube's automatic captions though.

[-] HK65@sopuli.xyz 14 points 1 week ago

There are other good uses of AI. Medicine. Genetics. Research, even into humanities like history.

The problem has always been the grifters who insist on calling any program more complicated than adding two numbers "AI", trying to shove random technologies into random products just to further their cancerous sales shell game.

The problem is mostly CEOs and salespeople thinking they are software engineers and scientists.

[-] yonder@sh.itjust.works 10 points 1 week ago

I know Jeff Geerling on YouTube uses OpenAI's Whisper to generate captions for his videos instead of relying on YouTube's. Apparently they are much better, being nearly flawless. My guess is that Google wants to minimize the compute they use when processing videos to save money.
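
For anyone curious, a rough sketch of that workflow, assuming the open source `openai-whisper` Python package; the file names and model size are placeholders:

```python
import whisper

model = whisper.load_model("small")        # bigger models are more accurate, but slower
result = model.transcribe("episode.mp3")   # returns text plus timestamped segments

def srt_time(t: float) -> str:
    """Format seconds as an SRT timestamp (HH:MM:SS,mmm)."""
    h, rem = divmod(int(t), 3600)
    m, s = divmod(rem, 60)
    return f"{h:02}:{m:02}:{s:02},{int(t % 1 * 1000):03}"

# Write a standard .srt file from the recognized segments
with open("episode.srt", "w") as f:
    for i, seg in enumerate(result["segments"], start=1):
        f.write(f"{i}\n{srt_time(seg['start'])} --> {srt_time(seg['end'])}\n{seg['text'].strip()}\n\n")
```

The timestamped segments are the valuable part: even an imperfect transcript with accurate timing removes most of the manual subtitling work.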

[-] jol@discuss.tchncs.de 9 points 1 week ago

The app Be My Eyes pivoted from crowd-sourced assistance for the blind to using AI, and it's just fantastic. AI is truly helping lots of people in certain applications.

[-] pastaPersona@lemmy.world 29 points 1 week ago

I know AI has some PR issues at the moment but I can’t see how this could possibly be interpreted as a net negative here.

In most cases, people will go for (manually) written subtitles rather than autogenerated ones, so the use case here would most often be in cases where better, human-created subtitles aren't available.

I just can’t see AI / autogenerated subtitles of any kind taking jobs from humans because they will always be worse/less accurate in some way.

[-] x00z@lemmy.world 17 points 1 week ago

Autogenerated subtitles are pretty awesome for subtitle editors I'd imagine.

[-] vrighter@discuss.tchncs.de 22 points 1 week ago

even if they get the words wrong but the timestamps right, it'd still save a lot of time

[-] glimse@lemmy.world 8 points 1 week ago

We started doing subtitling near the end of my time as an editor and I had to create the initial English ones (god forbid we give the translation company another couple hundred bucks to do it) and yeah....the timestamps are the hardest part.

I can type at 120 wpm but that's not very helpful when you can only write a sentence at a time

[-] ArgentRaven@lemmy.world 11 points 1 week ago

Yeah this is exactly what we should want from AI. Filling in an immediate need, but also recognizing it won't be as good as a pro translation.

[-] Alice@beehaw.org 27 points 1 week ago

My experience with generated subtitles is that they're awful. Hopefully these are better, but I wish human beings with brains would make them.

[-] lime@feddit.nu 20 points 1 week ago

subtitling by hand takes sooooo fucking long :( people who do it really are heroes. i did community subs on youtube when that was a thing and subtitling + timing a 20 minute video took me six or seven hours, even with tools that suggested text and helped align it to sound. your brain instantly notices something is off if the subs are unaligned.

[-] Alice@beehaw.org 14 points 1 week ago

Oh shit, I knew it was tedious but it sounds like I seriously underestimated how long it takes. Good to know, and thanks for all you've done.

Sounds to me like big YouTubers should pay subtitlers, but that's still a small fraction of audio/video content in existence. So yeah, I guess a better wish would be for the tech to improve. Hopefully it's on the right track.

[-] mhague@lemmy.world 25 points 1 week ago

Solving problems related to accessibility is a worthy goal.

[-] nossaquesapao@lemmy.eco.br 18 points 1 week ago

It's nice to see a good application of ai. I hope my low end stuff will be able to run it.

[-] moosetwin@lemmy.dbzer0.com 15 points 1 week ago

I don't mind the idea, but I would be curious where the training data comes from. You can't just train them off of the user's (unsubtitled) videos, because you need subtitles to know if the output is right or wrong. I checked their twitter post, but it didn't seem to help.

[-] leftytighty@slrpnk.net 15 points 1 week ago

subtitles aren't a unique dataset, it's just audio to text

[-] nova_ad_vitum@lemmy.ca 12 points 1 week ago

They may have to give it some special training to be able to understand audio mixed by the Chris Nolan school of wtf are they saying.

[-] Warl0k3@lemmy.world 9 points 1 week ago

I hope they're using Open Subtitles, or one of the many academic Speech To Text datasets that exist.

[-] Feathercrown@lemmy.world 11 points 1 week ago

And yet they still can't seek backwards

[-] OsrsNeedsF2P@lemmy.ml 26 points 1 week ago

Iirc this is because of how they've optimized the file reading process; it genuinely might be more work to add efficient frame-by-frame backwards seeking than this AI subtitle feature.

That said, jfc please just add backwards seeking. It is so painful to use VLC for reviewing footage. I don't care how "inefficient" it is, my computer can handle any operation on a 100mb file.

[-] Feathercrown@lemmy.world 10 points 1 week ago

If you have time to read the issue thread about it, it's infuriating. There are multiple viable suggestions that get dismissed because they fail in certain edge cases, even though no method at all could handle those cases and they could simply fail gracefully.

[-] stevestevesteve@lemmy.world 7 points 1 week ago

That kind of attitude in development drives me absolutely insane. See also: support for DHCPv6 in Android. There's a thread that has been raging for, I think, over a decade now.

[-] DepressedMan@reddthat.com 10 points 1 week ago

Perhaps we could also get a built-in AI tool for automatic subtitle synchronization?

[-] clutchtwopointzero@lemmy.world 10 points 1 week ago

I am still waiting for seek previews

[-] r_deckard@lemmy.world 8 points 1 week ago

I've been waiting for ~~this~~ break-free playback for a long time. Just play Dark Side of the Moon without breaks in between tracks. Surely a single thread could look ahead and see that the next track doesn't need any different codecs launched; it's technically identical to the current track, so there's no need for a break. /rant
