this post was submitted on 16 Jan 2026
65 points (86.5% liked)

Open Source

[–] bizarroland@lemmy.world 40 points 1 month ago (4 children)

LLMs are tools. They're not replacements for human creativity. They are not reliable sources of truth. They are interesting tools and toys that you can play with.

So have fun and play with them.

[–] selokichtli@lemmy.ml 11 points 1 month ago (3 children)

See, it's not fun for the planet.

[–] HiddenLayer555@lemmy.ml 13 points 1 month ago (1 children)

Locally run models use a fraction of the energy. Less than playing a game with heavy graphics.
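A back-of-envelope sketch of that comparison (the wattage and timing figures below are assumptions for illustration, not measurements; real numbers vary by GPU and model):

```python
# Rough back-of-envelope numbers (assumed, not measured)
gpu_power_w = 350          # assumed power draw of a consumer GPU under load
inference_seconds = 30     # assumed time for one local-LLM answer
gaming_hours = 1.0         # one session of a graphics-heavy game

inference_wh = gpu_power_w * inference_seconds / 3600   # watt-hours per answer
gaming_wh = gpu_power_w * gaming_hours                  # watt-hours per session

print(f"one local inference: ~{inference_wh:.1f} Wh")   # ~2.9 Wh
print(f"one hour of gaming:  ~{gaming_wh:.0f} Wh")      # ~350 Wh
```

Under those assumptions a gaming session costs roughly a hundred times more energy than a single local inference, which is the point being made.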

[–] selokichtli@lemmy.ml 5 points 1 month ago* (last edited 1 month ago) (1 children)

Yes, more or less. But the issue is not about running local models; that's fine even if it's only out of curiosity. The issue is shoving so-called AI into every activity with the promise that it will solve most of your everyday problems, or using it for mere entertainment. I'm not against "AI"; I'm against the current commercialization attempts by already-huge companies to monopolize the technology, companies that will only seek profit no matter the cost to the planet and to everyone who isn't a millionaire. And this is exactly why even a bubble burst concerns me: the poor are the ones who will truly suffer the consequences of billionaires placing bets from their mansions with their spare palaces.

[–] geolaw@lemmygrad.ml 9 points 1 month ago (2 children)

LLMs consume vast amounts of energy and fresh water and release lots of carbon. That is enough for me to not want to "play" with them.

[–] 87Six@lemmy.zip 9 points 1 month ago

That's only because they're implemented haphazardly: to save as much as possible, produce as fast as possible, and basically cut every possible corner.

And that's caused entirely by the leadership of these companies. AI in general is okay. LLMs are meh, but I don't see the LLM concept specifically as the devil, the same way shovels weren't the devil during the gold rush.

[–] m532@lemmygrad.ml 2 points 1 month ago

I have a solution: it's called China.

They have solar panels, which neither use water nor produce CO2/CH4, and they can train the AI (the energy-intensive part).

Then you download the AI from the internet and can use it 100,000x, and it will use less energy than a washing machine, and neither consume water nor produce CO2/CH4.

[–] Cowbee@lemmy.ml 3 points 1 month ago

Well-said. LLMs do have some useful applications, but they cannot replace human creativity nor are they omniscient.

[–] Sunsofold@lemmings.world 1 points 1 month ago

Mostly just toys.

If you can't rely on them more (not 'just as much,' more) than the people who would do whatever the task is, you can't use them for any important task, and you aren't going to find a lot of tasks which are simultaneously necessary and yet unimportant enough that we can tolerate rolling nat 1s on the probability machine all the time.

[–] Zerush@lemmy.ml 13 points 1 month ago (3 children)

LLMs are the future, but we still must learn to use them correctly. The energy problem depends mainly on two things: the use of fossil energy, and the abuse of AI by building it into everything without need, because of the hype, whether as a data-logging tool for Big Brother or for biased influencers.

You don't need a 4x4 8 cylinder Pick-up to go 2km to the store to buy bread.

[–] dontblink@feddit.it 10 points 1 month ago (1 children)

It's simply another case where we have amazing technologies but lack the right ways to use them. That's what our culture does: it creates amazing tech that can solve lots of human problems, then discards the part that actually solves a problem unless it's also profitable for the individual.

It's really a problem of people wanting to subject other people to power games. That's not how all societies work, but it is a foundation of ours, and we're playing this game so hard that we've almost broken the console (planet Earth and our own bodies' health).

It's an anthropological problem, not a technological one.

[–] Zerush@lemmy.ml 3 points 1 month ago (1 children)

This is the point. We have big advances in tech, physics, medicine, science... thanks to AI. But the first uses we give it are creating memes, reading BS chats, and building it into fridges, or worse, building it into weapons to kill others.

[–] RIotingPacifist@lemmy.world 3 points 1 month ago (1 children)
[–] Zerush@lemmy.ml 1 points 1 month ago (1 children)

AI in medicine permits the analysis of contagious diseases and the corresponding development of treatments and vaccines in a fraction of the time compared to traditional methods. The discovery of new materials, and research and optimisation in physical, meteorological, and environmental processes, would have been impossible without AI. The positive effects of AI are undeniable. But as was said, what's negative is its implementation: the way people use it like a child with a new toy, because it is fashionable and cool, or the (commercially and/or politically) biased AI from big corporations, with AI built into even a toaster as a selling point.

Artificial intelligence isn't the real problem; human intelligence and ethics are.

[–] RIotingPacifist@lemmy.world 3 points 1 month ago (3 children)

Do you have examples?

Because most of what you are listing is stuff that has been using ML for years (possibly decades when it comes to meteorology) and just slapped "AI" on as a buzzword.

[–] Zerush@lemmy.ml 2 points 1 month ago

AI has existed since the first chess bot. Naturally, due to the limited hardware power of those years, AI applications were very limited. It became prominent with current computing capability, thousands of times more powerful; just look at how PCs have changed over only the last 25-30 years. Even a current low-cost smartphone is way better than a high-end PC from 15 years ago. It's currently a hype, with more than 10,000 AI apps and a competition between developers and big corporations, with users who abuse it with and for crappy results without common sense, as a toy instead of a tool that helps with tasks as it should, and not as a substitute for one's own work and research. That's the reason Bandcamp banned all music made with AI, to protect artists and their work (https://lemmy.ml/post/41786760). As an example of what I mean: it is not the same to use AI to help in a task as it is to write a prompt and let the AI do your work, your painting, your music, your research, and then sell it as your own (mostly without even checking it).

[–] Tenderizer78@lemmy.ml 3 points 1 month ago (1 children)

LLMs in particular don't use that much energy. Image and video generation are the real concerns.

[–] DieserTypMatthias@lemmy.ml 2 points 1 month ago (2 children)

> You don't need a 4x4 8 cylinder Pick-up to go 2km to the store to buy bread.

In the U.S., yes.

[–] Zerush@lemmy.ml 8 points 1 month ago

I was referring to civilised first world countries

[–] HubertManne@piefed.social 2 points 1 month ago

no way you could get to the store with only 8 cylinders. what are we? animals!

[–] kadu@scribe.disroot.org 13 points 1 month ago

We should reject them.

[–] DieserTypMatthias@lemmy.ml 10 points 1 month ago (1 children)

The problem is not the algorithm. The problem is the way they're trained. If I made a dataset from sources whose copyright holders exercise their IP rights and then trained an LLM on it, I'd probably go to jail, or just kill myself (or default on my debts to the holders) if they sued for damages.

[–] jackmaoist@hexbear.net 7 points 1 month ago (1 children)

I support FOSS LLMs like Qwen just because of that. China doesn't care about IP bullshit and their open source models are great.

[–] yogthos@lemmy.ml 2 points 1 month ago

Exactly, open models are basically unlocking knowledge for everyone that's been gated by copyright holders, and that's a good thing.

[–] chgxvjh@hexbear.net 10 points 1 month ago* (last edited 1 month ago) (2 children)

> Instead of trying to prevent LLM training on our code, we should be demanding that the models themselves be freed.

You can demand it, but it's not as pragmatic a demand as you claim. Open-weight models aren't equivalent to free software; they are much closer to proprietary gratis software. Usually you don't even get access to the training software and the training data, and even if you did, it would take millions in capital to reproduce them.

> But the resulting models must be freed. Any model trained on this code must have its weights released under a compatible copyleft license.

You can put whatever you want into your license, but for it to be enforceable it needs to grant the licensee additional rights they don't already have without the license. The theory under which tech companies appear to be operating is that they don't, in fact, need your permission to include your code in their datasets.

> block the crawlers, withdraw from centralized forges like GitHub

Moving away from GitHub has been a good idea ever since Microsoft purchased it years ago.

You kind of need to block crawlers, because if you host large projects they will just max out your server's resources, CPU or bandwidth, whatever the bottleneck is.
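As a first line of defense, a robots.txt can refuse the known AI-training user agents (a sketch: GPTBot, CCBot, and Google-Extended are real published agent names, but the list changes over time, and badly behaved crawlers ignore robots.txt entirely, so real protection usually also needs rate limiting at the web server):

```
# robots.txt — politely refuse known AI training crawlers
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```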

GitHub is blocking crawlers too; they have tightened rate limits a lot recently. If you are using Nix/NixOS, which fetches a lot of repositories from GitHub, you often can't even finish a build without GitHub credentials nowadays, given how rate-limited GitHub has become.
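For the Nix case specifically, the usual workaround is to hand Nix a GitHub token via its `access-tokens` setting so API fetches are authenticated (the token value here is a placeholder):

```
# ~/.config/nix/nix.conf (or /etc/nix/nix.conf)
access-tokens = github.com=ghp_XXXXXXXXXXXXXXXXXXXX
```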

[–] yogthos@lemmy.ml 7 points 1 month ago

This is the correct take. This tech isn't going away, no matter how much whinging people do, the only question is who is going to control it going forward.

[–] RIotingPacifist@lemmy.world 6 points 1 month ago (2 children)

Seems like the easiest fix is to consider the output of LLMs to be a derivative product of the training data.

No need for a new license: if you're training on GPL code, the code produced by the LLM is GPL.

[–] jbloggs777@discuss.tchncs.de 2 points 1 month ago (1 children)

Let me know if you convince any lawmakers, and I'll show you some lawmakers about to be invited to expensive "business" trips and lunches by lobbyists.

[–] RIotingPacifist@lemmy.world 3 points 1 month ago (1 children)

The same can be said of the approach described in the article, the "GPLv4" would be useless unless the resulting weights are considered a derivative product.

A paint manufacturer can't claim copyright on paintings made using that paint.

[–] jbloggs777@discuss.tchncs.de 4 points 1 month ago

Indeed. I suspect it would need to be framed around national security and national interests, to have any realistic chance of success. AI is being seen as a necessity for the future of many countries ... embrace it, or be steamrolled in the future by those who did.

Copyright and licensing uncertainty could hinder that, and the status quo today in many places is to not treat training as copyright infringement (eg. US), or to require an explicit opt-out (eg. EU). A lack of international agreements means it's all a bit wishy washy, and hard to prove and enforce.

Things get (only slightly) easier if the material is behind a terms-of-service wall.

[–] Ferk@lemmy.ml 1 points 1 month ago* (last edited 1 month ago) (2 children)

You are not gonna protect abstract ideas using copyright. Essentially, what he's proposing implies turning this "TGPL" into some sort of viral NDA, which is a different category of contract.

It's harder to convince someone that a content-focused license like the GPLv3 also protects abstract ideas than it is to create a new form of contract/license designed specifically to keep abstract ideas (not just the content itself) from spreading in ways you don't want them to spread.

[–] CanadaPlus@lemmy.sdf.org 3 points 1 month ago

How dare you break the jerk! /s

[–] fakasad68@lemmy.ml 1 points 1 month ago* (last edited 1 month ago) (1 children)

Checking whether a proprietary LLM running in the "cloud" has been trained on a piece of TGPL code would probably be harder than checking whether a proprietary binary contains a piece of GPL code, though.

[–] yogthos@lemmy.ml 1 points 1 month ago

Not necessarily; the models can often be tricked into spilling the beans about what they were trained on.
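One common trick is a verbatim-completion probe: feed the model the first half of a distinctive snippet from your code and see if it reproduces the rest. This is a toy sketch; `likely_memorized` and the stand-in model are hypothetical, and against a real API you'd pass its completion function instead:

```python
def likely_memorized(complete, snippet, prefix_frac=0.5):
    """Heuristic probe: give the model the start of a distinctive snippet
    and check whether it continues it verbatim. `complete` is any
    prompt -> completion callable (a hypothetical model endpoint)."""
    cut = int(len(snippet) * prefix_frac)
    prefix, rest = snippet[:cut], snippet[cut:]
    completion = complete(prefix)
    # Compare only the first 40 characters of the expected continuation
    return completion.strip().startswith(rest.strip()[:40])

# Toy stand-in for a model that memorized the snippet during training:
memorized = "def secret_sauce(x):\n    return (x * 2654435761) % 2**32\n"
fake_model = lambda p: memorized[len(p):] if memorized.startswith(p) else "pass"

print(likely_memorized(fake_model, memorized))  # True for this toy model
```

A negative result proves little (models can paraphrase what they memorized), but a verbatim continuation of a sufficiently unusual snippet is hard to explain without that snippet having been in the training data.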
