this post was submitted on 16 Jan 2026
83 points (88.1% liked)

Open Source

[–] bizarroland@lemmy.world 42 points 2 weeks ago (4 children)

LLMs are tools. They're not replacements for human creativity. They are not reliable sources of truth. They are interesting tools and toys that you can play with.

So have fun and play with them.

[–] selokichtli@lemmy.ml 12 points 2 weeks ago (3 children)

See, it's not fun for the planet.

[–] HiddenLayer555@lemmy.ml 14 points 2 weeks ago (2 children)

Locally run models use a fraction of the energy. Less than playing a game with heavy graphics.
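As a rough back-of-envelope sketch of that comparison (every number here is an assumption for illustration, not a measurement):

```python
# Back-of-envelope comparison (all figures are rough assumptions):
# - local LLM inference: a consumer GPU drawing ~200 W for ~10 s per reply
# - gaming session: the same GPU drawing ~300 W for 2 hours

llm_wh_per_reply = 200 * 10 / 3600   # watt-hours per LLM reply
gaming_wh = 300 * 2                  # watt-hours for a 2 h gaming session

replies_per_session = gaming_wh / llm_wh_per_reply
print(f"{llm_wh_per_reply:.2f} Wh per reply")            # 0.56 Wh per reply
print(f"~{replies_per_session:.0f} replies per session")  # ~1080 replies per session
```

Under those assumed figures, one evening of heavy gaming costs about as much energy as a thousand local LLM replies.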

[–] selokichtli@lemmy.ml 5 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

Yes, more or less. But the issue is not about running local models; that's fine even if it's only out of curiosity. The issue is shoving so-called AI into every activity with the promise that it will solve most of your everyday problems, or using it for mere entertainment. I'm not against "AI"; I'm against the current attempts to commercialize and monopolize the technology by already-huge companies that will only seek profit, no matter the state of the planet or of non-millionaire people. And this is exactly why even a bubble burst concerns me: the poor are the ones who will truly suffer the consequences of billionaires placing bets from their mansions, with spare palaces to lose.

[–] yogthos@lemmy.ml 3 points 2 weeks ago

The actual problem is the capitalist system of relations. If it's not AI, then it's bitcoin mining, NFTs, or what have you. The AI itself is just a technology, and if it didn't exist, capitalism would find something else to shove down your throat.

[–] m532@lemmygrad.ml 1 points 2 weeks ago

Online models probably use even less energy than local ones, since they will likely be better optimized and run on dedicated hardware.

[–] geolaw@lemmygrad.ml 10 points 2 weeks ago (2 children)

LLMs consume vast amounts of energy and fresh water and release lots of carbon. That is enough for me to not want to "play" with them.

[–] 87Six@lemmy.zip 9 points 2 weeks ago

That's only because they're implemented haphazardly: save as much as possible, produce as fast as possible, and cut every possible corner.

And that's caused by the leadership of these companies. AI in general is okay. LLMs are meh, but I don't see the LLM concept as the devil, the same way shovels weren't the devil during the gold rush.

[–] m532@lemmygrad.ml 2 points 2 weeks ago

I have a solution: it's called China.

They have solar panels, which neither use water nor produce CO2/CH4, so they can handle training the AI (the energy-intensive part).

Then you download the AI from the internet and can use it 100,000 times, and it will use less energy than a washing machine, and neither consume water nor produce CO2/CH4.

[–] Cowbee@lemmy.ml 7 points 2 weeks ago

Well-said. LLMs do have some useful applications, but they cannot replace human creativity nor are they omniscient.

[–] Sunsofold@lemmings.world 2 points 2 weeks ago

Mostly just toys.

If you can't rely on them more (not 'just as much', more) than the people who would otherwise do the task, you can't use them for any important task. And you aren't going to find many tasks that are simultaneously necessary and yet unimportant enough that we can tolerate rolling nat 1s on the probability machine all the time.

[–] Zerush@lemmy.ml 15 points 2 weeks ago (3 children)

LLMs are the future, but we still must learn to use them correctly. The energy problem depends mainly on two things: the use of fossil energy, and the abuse of AI by including it needlessly in everything because of the hype, as a data-logging tool for Big Brother or for biased influencers.

You don't need a 4x4 8 cylinder Pick-up to go 2km to the store to buy bread.

[–] dontblink@feddit.it 14 points 2 weeks ago (1 children)

It's simply another case where we have amazing technology but lack the right ways to use it. That's what our culture does: create amazing tech that can solve lots of human problems, then discard the part that actually solves a problem unless it's also profitable for the individual.

It literally is a problem of people wanting to subject other people to power games. That's not how all societies work, but it's a foundation of ours, and we're playing this game so hard that we've almost broken the console (planet Earth and our own bodies' health).

It's an anthropological problem, not a technological one.

[–] Zerush@lemmy.ml 3 points 2 weeks ago (1 children)

This is the point. We have big advances in tech, physics, medicine, science... thanks to AI. But the first uses we give it are creating memes, reading BS chats, and building it into fridges, or worse, building it into weapons to kill others.

[–] DieserTypMatthias@lemmy.ml 4 points 2 weeks ago (2 children)

You don't need a 4x4 8 cylinder Pick-up to go 2km to the store to buy bread.

In the U.S., yes.

[–] Zerush@lemmy.ml 9 points 2 weeks ago

I was referring to civilised first world countries

[–] HubertManne@piefed.social 2 points 2 weeks ago

no way you could get to the store with only 8 cylinders. what are we? animals!

[–] Tenderizer78@lemmy.ml 4 points 2 weeks ago (2 children)

LLMs in particular don't use that much energy. Image and video generation are the real concerns.

[–] Zerush@lemmy.ml 1 points 2 weeks ago

Well, if one user asks something of an LLM, not many resources are needed, but there are millions of users doing it across thousands of different LLMs. That needs a lot of server power. Anyway, with renewable energy sources that's not the primary problem; the risks lie elsewhere: biased information, deepfakes, privacy, etc., through misuse by corporations and political groups.
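The scale argument can be made concrete with a rough estimate (every number below is an assumption for illustration):

```python
# Rough aggregate estimate: tiny per-query cost times enormous volume.
# Both figures are assumptions, not measurements.
queries_per_day = 1_000_000_000   # hypothetical global daily queries
wh_per_query = 0.3                # assumed energy per hosted-LLM reply

daily_mwh = queries_per_day * wh_per_query / 1_000_000  # Wh -> MWh
print(f"~{daily_mwh:.0f} MWh/day")  # ~300 MWh/day
```

Even a fraction of a watt-hour per query adds up to hundreds of megawatt-hours per day at that assumed volume, which is why the per-query and aggregate views lead to such different conclusions.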

[–] kadu@scribe.disroot.org 15 points 2 weeks ago

We should reject them.

[–] umbrella@lemmy.ml 14 points 2 weeks ago

shit, we should reclaim all tech. it's all fucking ours.

[–] chgxvjh@hexbear.net 11 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

Instead of trying to prevent LLM training on our code, we should be demanding that the models themselves be freed.

You can demand it, but it's not a pragmatic demand as you claim. Open-weight models aren't equivalent to free software; they are much closer to proprietary gratis software. Usually you don't even get access to the training software and the training data, and even if you did, it would take millions in capital to reproduce them.

But the resulting models must be freed. Any model trained on this code must have its weights released under a compatible copyleft license.

You can put whatever you want into your license, but for it to be enforceable it needs to grant the licensee additional rights they don't already have without the license. The theory under which tech companies appear to be operating is that they don't, in fact, need your permission to include your code in their datasets.

block the crawlers, withdraw from centralized forges like GitHub

Moving away from GitHub has been a good idea ever since Microsoft purchased it years ago.

You kind of need to block crawlers, because if you host large projects they will just max out your server's resources, CPU or bandwidth, whichever is the bottleneck.
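A minimal sketch of what that blocking can look like in nginx, keying on the User-Agent header (the agent list and server name are illustrative; real deployments track regularly updated crawler lists, and agents can always lie about their User-Agent):

```nginx
# Map known AI-training crawler User-Agents to a flag.
map $http_user_agent $is_ai_crawler {
    default      0;
    ~*GPTBot     1;
    ~*CCBot      1;
    ~*ClaudeBot  1;
}

server {
    listen 80;
    server_name git.example.org;  # placeholder forge hostname

    # Refuse flagged crawlers before they can hit expensive endpoints.
    if ($is_ai_crawler) {
        return 403;
    }
    # ... normal forge configuration ...
}
```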

GitHub is blocking crawlers too; they have restricted rate limits a lot recently. If you use Nix/NixOS, which fetches a lot of repositories from GitHub, you often can't even finish a build without GitHub credentials nowadays, with how rate-limited GitHub has become.
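For the Nix case specifically, the usual workaround is to give Nix a GitHub token so fetches count against the much higher authenticated rate limit. A sketch (the token value is a placeholder):

```ini
# ~/.config/nix/nix.conf (or /etc/nix/nix.conf)
# Authenticate GitHub fetches to avoid the anonymous rate limit.
access-tokens = github.com=ghp_XXXXXXXXXXXXXXXXXXXX
```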

[–] yogthos@lemmy.ml 11 points 2 weeks ago

This is the correct take. This tech isn't going away, no matter how much whinging people do; the only question is who is going to control it going forward.

[–] DieserTypMatthias@lemmy.ml 10 points 2 weeks ago (1 children)

The problem is not the algorithm; it's the way they're trained. If I made a dataset from sources whose copyright holders exercise their IP rights and then trained an LLM on it, I'd probably go to jail, or just kill myself (or default on my debts to the holders) if they sued for damages.

[–] jackmaoist@hexbear.net 7 points 2 weeks ago (1 children)

I support FOSS LLMs like Qwen just because of that. China doesn't care about IP bullshit and their open source models are great.

[–] yogthos@lemmy.ml 3 points 2 weeks ago

Exactly, open models are basically unlocking knowledge for everyone that's been gated by copyright holders, and that's a good thing.

[–] RIotingPacifist@lemmy.world 8 points 2 weeks ago (2 children)

Seems like the easiest fix is to consider the output of LLMs to be derivative works of the training data.

No need for a new license: if you're training on GPL code, the code produced by the LLM is GPL.

[–] jbloggs777@discuss.tchncs.de 3 points 2 weeks ago (1 children)

Let me know if you convince any lawmakers, and I'll show you some lawmakers about to be invited to expensive "business" trips and lunches by lobbyists.

[–] RIotingPacifist@lemmy.world 3 points 2 weeks ago (1 children)

The same can be said of the approach described in the article, the "GPLv4" would be useless unless the resulting weights are considered a derivative product.

A paint manufacturer can't claim copyright on paintings made using that paint.

[–] jbloggs777@discuss.tchncs.de 5 points 2 weeks ago* (last edited 2 weeks ago)

Indeed. I suspect it would need to be framed around national security and national interests to have any realistic chance of success. AI is seen as a necessity for the future of many countries: embrace it, or be steamrolled in the future by those who did. So a soft touch is being taken.

Copyright and licensing uncertainty could hinder that, and the status quo today in many places is to not treat training as copyright infringement (e.g. the US), or to require an explicit opt-out (e.g. the EU). A lack of international agreements means it's all a bit wishy-washy, and hard to prove and enforce.

Things get (only slightly) easier if the material is behind a terms-of-service wall.

[–] Ferk@lemmy.ml 2 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

You are not going to protect abstract ideas using copyright. Essentially, what he's proposing implies turning this "TGPL" into some sort of viral NDA, which is a different category of contract.

It's harder to convince someone that a content-focused license like the GPLv3 also protects abstract ideas than to create a new form of contract/license designed specifically to keep abstract ideas (not just the content itself) from spreading in ways you don't want.

[–] CanadaPlus 4 points 2 weeks ago

How dare you break the jerk! /s

[–] fakasad68@lemmy.ml 3 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

Checking whether a proprietary LLM running in the "cloud" was trained on a piece of TGPL code would probably be harder than checking whether a proprietary binary contains a piece of GPL code, though.
