this post was submitted on 11 Feb 2026
503 points (97.7% liked)

memes

[–] sp3ctr4l@lemmy.dbzer0.com 5 points 1 day ago* (last edited 1 day ago) (2 children)

I mean, you can run an LLM locally, it's not that hard.

And you can run such a machine off of solar power, if you have an energy-efficient setup.

It is possible to use this tech in a way that is not horrendously evil, and instead merely somewhat questionable, lol.

Hell, I guess you could arguably literally warm a room of your home with your conversations.

[–] Wildmimic@anarchist.nexus 4 points 1 day ago (2 children)

I run my LLM locally, and I still have to turn the heating on because it doesn't draw enough power. A high-end card is normally rated for about 300W - and it's only running in short bursts to answer questions. So if you're really pushing it, over time you will probably reach around 150W/h - that's not enough at all. You would for sure use more power playing a game built on Unreal Engine 5.

Power consumption of LLMs is a lot lower than people think. And running one in a data center will surely be more energy-efficient than my aging AM4 platform.
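As a rough sketch of that comparison - the duty cycles and session length below are illustrative assumptions, not measurements:

```python
# Burst-y LLM inference vs. sustained gaming on the same 300 W card.
card_w = 300.0      # rated draw of a high-end GPU, watts
llm_duty = 0.1      # LLM active ~10% of the time (short answer bursts)
game_duty = 0.9     # a UE5 game pins the GPU almost continuously
hours = 2.0         # length of the session

llm_wh = card_w * llm_duty * hours     # 60.0 Wh over the evening
game_wh = card_w * game_duty * hours   # 540.0 Wh - 9x the LLM
print(llm_wh, game_wh)
```

Same card, wildly different energy use, purely because of how much of the time the GPU is actually loaded.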

[–] sp3ctr4l@lemmy.dbzer0.com 2 points 1 day ago* (last edited 1 day ago) (1 children)

I run mine on a Steam Deck.

Fairly low power draw on that lol.

Though I'm using it as a coding assistant... not a digital girlfriend.

[–] Wildmimic@anarchist.nexus 3 points 16 hours ago (1 children)

Mine is not really a girlfriend, it's more like a platonic, ADHD-riddled mentor helping me out with RegEx, Bash scripts and Python. My coding experience is decades old now, and I love how easily you can integrate programming into the everyday usage of a PC on Linux - I used Windows for so long, where all of this is abstracted away; this feels much more like I am in control.

My Steam Deck doesn't run an LLM, but it has 2.5 TB of storage in total and is transparent. It's wild that you can run an LLM on it - which model do you use?

[–] sp3ctr4l@lemmy.dbzer0.com 2 points 12 hours ago* (last edited 12 hours ago)

Qwen3, the 8B-parameter model, seems to be the most generally comprehensive model I can run on it, via the Alpaca flatpak.

(Though I should note that Alpaca just recently revamped how it works internally, and it currently has a few bugs resulting from that, which its dev is working out.)

It's not fast in terms of a realtime back-and-forth conversation, but it is pretty good at a lot of things, at least up to the conclusion of its training data set. So it works fairly well if you describe a scenario to it and then ask it to mock up, like you say, a complex regex term, or a moderately complex Bash or Python file.

You can also say: hey, I have a semi-thought-out idea for an app or feature, or just a fairly complex function - outline a number of possible specific methods or mathematical algorithms we might be able to use to achieve this - and it'll mock out a project outline, and then you can have it develop the smaller components singly... sometimes this works, sometimes it makes syntax or conceptual or logical errors.

It also generally works for refactoring a single script toward being either more modular or more monolithic, but when you have it try to consider how to refactor a complex project of many scripts, you'll basically exceed its capacity to keep everything straight.

If you want a snappier though less comprehensive model, 3B-parameter models are a good deal quicker - they'd probably be what you want for, like, a relationship with a sycophant/airhead/confidently incorrect person, lol.

[–] Sadbutdru@sopuli.xyz 1 points 23 hours ago (1 children)
[–] Wildmimic@anarchist.nexus 0 points 17 hours ago (1 children)

It's Wh, of course - I don't know why, but my brain always reads it as watts per hour, even though it's watts times hours.
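For the record, the conversion works out like this - the 300 W draw and 30 minutes of total inference time are assumed numbers for illustration:

```python
# Energy (Wh) = power (W) * time (h): watt-hours are watts TIMES hours.
power_w = 300.0       # draw while the card is actually inferencing
hours_active = 0.5    # total burst time across a session
energy_wh = power_w * hours_active   # 150.0 Wh
energy_kwh = energy_wh / 1000.0      # 0.15 kWh
cost_usd = energy_kwh * 0.15         # at an assumed $0.15/kWh tariff
print(energy_wh, energy_kwh, round(cost_usd, 4))
```

So a heavy evening of local inference costs a couple of cents of electricity.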

[–] ulterno@programming.dev 0 points 3 hours ago

Perhaps you need to tweak some of your weights? :P

[–] nandeEbisu@lemmy.world 1 points 22 hours ago

As far as energy goes, it's a matter of degree. LLMs are mainly bad emissions-wise because of the sheer volume of calls being made. If you're running one on your own GPU, you could just as well have been playing a game or doing something similarly emitting.

The issue is more image generation models, which are about 1000 times worse: https://www.technologyreview.com/2023/12/01/1084189/making-an-image-with-generative-ai-uses-as-much-energy-as-charging-your-phone/

Original Paper: https://arxiv.org/pdf/2311.16863

A moderately sized text-to-text model that you would run locally emits about 10 g of carbon per 1000 inferences, which is like driving a car about 1/40th of a mile. Even assuming your model is running in some kind of agentic loop, at maybe 5 inferences per actual response that reaches you (though it could be dozens depending on the architecture), that's 10 g of carbon per 200 messages to your model, which is at least 2-3 sessions on the heavy end, I would think. Use it that much every day for a year and it's equivalent to driving roughly 9 miles (365 × 1/40th of a mile).
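Spelling that arithmetic out - the per-inference and per-mile figures are the ballpark numbers above, and the inferences-per-message and daily message count are assumptions:

```python
# Yearly CO2 from heavy local text-model use, expressed in car-miles.
g_per_1000_inferences = 10.0   # moderately sized text-to-text model
inferences_per_message = 5.0   # assumed agentic-loop overhead
car_g_per_mile = 400.0         # implied by "10 g == 1/40th of a mile"

grams_per_message = g_per_1000_inferences / (1000.0 / inferences_per_message)
daily_messages = 200           # a heavy day: 2-3 long sessions
yearly_grams = grams_per_message * daily_messages * 365   # 3650 g
yearly_miles = yearly_grams / car_g_per_mile              # ~9.1 miles
print(round(yearly_miles, 1))
```

Even doubling every assumption keeps you well under the 10-20 mile ceiling mentioned below.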

Image generation, however, is 1000-1500x that, so just chatting with your GF isn't that bad. Generating images is where it really adds up.

I wouldn't trust these numbers exactly; they're ballpark. There are optimizations that they don't include, and there are a million other variables that could make it more expensive. I doubt it would be more than 10-20 car-miles per year even for really heavy usage, though.