this post was submitted on 06 Aug 2025
718 points (98.1% liked)

Showerthoughts


In the late 2000s there was a push by a lot of businesses to not print emails, and people used to add a 'Please consider the environment before printing this email.' footer.

Considering how bad LLMs/'AI' are with power consumption and water usage, a new, equally useless email footer tag should be made.

[–] fading_person@lemmy.zip 3 points 4 days ago (1 children)

People keep telling us that AI energy use is very low, but at the same time, companies keep building more and more giant, power-hungry datacenters. Something simply doesn't add up.

Sure, a small local model can generate text at low power usage, but how useful will that text be, and how many people will actually use it? What I see is people constantly moving to the newest, greatest model, and using it for more and more things, processing more and more tokens. Always more and more.

[–] jj4211@lemmy.world 3 points 4 days ago

Each datacenter is built to handle millions of users, so it concentrates all of those little requests into very few physical locations.

The tech industry further amplifies things with ambient LLM invocation. You do a random Google search and it implicitly runs an LLM query you never asked for. When someone uses an LLM-enabled code editor, it makes LLM requests every few seconds of typing to drive the autocomplete suggestions; often it has to submit a new request before the old one has even completed, because the user typed more while the LLM was still chewing on the previous input.
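
To make the "requests every few seconds" concrete, here's a rough sketch of the debounce-and-cancel pattern these editors tend to use. The endpoint URL, the 300 ms debounce window, and the showInlineSuggestion hook are all made up for illustration, not any particular product's API:

```typescript
// Illustrative sketch only (hypothetical endpoint and editor hook):
// roughly how an LLM-enabled editor ends up firing a completion request
// on nearly every pause in typing, aborting the previous request when
// the user has already typed more.

let inFlight: AbortController | null = null;
let debounceTimer: ReturnType<typeof setTimeout> | undefined;

function onUserTyped(documentText: string, cursorOffset: number): void {
  // Restart the debounce window on every keystroke.
  clearTimeout(debounceTimer);
  debounceTimer = setTimeout(
    () => requestCompletion(documentText, cursorOffset),
    300 // assumed debounce window in ms
  );
}

async function requestCompletion(documentText: string, cursorOffset: number): Promise<void> {
  // Cancel the previous request if it hasn't finished yet -- the user has
  // moved on, so its result would be stale anyway.
  inFlight?.abort();
  inFlight = new AbortController();

  try {
    const response = await fetch("https://llm.example.com/v1/complete", { // hypothetical endpoint
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ prefix: documentText.slice(0, cursorOffset) }),
      signal: inFlight.signal,
    });
    const { suggestion } = await response.json();
    showInlineSuggestion(suggestion); // hypothetical editor hook
  } catch (err) {
    // Aborted requests are expected; anything else is a real error.
    if ((err as Error).name !== "AbortError") console.error(err);
  }
}

declare function showInlineSuggestion(text: string): void;
```

Every pause in typing that outlives that window is another round trip to the datacenter, which is how the per-user trickle adds up to the aggregate load above.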

So each LLM invocation may be reasonable on its own, but the impact is concentrated into very few places, and the number of invocations is amplified by the tech industry being overly aggressive about overuse for the sake of 'ambient magic'.