this post was submitted on 06 Aug 2025
718 points (98.1% liked)
Showerthoughts
A "Showerthought" is a simple term used to describe the thoughts that pop into your head while you're doing everyday things like taking a shower, driving, or just daydreaming. The most popular seem to be lighthearted clever little truths, hidden in daily life.
Here are some examples to inspire your own showerthoughts:
- Both “200” and “160” are 2 minutes in microwave math
- When you’re a kid, you don’t realize you’re also watching your mom and dad grow up.
- More dreams have been destroyed by alarm clocks than anything else
Rules
- All posts must be showerthoughts
- The entire showerthought must be in the title
- No politics
- If your topic is in a grey area, please phrase it to emphasize the fascinating aspects, not the dramatic aspects. You can do this by avoiding overly politicized terms such as "capitalism" and "communism". If you must make comparisons, you can say something is different without saying something is better/worse.
- A good place for politics is c/politicaldiscussion
- Posts must be original/unique
- Adhere to Lemmy's Code of Conduct and the TOS
If you made it this far, showerthoughts is accepting new mods. This community is generally tame so it's not a lot of work, but having a few more mods would help reports get addressed a little sooner.
What's it like to be a mod? Reports just show up as messages in your Lemmy inbox, and if a different mod has already addressed a report, the message goes away and you never have to worry about it.
founded 2 years ago
you are viewing a single comment's thread
People keep telling us that AI energy use is very low, but at the same time companies keep building more and more giant, power-hungry datacenters. Something simply doesn't add up.
Sure, a small local model can generate text at low power usage, but how useful will that text be, and how many people will actually use it? What I see is people constantly moving to the newest, greatest model, and using it for more and more things, processing more and more tokens. Always more and more.
Each datacenter is built to handle millions of users, so it concentrates all those little requests into very few physical locations.
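A quick back-of-envelope calculation makes the concentration effect concrete. Every number here is an illustrative assumption, not a measurement:

```python
# Back-of-envelope sketch: tiny per-request energy times millions of users.
# All three constants are illustrative assumptions, not real figures.
WH_PER_REQUEST = 0.3        # assumed energy per LLM request, watt-hours
REQUESTS_PER_USER_DAY = 50  # assumed requests per active user per day
USERS = 10_000_000          # assumed users served by one datacenter

daily_kwh = WH_PER_REQUEST * REQUESTS_PER_USER_DAY * USERS / 1000
print(f"{daily_kwh:,.0f} kWh/day")  # each request is tiny; the sum is not
```

Each request costs a fraction of a watt-hour, yet the aggregate at one site lands in the hundreds of megawatt-hours per day under these made-up numbers.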
The tech industry further amplifies things with ambient LLM invocation. A random Google search now triggers an LLM response you never asked for. An LLM-enabled code editor fires off a request every few seconds of typing to drive its autocomplete suggestions, and often it has to submit a new request before the old one has even completed, because the user typed more while the LLM was still chewing on the previous input.
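That requests-outpacing-responses pattern can be shown with a toy simulation. This is not any real editor's implementation, just a sketch with made-up keystroke timings and an assumed 1-second LLM round trip:

```python
# Toy model: keystroke timestamps in seconds. The user types faster than
# an assumed 1.0 s LLM round trip, so per-keystroke requests pile up.
keystrokes = [0.0, 0.2, 0.5, 0.6, 1.4, 1.6, 3.0]
LLM_LATENCY = 1.0  # assumed round-trip time for one completion request

def naive_requests(times):
    # One request per keystroke: most are obsolete before they return.
    return len(times)

def cancelling_requests(times, latency):
    # Cancel the in-flight request whenever a new keystroke arrives
    # before it completes; count only requests that actually finish.
    finished = 0
    for i, t in enumerate(times):
        nxt = times[i + 1] if i + 1 < len(times) else float("inf")
        if nxt - t >= latency:  # request outlives the typing burst
            finished += 1
    return finished

print(naive_requests(keystrokes))                    # 7 requests fired
print(cancelling_requests(keystrokes, LLM_LATENCY))  # 2 ever complete
```

Even with cancellation, the datacenter still spent compute on the requests it started and threw away, which is the amplification being described.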
So each individual LLM invocation may be reasonable, but their impact is concentrated into very few places, and the sheer number of invocations is amplified by a tech industry that's overly aggressive about using them for the sake of 'ambient magic'.