this post was submitted on 01 Jul 2025
2200 points (98.4% liked)

Microblog Memes

[–] PeriodicallyPedantic@lemmy.ca 2 points 7 months ago

I'm not sure that's true. If you look up things like "tokens per kWh" or "tokens per second per watt", you'll find results from people measuring their power usage while running specific models on specific hardware. This is mainly consumer hardware, since it's people looking to run their own AI servers who post about it, but it sets an upper bound.

The AI providers are tight-lipped about how much energy they use for inference and how many tokens they serve per hour.

You can also infer a bit by looking up the power draw of a 4090, then looking at the tokens-per-second performance someone is getting from a particular model on a 4090 (people love posting their tokens-per-second numbers every time a new model comes out), and extrapolating from that.
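The extrapolation described above is just power divided by throughput. A minimal sketch, using purely illustrative numbers (a 4090 drawing roughly 400 W under sustained load and a model generating 50 tokens/s are assumptions, not measurements):

```python
# Back-of-envelope estimate of inference energy cost per token.
# Both inputs are illustrative assumptions, not benchmarked values.
gpu_power_watts = 400      # assumed sustained GPU draw during inference
tokens_per_second = 50     # assumed throughput for some model on that GPU

# Energy per token: watts are joules per second, so W / (tokens/s) = J/token
joules_per_token = gpu_power_watts / tokens_per_second

# Tokens per kWh: 1 kWh = 3.6 million joules
tokens_per_kwh = 3_600_000 / joules_per_token

print(f"{joules_per_token:.1f} J/token")    # 8.0 J/token
print(f"{tokens_per_kwh:,.0f} tokens/kWh")  # 450,000 tokens/kWh
```

Plugging in any published tokens-per-second figure for a given card gives the same kind of upper-bound estimate, since the whole-system draw and datacenter overhead would only shift the numbers, not the method.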