this post was submitted on 31 Oct 2025
87 points (98.9% liked)

askchapo

23174 readers
48 users here now

Ask Hexbear is the place to ask and answer ~~thought-provoking~~ questions.

Rules:

  1. Posts must ask a question.

  2. If the question asked is serious, answer seriously.

  3. Questions where you want to learn more about socialism are allowed, but questions in bad faith are not.

  4. Try !feedback@hexbear.net if you have questions regarding moderation, site policy, the site itself, development, volunteering, or the mod team.

founded 5 years ago

As a general observation, I find that the more right-leaning a person is, the more they tend to be receptive to the usage and adoption of "AI". And inversely, the more left-leaning, the more skeptical.

I pin this on the notion that most conservatives hate workers, are happy to see them laid off, etc. Whereas more progressive folks tend to see value in what human beings do.

Moreover, communists like ourselves almost completely dismiss the plagiarism slop machines as utterly misanthropic, not to mention flying in the face of the labour theory of value.

As an anecdote, I work with a conservative guy who puts EVERYTHING through Grok. Almost everything he types/says to his teammates he gets Grok to write for him. Everything he "fact-checks" goes through Grok. He views it as totally impartial, without bias, etc.

On the other hand, I think more critically-minded folks are prone to seeing the inherent bias in these chatbot slop machines, and view them with skepticism in the same way they view all other institutions in society.

Clearly I am generalising a lot here, but has anyone else made the same or similar observation?

[–] Xavienth@lemmygrad.ml 4 points 3 weeks ago (1 children)

You're actually wrong about the specifics of energy usage. The vast majority of energy usage of a model comes from its usage, not from its training. But you are right that the energy usage of an individual prompt is relatively small, roughly comparable to 15 or so Google searches.

The problem is when you process billions of prompts every day.

[–] StinkySocialist@lemmy.ml 3 points 2 weeks ago

Had to google it to check, but you are right. The sum of all the energy used to prompt a model over its lifetime is usually greater than what's needed to train it in the first place.

I didn't know that, but it makes sense. I meant more that prompting The Thing Once isn't that big of an energy drain, whereas the initial training is. The average is around 0.34 watt-hours per prompt, but training GPT-3 cost around 1.3 gigawatt-hours (GWh), and GPT-4 required an estimated 62.3 GWh. I see all these memes about how prompting an LLM once is super wasteful, and that's the misconception I was addressing.
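A quick back-of-envelope check of the numbers quoted above (0.34 Wh/prompt, 1.3 GWh for GPT-3, 62.3 GWh estimated for GPT-4 — all rough published estimates, not measurements; the 1e9 prompts/day figure is a hypothetical round number for illustration):

```python
WH_PER_PROMPT = 0.34      # estimated average energy per prompt, watt-hours
GPT3_TRAIN_WH = 1.3e9     # ~1.3 GWh to train GPT-3
GPT4_TRAIN_WH = 62.3e9    # ~62.3 GWh estimated to train GPT-4

def prompts_to_match_training(train_wh: float) -> float:
    """How many prompts it takes for inference energy to equal training energy."""
    return train_wh / WH_PER_PROMPT

print(f"GPT-3: {prompts_to_match_training(GPT3_TRAIN_WH):.2e} prompts")
print(f"GPT-4: {prompts_to_match_training(GPT4_TRAIN_WH):.2e} prompts")

# At a hypothetical 1 billion prompts per day, days until cumulative
# inference energy overtakes the GPT-4 training estimate:
PROMPTS_PER_DAY = 1e9
days = prompts_to_match_training(GPT4_TRAIN_WH) / PROMPTS_PER_DAY
print(f"GPT-4 break-even: {days:.0f} days")
```

So under these assumptions, a few billion prompts already matches GPT-3's training energy, and a large deployment overtakes even GPT-4's estimated training cost within months — which is the point about scale made above: one prompt is cheap, billions per day are not.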