I've definitely noticed. I find the strength of opinions odd on both sides. Like, conservatives treat AI as if it's a genius human with a PhD in everything, and that's dangerously false, obviously. LLMs make basic spelling and math errors and frequently hallucinate misinformation.
On the other hand, the libs tend to demonize the hell out of the tech. One example is the energy costs. There's a misconception about how much energy it takes to prompt an LLM; doing so has a pretty damn low power cost. I have a 10-year-old desktop I use as a server. It can run an 8-billion-parameter version of Gemini while it's also streaming my music on Jellyfin, no problem. The reason for the misconception is that a huge amount of energy is needed to train these things, and having every company and their mother make their own is a huge waste.
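For anyone curious what that looks like in practice, here's a minimal sketch using the ollama Python client against a locally hosted model; the model tag is just an example, swap in whatever ~8B model you've pulled:

```python
# Minimal sketch: one prompt against a locally hosted ~8B model via Ollama.
# Assumes the Ollama server is running and the model has been pulled,
# e.g. `ollama pull llama3:8b` (the tag is an example, not a recommendation).
import ollama

response = ollama.chat(
    model="llama3:8b",  # any local 8B-class model works here
    messages=[{"role": "user", "content": "Give me one fun fact about penguins."}],
)
print(response["message"]["content"])
```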
My personal opinion is that AI is like a small team of dumbass interns: great for grunt work and busywork, and that's about it. For example, one day my boss's boss decided that I needed to update our approved-software list with a paragraph description of each and every piece of software listed. 900+ approved pieces of software and 400+ banned. He assigned this to me and one coworker and told us it's urgent... Bullshit, but I need my job. Google Sheets has a function where you can point at another cell and add a separate prompt for Gemini to fill the cell based on it. Dude, I had that whole list done in a minute. It was like commanding a small army of interns. Did the AI make up incorrect descriptions for like 50 pieces of software? Yes. Does it matter and do I give a flying fuck? No.
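If you don't have the Sheets integration, roughly the same trick works as a script. A minimal sketch, assuming the google-generativeai package, a GOOGLE_API_KEY env var, and made-up names.txt / descriptions.csv files; the model name is just an example:

```python
# Minimal sketch: bulk paragraph descriptions for a software list,
# analogous to pointing a Sheets Gemini function at each row.
# names.txt (one product per line) and descriptions.csv are made-up files.
import csv
import os

import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")  # example model name

with open("names.txt") as f, open("descriptions.csv", "w", newline="") as out:
    writer = csv.writer(out)
    for name in (line.strip() for line in f):
        if not name:
            continue
        resp = model.generate_content(
            f"Write a one-paragraph description of the software '{name}'."
        )
        writer.writerow([name, resp.text.strip()])
```

Same caveat as the Sheets version: it will confidently invent descriptions for obscure entries, so spot-check anything that matters.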
If you made it this far, thanks for reading my 2 cents, comrade 🫡
The anecdote you pointed out is just executives being executives.
Knowing that LLMs are being used to patch holes, patches that may or may not come off under pressure, because workers are so heavily exploited and abused, is not a point of confidence in this technology being used properly industry-wide in the States.
Also, 8B isn't really that much and is far, far away from what the AI companies offer. I think models like DeepSeek-OCR will trivialize this at some point, but I think LLMs as a whole will be a comparative nothingburger while the industry treats them like the messiah come again.
I think it's not useful to target the tech itself like liberals do, but to point out how this is part of a larger process of tech monopolies consuming more and more fake capital, which leads to the workers losing in the end. Microtransactions and gambling, freemium, SaaS, "smart" IoT, eroding the right to repair, the Apple-ification of the industry: all fall into the same hole that LLMs are falling into.
Nothing you said is wrong and I mostly agree. I talked about the 8B-parameter one because it can run even on 10-year-old hardware and is just as useful to me as the bigger ones; I think it's diminishing returns after that. Like you said, tech capitalists think it's the messiah and put too much time, money, and resources into it. But also like you said, that's a problem with them and their view of the tech, not the technology itself.
I'm currently considering trying to use a chatbot to semi-intelligently OCR a PDF, to pull things out of a table and into a CSV, because it's like 400 entries. But then I keep thinking about how I'll have to check over that work, and wondering if it's even worth trying to automate or if I should just put on headphones with something upbeat and knock it out correctly in an hour or two.
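Something like the following is what I'm weighing; a sketch only, assuming the google-generativeai package, an API key in the environment, made-up file and model names, and (the part I don't trust) the model actually returning clean CSV:

```python
# Sketch: ask Gemini to pull the PDF's table straight into CSV.
# Assumptions: google-generativeai installed, GOOGLE_API_KEY set;
# file and model names are made up; every row still needs a human check.
import os

import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
pdf = genai.upload_file("scanned_table.pdf")  # hypothetical file name
model = genai.GenerativeModel("gemini-1.5-flash")

resp = model.generate_content(
    [pdf, "Extract every row of the table as CSV, one line per entry. "
          "Output only the CSV, no commentary."]
)
with open("table.csv", "w") as out:
    out.write(resp.text)
```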
The lack of correctness and the inability to trust it basically makes it useless for anyone who wants to do stuff right.
I think there are ways to minimize it. My job pays for Gemini and I frequently use it to OCR serial numbers off scanned-in PDFs. I can check these against records I already have, so there's less chance for bad data to slip through. Maybe use a second LLM to OCR it too and compare the results: line both results up in the same spreadsheet and highlight duplicate values. Anything that's not highlighted is where the LLMs got different results, and those need to be double-checked. 🤷 Idk, just a thought
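Something like this for the comparison step, once each model's output is exported one value per line; a minimal sketch, and the file names are made up:

```python
# Minimal sketch: flag rows where two independent OCR passes disagree.
# Assumes each pass was exported one value per line, in the same row order;
# the file names are hypothetical.
def read_values(path: str) -> list[str]:
    with open(path) as f:
        return [line.strip() for line in f]

pass_a = read_values("gemini_ocr.txt")
pass_b = read_values("second_llm_ocr.txt")

for row, (a, b) in enumerate(zip(pass_a, pass_b), start=1):
    if a != b:
        # Disagreement: this row needs a human double-check.
        print(f"row {row}: {a!r} vs {b!r}  <-- verify manually")
```

Agreement doesn't prove correctness (both models can misread the same blurry digit), but it shrinks the pile you have to eyeball.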
For this task in particular, the values would be somewhat foundational to a design, and a believable but incorrect value could incur thousands of dollars in mistakes and time later on, some far harder to debug than others. It's essentially an age-old battle between my brain and interacting with spreadsheets that I just need to get over. It would be cool if you could use LLMs in adversarial form, where they look to prove another LLM wrong or verify output to some 3-4 nines of accuracy, but I have a brain and can do that too.
I've worked on various hard problems that hit the limits of the LLMs pretty quickly. It's frustrating because so much of the information that used to be on the Internet is gone now, what's left can't be found due to how bad search engines have gotten, and even using the LLM as a search engine just pops up the same webpages I've already deemed unhelpful.
Damn, well best of luck with that task then. I dread tedious work like that.
I definitely agree about search engines. I miss old Google 😭
I do wonder, how many prompts does it take to get what you want? And how many people input prompts the same way I click a clicky pen when I get my hands on one while filled with nervous energy?
You and another commenter have good points about the bigger models and how many prompts users hit them with. I think it's diminishing returns after about 8 billion parameters, and you can run those on old hardware. My home server is a 10-year-old desktop; it cost $200 to buy used last year, and I haven't noticed the energy costs. My wife and I try to use it for anything we'd otherwise use an online one for. It probably only gets prompted about 10 times a week between the two of us.
🤔 I actually have an energy-meter thing I could plug the server into. I could do 100 prompts and tell you how much energy it ate for the day. Anybody interested?
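If anyone wants to sanity-check my math when I post numbers, the arithmetic would just be meter deltas; a rough sketch with placeholder values, assuming I subtract an idle baseline measured over the same duration:

```python
# Rough sketch: marginal energy per prompt from a plug-in power meter.
# Every number below is a placeholder, not a real measurement.
wh_before = 0.0    # meter reading (Wh) at the start of the 100-prompt run
wh_after = 25.0    # meter reading (Wh) at the end of the run
idle_wh = 15.0     # Wh the server burns sitting idle for the same duration
n_prompts = 100

marginal_wh = (wh_after - wh_before - idle_wh) / n_prompts
print(f"~{marginal_wh:.2f} Wh per prompt above idle")  # placeholders give ~0.10
```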
yea do it
You got it, dude. Feel free to ping me if I forget to respond. I'm a very forgetful comrade 🫡
me catching up on my inbox
Oh yeah! ... somewhere in the many drawers of gadgets and gizmos in my house there is a Kill-A-Watt meter that I was wanting to use for a project...
You got drawers for your gadgets and gizmos? I'm jealous; all I got are unsorted canvas bins lmao.
Too many mice and rats around to trust canvas bins.
You're actually wrong about the specifics of the energy usage. The vast majority of a model's energy use comes from serving prompts, not from its training. But you are right that the energy usage of an individual prompt is relatively small, roughly comparable to 15 or so Google searches.
The problem is when you process billions of prompts every day.
Had to google it to check, but you are right. The sum of all the energy used to prompt a model over its lifetime is usually greater than what's needed to train it in the first place.
I didn't know that, but it makes sense. I meant more that prompting The Thing Once isn't that big of an energy drain, whereas the initial training is: an average of 0.34 watt-hours per prompt, versus around 1.3 gigawatt-hours (GWh) to train GPT-3 and an estimated 62.3 GWh for GPT-4. I see all these memes about how prompting an LLM once is super wasteful, and that's the misconception I was addressing.
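Taking those figures at face value, the back-of-the-envelope comparison looks like this:

```python
# Back-of-the-envelope: how many prompts equal one training run?
# Uses the figures quoted above at face value.
wh_per_prompt = 0.34          # average Wh per prompt, as cited above
gpt3_training_wh = 1.3e9      # 1.3 GWh expressed in Wh
gpt4_training_wh = 62.3e9     # 62.3 GWh expressed in Wh

print(f"GPT-3 training ~= {gpt3_training_wh / wh_per_prompt:.1e} prompts")  # ~3.8e9
print(f"GPT-4 training ~= {gpt4_training_wh / wh_per_prompt:.1e} prompts")  # ~1.8e11
```

So a single prompt really is tiny; it's only at the "billions of prompts a day" scale mentioned above that inference blows past the training bill within a few days.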