this post was submitted on 31 Oct 2025
87 points (98.9% liked)

askchapo


Ask Hexbear is the place to ask and answer ~~thought-provoking~~ questions.

Rules:

  1. Posts must ask a question.

  2. If the question asked is serious, answer seriously.

  3. Questions where you want to learn more about socialism are allowed, but questions in bad faith are not.

  4. Try !feedback@hexbear.net if you have questions regarding moderation, site policy, the site itself, development, volunteering, or the mod team.


As a general observation, I find that the more right-leaning a person is, the more receptive they tend to be to the use and adoption of "AI". And inversely, the more left-leaning, the more skeptical.

I pin this on the notion that most conservatives hate workers, are happy to see them laid off, etc. Whereas more progressive folks tend to see value in what human beings do.

Moreover, communists like ourselves almost completely dismiss the plagiarism slop machines as utterly misanthropic, not to mention flying in the face of the labour theory of value.

As an anecdote, I work with a conservative guy who puts EVERYTHING through Grok. Almost everything he types/says to his teammates he gets Grok to write for him. Everything he "fact-checks" goes through Grok. He views it as totally impartial, without bias, etc.

On the other hand, I think more critically-minded folks are prone to seeing the inherent bias in these chatbot slop machines, and view them with skepticism in the same way they view all other institutions in society.

Clearly I am generalising a lot here, but has anyone else made the same or similar observation?

[–] Shinji_Ikari@hexbear.net 4 points 1 week ago (1 children)

I'm currently considering trying to use a chatbot to semi-intelligently OCR a PDF, pulling things out of a table and into a CSV, because it's like 400 entries. But then I keep thinking about how I'll have to check over that work, and wondering if it's even worth trying to automate, or if I should just put on headphones with something upbeat and knock it out correctly in an hour or two.

The lack of correctness and the inability to trust it basically makes it useless for anyone who wants to do stuff right.

[–] StinkySocialist@lemmy.ml 1 points 1 week ago (1 children)

I think there are ways to minimize it. My job pays for Gemini, and I frequently use it to OCR serial numbers off scanned-in PDFs. I can check these against records I already have, so there's less chance for bad data to slip through. Maybe use a second LLM to OCR it too and compare the results: line both results up in the same spreadsheet and highlight duplicate values. Anything that's not highlighted is where the LLMs got different results and needs to be double-checked. 🤷 Idk, just a thought
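The compare-two-passes idea above can be sketched in a few lines. This is a hypothetical illustration, not anyone's actual pipeline: `ocr_a` and `ocr_b` are placeholder lists standing in for the serial numbers extracted by two independent LLM/OCR passes, and the list comprehension plays the role of the spreadsheet highlighting — rows where the two passes agree pass through, rows where they disagree get flagged for manual review.

```python
# Placeholder output from two independent OCR passes over the same PDF table.
# (Invented example values; note the O/0 confusion in row 2 of pass A.)
ocr_a = ["SN-1001", "SN-1002", "SN-10O3", "SN-1004"]
ocr_b = ["SN-1001", "SN-1002", "SN-1003", "SN-1004"]

# Flag every row where the two passes disagree; only these rows
# need a human double-check, mirroring the highlight-duplicates trick.
needs_review = [
    (row, a, b)
    for row, (a, b) in enumerate(zip(ocr_a, ocr_b))
    if a != b
]

print(needs_review)  # row 2 differs: "SN-10O3" vs "SN-1003"
```

Of course this only catches errors where the two passes disagree; if both models hallucinate the same plausible value, it slips through, which is why checking against existing records (as above) is still the stronger guard.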

[–] Shinji_Ikari@hexbear.net 3 points 1 week ago (1 children)

For this task in particular, the output would be somewhat foundational to a design, and a believable but incorrect value could incur thousands of dollars in mistakes and time later on, some far harder to debug than others. It's essentially an age-old battle between my brain and interacting with spreadsheets that I just need to get over. It would be cool if you could use LLMs in adversarial form, where they look to prove another LLM wrong or verify output to some 3-4 nines of accuracy, but I have a brain and can do that too.

I've worked on various hard problems that hit the limits of the LLMs pretty quickly. It's frustrating because so much of the information that used to be on the Internet is gone now, what's left can't be found due to how bad search engines have gotten, and even using the LLM as a search engine just pops up the same webpages I've already deemed unhelpful.

[–] StinkySocialist@lemmy.ml 2 points 1 week ago

Damn, well best of luck with that task then. I dread tedious work like that.

I definitely agree about search engines. I miss old Google 😭