I think it's more of a common cause thing: a lack of curiosity and critical thinking skills make you both more likely to be conservative and more likely to trust AI.
askchapo
Ask Hexbear is the place to ask and answer ~~thought-provoking~~ questions.
Rules:
- Posts must ask a question.
- If the question asked is serious, answer seriously.
- Questions where you want to learn more about socialism are allowed, but questions in bad faith are not.
- Try !feedback@hexbear.net if you have questions about moderation, site policy, the site itself, development, volunteering, or the mod team.
I'd take it a step further, to a fear of curiosity and critical thinking: once you engage with the world, you start seeing how complicated it is, how complex the problems actually are, and how difficult the solutions are.
Conservatives want simplicity, ease. They are nostalgic for the simplicity of childhood. They crave the bliss of ignorance.
Conservativism is basically weaponized Daddy issues and Grok is the new Father figure.
they're kindred spirits, one's a confidently-wrong robot and the other is an LLM


my armchair psychologist take is that it's related to the conservative tendency to struggle with ambiguity/uncertainty. LLMs always have an answer, will assert that their answer is correct, and since it's a computer program, they figure it has to be neutral and giving them accurate output at least 99% of the time.
conservative tendency to struggle with ambiguity/uncertainty.
Reactionaries hating abstract art comes to mind. Or how they don't understand subtext/literary themes (look at how many of them wish they were Tyler Durden, Patrick Bateman, Walter White, or Don Draper). I think a lot of it comes from not wanting their worldview challenged/altered.
AI gives immediate output conforming to their worldview. An abstract painting challenges their understanding of art. Nazis would have loved Grok because they could do what Trump/Musk do with Grok.
the conservative tendency to struggle with ambiguity/uncertainty
One example that comes to mind was telling a conservative former friend about a dream I had, in which a human and an alien in a first contact situation created a game together, which they then used to determine who would get a resource that could only save one of their species, both of which were threatened with extinction. The dream ended with the human winning, but the victory felt tragic and hollow because these two species had enough in common to create and play a game together, yet one of them would die through nobody's fault. I told him the dream was emotionally striking to me and I thought I might be able to adapt this into a sci-fi short story.
He took the story as an HFY-style endorsement of alien genocide and was baffled by my attempts to explain that it was meant to be a bitter and ironic tragedy.
I also mentioned to this same friend that I thought it might be an interesting angle for a Star Wars story to be told from the view of an average grunt about how terrifying it would be to have their unit come under attack by a Force user. He went on a rant about how Force users were Mary Sues because "no one could possibly block a laser because they travel at the speed of light" and assumed that I must be a Star Wars fan (we'd known each other for years and I'd never given any other possible indication that I liked Star Wars).
Look at conservative art. It's already extremely bad, so AI output isn't as jarring. If you think Taylor Sheridan is peak kino you're not gonna notice how many fingies are on the manly cool teevee man.
The right hates artists and academics but want to consume the products of their work. Replacing them with a robot is very appealing to them.
i seriously think it's mostly because of Silicon Valley making the hard right turn and being "adopted" into the greater Conservative Ecosystem
it's taken some effort but somehow they have fused "traditional values" with this all gas no brakes everything AI and tech ideology. i mean just look at Thiel fusing his version of Christianity with transhumanism
it's obviously all for self serving reasons; they want money and the right has all the money now. and since most right wingers are basically unthinking automatons they just go with the flow and the flow says AI is good. boss says it's good so it must be.
plus they hate artists because artists are woke and gay or whatever. but at the top, it's obviously a hatred of workers and humanity in general, which has just filtered down. the average right wing prole doesn't like to think about that part and just ignores it or assumes because they use AI they will not be ground into bio-diesel to fuel some Yarvinist data center.
It’s a bit snotty of me to say this but, yeah, it’s not surprising that the “conservatives” are amenable to a thing that does the thinking for them. It’s more of what they’re used to.
Because they're dumb fucking orangutans. Remember everyone is 12. They're also lazy. There's a shiny new toy that can do thinking for you. The people I know that are super into it are amazed at obvious scam bullshit. New tech that is obvious bullshit, bitcoin, nfts, all meat diet, gluten free diet (while not actually having an allergy), sports betting. It's all slop. It's no surprise they rely on the lying machine. They can be on top of all this shit, but try to point out all the fucked up shit with the world and I get "I don't know man, I don't really pay attention."
I also see AI art all over town on petite bourgeois independent contractor trucks. We all know why they do this.
Even google searches are slop / always were slop. So is wikipedia.
Having someone tell you what position you should have on any issue at the tip of your fingers / how they frame it is an extremely powerful tool for those that want to maintain the status quo
Because conservatism and AI enjoyment are both downstream from being dumb as shit and wanting an authority to think for you.
Yeah, it fits into their need for simple answers. And if something can't be answered simply, they'll invent a simple answer to it.
I think it's also that most conservatives don't give a fuck about art/human creation and will consume whatever slop is fed to them. I've seen some truly horrendous AI-generated halloween-themed garage banner things that are also JPEG as fuck. They just don't care about aesthetics and beauty and soul at all. I'm not sure the average conservative could even pass a Turing test.
Slop appeals to the most braindead and uncreative amongst us
AI is essentially the fresh coil of dog shit on the sidewalk that all of the hitlerites on social media have immediately swarmed on like flies to churn out propaganda with
Yeah, like "AI" has always been ChudTech.
I think they also get a kick out of it because in a sense, AI is a sign they've got what they wanted out of the culture wars. You know how at one time, sushi was considered this fancy new thing and libs would sniff their own farts over it? Yeah, AI is the conservative equivalent. Let me explain. It's this shiny, new, trendy thing that is supposedly embraced by those "in the know", and disliked by dumb rubes. This time, they get to feel like the intellectual cosmopolitan embracing the future and laughing at those uncultured provincials who just don't GET IT!
I wish AI was actually as good as people said it was. If I had a niche question and some advanced Google search could explain any concept in parts? Great! If IDE corrections were so advanced that all the silly mistakes any coder makes were a thing of the past? Awesome!
every "haha you artists are gonna have to become plumbers now and AI is gonna kill woke" person i've met hasn't just been a conservative but a full-blown neo-nazi
Same reason they listen to shitty conservative podcast #162927. A bootlicking machine that constantly validates your bigoted ideology? Of course they love it
There's AI positive people on this site.
And dbzero is probably majority AI positive, and not conservative at all
Too many, in fact.
raises hand
It's a tool that's not going anywhere no matter hard the bubble bursts. It's a way to generate a lot of propaganda quickly as well as endlessly debate chuds without requiring us to spend time getting hurt by their BS. We should figure out how to use this tool to flood the social media sites with our talking points. The right is already doing this, with accounts putting out fake videos playing on the old "welfare queen" tropes...
You're right of course, but wasting your breath here because people have a hard time separating prediction from praise. They don't want to win, only whine.
Yes. I think OP's analysis is more a reflection of the particular online spaces they tend to visit or pay attention to.
I think what's undeniable though is how eagerly the right is adopting AI for propaganda. I think it's also just easier for them to do so because they just make shit up to start with and the slop machine is perfect for that use case.
"more receptive" not "exclusively receptive"
It makes sense when you consider what the technology is being used for and why. They try to appeal a lot to everyday people: "Look, we gave you the ability to create art on the level of professionals, we gave you access to so much information and made it easy to access. People will tell you not to use it because they're jealous. Don't listen to them; they'll tell you the art is bad because they envy you."
The people behind the tech see both incentive for profit, but also a means to become the new cultural establishment. The people who use the kind of tech either do it cause it's just the trend being pushed now, they want to make money or they wanted to learn/create in the past already, but for whatever reason gave up and now see this as a chance to pursue where they once failed.
Mind you, I am a disabled artist and I try to keep my biases to myself when approaching subjects on modern technological developments, but there's no such thing as an unbiased view, so I thought I should mention that this tech has already affected me negatively.
I think part of it is that boomers are really into AI, and boomers tend to be more conservative. Techbros are also really into it, and also tend to be more conservative.
Conservatives are also the people in power, so AI companies and their executives are incentivized to appeal to conservatives, so that they'll get more benefits from the government.
Art, writing, etc. are also more liberal degrees/professions, and they are some of the people most harmed by AI right now. They have a material interest in opposing it.
(talking mostly about image generation here, but I think the general concept also applies to text, "facts" from ai research machines, etc)
I think it's more about a person's orientation towards art than anything else. Conservatives tend to openly hate artists, whether by way of sneering at someone who learns art in college or by openly raging against any artist who has an opinion. That's why people with actual artistic chops who are nevertheless conservative are such fascinating outliers, the average conservative's artistic capability is more on the level of Dry Bones than Ben Garrison.
So for someone with this attitude, image generation achieves everything they want from art - a surface level depiction of {thing} with nothing else attached to it - at scale, speed, and ease. It's a paradigm shift for them, while those who are more critical of art will immediately clock everything about a generated image that is uncanny, cliche, or incongruent.
All of this is correlations, of course, not ironclad laws.
depends, tech bros were always jealous of bohemian and artistic types and they were always right wing, at least since 90s.
i think lefties are overly skeptical of llms themselves. it's fine to blow off your boss with an llm-written letter if the need arises, why bother with thought in work? imagegens are just meme generators to communicate ideas with an image instead of written text, appropriate for comics nation.
talking to people you are on equal basis or care about is absolutely demented however. search is as well, but people can't use google anyway.
It's because the bourgeoise and the petty-bourgeoise are all-in on it.
The working class conservatives are people that have been tricked by these two classes. Their outlook on everything is filtered through the same influencers that have convinced them to be conservatives. When I use the term influencer here I don't just mean online streamers and youtubers, I mean all media sources, their religious influencers, their workplace influencers (boss), their local small business tyrant influencers, cops, and so on and so forth. The ecosystem of influencers surrounding these proles, who have been misled by the ruling classes, is all pro-AI because the ruling class views it as beneficial.
They get their beliefs on AI from the same places they get their conservatism and that ultimately comes from the boug and the petty boug. If they trust them when they say that conservatism is in their best interests, why wouldn't they trust them on everything else they say?
They exist in an alternate information reality and that has to be changed in its entirety.
I've definitely noticed. I find the strength of opinions odd on both sides. Like, conservatives treat AI like it's a genius human who has a PhD in everything, and that's dangerously false, obviously. LLMs make basic spelling and math errors and frequently hallucinate misinformation.
On the other hand, the libs tend to demonize the hell out of the tech. One example is the energy costs. There is a misconception about how much energy it takes to prompt an LLM; doing so has a pretty damn low power cost. I have a 10-year-old desktop I use as a server. It can run an 8-billion-parameter version of Gemini while also streaming my music on Jellyfin, no problem. The reason for the misconception is that there is a huge amount of energy needed to train these things, and having every company and their mother make their own is a huge waste.
My personal opinion is that AI is like a small team of dumbass interns. It's great for grunt work and busy work, and that's about it. For example, one day my boss's boss decided that I needed to update our approved software list with a paragraph description of each and every piece of software listed. 900+ approved pieces of software and 400+ banned. He assigned this to me and one coworker and told us it's urgent... Bullshit, but I need my job. So Google Sheets has a function where you can point to another cell and add a separate prompt for Gemini to fill the cell based on it. Dude, I had that whole list done in a minute. It was like commanding a small army of interns. Did the AI make up incorrect descriptions for like 50 pieces of software? Yes. Does it matter and do I give a flying fuck? No.
If you made it this far, thanks for reading my 2 cents comrade 🫡
The anecdote you pointed out is just executives being executives.
Knowing that LLMs are being used to patch holes that may or may not come off under pressure because workers are so heavily exploited and abused is not a point of confidence in this technology being used properly industry-wide in the States.
Also, 8B isn't really that much and is far, far away from what the AI companies offer. I think models like DeepSeek-OCR will trivialize this at some point, but I think LLMs as a whole will be a comparative nothingburger while the industry treats them like the messiah come again.
I think it's not useful to target the tech itself like liberals do, but to point out how this is part of a larger process of tech monopolies consuming more and more fake capital, which leads to the workers losing in the end. Microtransactions and gambling, freemium, SaaS, "smart" IoT, eroding the right to repair, the Apple-ification of the industry: all fall into the same hole that LLMs are falling into.
Nothing you said is wrong and I mostly agree. I talked about the 8B-parameter one because it can run even on 10-year-old hardware and is just as useful to me as the bigger ones; I think it starts to be diminishing returns. Like you said, tech capitalists think it's the messiah and put too much time, money, and resources into it. But also like you said, that's a problem with them and their view of the tech, not the technology itself.
You're actually wrong about the specifics of energy usage. The vast majority of energy usage of a model comes from its usage, not from its training. But you are right that the energy usage of an individual prompt is relatively small, roughly comparable to 15 or so Google searches.
The problem is when you process billions of prompts every day.
Had to google it to check, but you are right. The sum of all the energy used to prompt a model over its lifetime is usually greater than what's needed to train it to begin with.
I didn't know that, but it makes sense. I meant more that prompting The Thing Once isn't that big of an energy drain, whereas the initial training is: an average of 0.34 watt-hours per prompt, but training GPT-3 cost around 1.3 gigawatt-hours (GWh), with GPT-4 requiring an estimated 62.3 GWh. I see all these memes about how prompting an LLM once is super wasteful, and that's the misconception I was addressing.
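For scale, a quick back-of-the-envelope calculation with the figures quoted in this comment (which are rough estimates, not official numbers) shows how many prompts it takes before cumulative inference energy overtakes training energy:

```python
# Rough figures from the comment above: estimates, not official numbers.
WH_PER_PROMPT = 0.34       # average watt-hours per prompt
GPT3_TRAINING_WH = 1.3e9   # ~1.3 GWh to train GPT-3
GPT4_TRAINING_WH = 62.3e9  # ~62.3 GWh estimated for GPT-4

# Prompt count at which total inference energy matches training energy
gpt3_breakeven = GPT3_TRAINING_WH / WH_PER_PROMPT  # ~3.8 billion prompts
gpt4_breakeven = GPT4_TRAINING_WH / WH_PER_PROMPT  # ~183 billion prompts

print(f"{gpt3_breakeven:.2e}")  # 3.82e+09
print(f"{gpt4_breakeven:.2e}")  # 1.83e+11
```

Which is how both camps can be half right: one prompt is genuinely cheap, but at billions of prompts a day a deployed model can pass its own training cost within days.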
On the other hand the libs tend to demonize the hell out of the tech. One example is the energy costs. There is a misconception about how much energy it takes to prompt an llm.
I do wonder, how many prompts does it take to get what you want? And how many people input prompts in the same way I click a "clicky pen" when I get my hands on one and filled with nervous energy?
You and another commenter have good points about the bigger models and how many prompts users hit them with. I think it's diminishing returns after about 8 billion parameters, and you can run those ones on old hardware. My home server is a 10-year-old desktop. It cost $200 to buy used last year, and I didn't notice the energy costs. Me and my wife try to use it instead of an online one for anything we can. It probably only gets prompted about 10 times a week between the two of us.
🤔 I actually have an energy meter thing I could plug the server into. I could do 100 prompts and tell you how much energy it ate for the day. Anybody interested?
me catching up on my inbox
Oh yeah! ... somewhere in the many drawers of gadgets and gizmos there is a Kill-A-Watt meter in my house that I was wanting to use for a project...
I'm currently considering trying to use a chatbot to semi-intelligently ocr a PDF to pull things out of a table and into a csv because it's like 400 entries, but then I keep thinking about how I'll have to check over that work and wondering if it's even worth trying to automate or if I should put on headphones with something upbeat and knock it out in an hour or two correctly.
The lack of correctness and the inability to trust it basically makes it useless for anyone who wants to do stuff right.
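For what it's worth, a deterministic extractor is often worth trying before a chatbot for a table-to-CSV job like this, since its failures are at least reproducible. A minimal sketch: the helper below is plain stdlib, and the commented-out part assumes the third-party pdfplumber library and a hypothetical file name.

```python
import csv
import io

def rows_to_csv(rows):
    """Serialize a list of table rows (each a list of cell strings) to CSV text."""
    buf = io.StringIO()
    csv.writer(buf).writerows(rows)
    return buf.getvalue()

# With pdfplumber (third-party, `pip install pdfplumber`), pulling the table
# out of the PDF might look like this (file name is hypothetical):
#
#   import pdfplumber
#   with pdfplumber.open("entries.pdf") as pdf:
#       rows = [row for page in pdf.pages
#               for row in (page.extract_table() or [])]
#   with open("entries.csv", "w", newline="") as f:
#       f.write(rows_to_csv(rows))

print(rows_to_csv([["name", "version"], ["foo", "1.2"]]))
```

If the table's ruling lines are clean, this gets all 400 entries in one shot and the only checking left is spot-checking a few rows, rather than auditing every cell of a model's output.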
Yes. Short answer: progressive people are less stupid than conservatives and tend to care more about understanding what's true vs. not true.
Yes
I know for a fact the gooner contingent is into ai because for years they've promised themselves robot wives were going to be a thing very soon.