Extracts:
Although prior research has often found that conservatives tend to be skeptical of new technologies, these findings reveal a more complex pattern: when AI recommendations appear to reflect a person’s own previous choices, conservatives are more inclined to follow them—driven by a broader preference for consistency and resistance to change.
Across a series of controlled online experiments, participants were asked to imagine or respond to AI-generated recommendations for movies, music, or recipes. In some cases, they were told the recommendation was based on their own past preferences. In others, this detail was omitted or changed—such as when the recommendation was intentionally described as novel or different from what the user usually consumed.
Participants also rated their political ideology on a scale from liberal to conservative. The researchers then analyzed how likely each group was to accept or follow the AI-generated suggestion.
In contrast to the widespread assumption that conservatives are more skeptical of new technologies, the studies consistently found that conservatives were more likely than liberals to accept AI-generated recommendations—but only under specific conditions.
The effect was strongest when participants believed that the AI recommendation was based on their own past behavior, such as previous music choices or favorite movie genres.
The findings shed light on an important psychological factor influencing AI adoption, but they do not suggest that conservatives are universally more enthusiastic about AI. The studies focused on low-stakes, everyday consumption contexts, where familiarity and consistency are appealing. Other research has shown that in high-stakes settings—such as medical decisions or autonomous vehicles—conservatives may remain more cautious or skeptical toward AI.
Something similar crossed my mind, and it doesn't seem too crazy to like what you already like. I guess their main point was that conservatives follow the personalized recommendations more consistently, whereas liberals are evenly split. What I wish they had questioned is why liberals are less trusting. Maybe they're more conscious of environmental impacts, or they see AI for what it is rather than treating it as a magical black box? Would've been good to know.
I will cop to not having fully read the original source, but going off the excerpts posted, my takeaway was that all participants were told they were getting AI-generated recommendations; the difference was whether it was explicitly stated that the recommendations were based on their previous consumption/preferences, or whether they were not told this.
IMO generating recommendations for new content to explore is actually one of the better use cases for an LLM, since reading and distilling tons of organic online conversations from people who have expressed similar interests to me is exactly how I would ideally go about it.
But yeah, back to the main point... maybe there is more to it, but this seems like a relatively neutral finding that can be spun in either direction depending on which echo chamber you're in. For example, imagine r/conservative being all "those silly liberals hate AI so much that they would rather get random recommendations than ones they were told come from AI."