[–] floofloof@lemmy.ca 35 points 2 days ago* (last edited 2 days ago) (4 children)

Why is a statistical survey bullshit because of your personal view on the matter? Where does the survey imply that transgender, nonbinary and disabled people are the only ones who dislike AI?

The graphic shows that every group has attitudes that are somewhere between completely negative and completely positive. The groups mentioned are just a bit more negative than the others.

[–] Keyboard@lemmy.world 1 points 2 days ago

I'm a cis white autistic girl; I should say a 7. Thanks for sharing, very interesting.

[–] UnderpantsWeevil@lemmy.world -4 points 2 days ago (1 children)

It does feel a bit like the magazine is gunning for the "Don't like AI? What are you, queer?" angle.

[–] NoneOfUrBusiness@fedia.io 8 points 2 days ago (1 children)

The article contains nothing of the sort and I have no idea why you came to that conclusion.

[–] UnderpantsWeevil@lemmy.world 0 points 2 days ago (1 children)

I believe that a future built on AI should account for the people the technology puts at risk.

I've seen various iterations of this column a thousand times before. The underlying message is always "AI is going to get shoved down your throat one way or another, so let's talk about how to make it more palatable."

The author (and I'm assuming there's a human writing this, but it's hardly a given) operates from the assumption that

identities that defy categorization clash with AI systems that are inherently designed to reduce complexity into rigid categories

but fails to consider that the problem is employing a rigid, impersonal, digital tool to engage with a non-uniform human population. The question ultimately being asked is how to get a square peg through a round hole. And while the language is soft and squishy, the conclusions remain as authoritarian and doctrinaire as anything else out of the Silicon Valley playbook.

[–] NoneOfUrBusiness@fedia.io 1 points 2 days ago

This is a reasonable point, but it's also not what you said previously.

[–] NocturnalMorning@lemmy.world -3 points 2 days ago (1 children)

Because it singles people out for no reason. There is absolutely no reason to do a study like this that focuses on marginalized groups. Does this study somehow make these marginalized groups' lives better by putting this information out there? Not a chance.

Research for the sake of doing research is asinine, and it's rampant in academia. We have a publish-or-perish attitude in academia that is so pervasive it's sickening... ask me how I know that (my partner is a professor).

And we all but force people to write papers and try to come up with novelty to justify their existence as professors.

AI is a scourge on this earth in how we use it today. We didn't need a study to tell us that, much less one that singles out a few groups of people who frankly don't need to be singled out any more than they already have been by the Trump administration.

[–] astutemural@midwest.social 24 points 2 days ago* (last edited 2 days ago) (1 children)

I mean, would you not want to do this specifically to see its effects on marginalized groups? That seems like a pretty good reason to me.

[–] NocturnalMorning@lemmy.world 13 points 2 days ago (1 children)

Admittedly, I hadn't read the article. Having now read it, I think the research is actually beneficial, and it's exactly the kind of research I think should be done on AI.

Spoke prematurely based on the headline, go figure...

Props to you for admitting you spoke prematurely