Transgender, nonbinary and disabled people more likely to view AI negatively, study shows
(theconversation.com)
Why is a statistical survey bullshit because of your personal view on the matter? Where does the survey imply that transgender, nonbinary and disabled people are the only ones who dislike AI?
The graphic shows that every group has attitudes that are somewhere between completely negative and completely positive. The groups mentioned are just a bit more negative than the others.
I'm a cis white autistic girl, I should say a 7. Thanks for sharing, very interesting.
It does feel a bit like the magazine is gunning for the "Don't like AI? What are you, queer?" angle.
The article contains nothing of the sort and I have no idea why you came to that conclusion.
I've seen various iterations of this column a thousand times before. The underlying message is always "AI is going to get shoved down your throat one way or another, so let's talk about how to make it more palatable."
The author (and I'm assuming there's a human writing this, but it's hardly a given) operates from the assumption that
but fails to consider that the problem is employing a rigid, impersonal, digital tool to engage with a non-uniform human population. The question ultimately being asked is how to get a square peg through a round hole. And while the language is soft and squishy, the conclusions remain as authoritarian and doctrinaire as anything else out of the Silicon Valley playbook.
This is a reasonable point, but it's also not what you said previously.
The whole thing is done in bad faith to make a correlation that isn't there. It's as if I just conducted a study saying people are always cats: my study doesn't show any actual correlation, but I think I once heard that a cat man exists, so there is potential for further study.
Because it singles people out for no reason. There is absolutely no reason to do a study like this that focuses on marginalized groups. Does this study make these marginalized groups lives better somehow by putting this information out there? Not a chance.
Research for the sake of doing research is asinine, and it's rampant in academia. We have a publish-or-perish attitude in academia that is so pervasive it's sickening... ask me how I know that (my partner is a professor).
And we basically all but force people to write papers and try to come up with novelty to justify their existence as a professor.
AI is a scourge on this earth in how we use it today. We didn't need a study to tell us that, much less one that singles out a few groups of people, who frankly don't need to be singled out any more than they already have been by the Trump administration.
I mean, would you not want to do this specifically to see its effects on marginalized groups? That seems like a pretty good reason to me.
Admittedly, I hadn't read the article. After reading it, I think the research is actually beneficial, and it's exactly the kind of research I think should be done on AI.
Spoke prematurely based on the headline, go figure...
Props to you for admitting you spoke prematurely