this post was submitted on 13 Mar 2024
123 points (100.0% liked)

Technology


Emotion artificial intelligence uses biological signals such as vocal tone, facial expressions and data from wearable devices as well as text and how people use their computers, to detect and predict how someone is feeling. It can be used in the workplace, for hiring, etc. Loss of privacy is just the beginning. Workers are worried about biased AI and the need to perform the ‘right’ expressions and body language for the algorithms.
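
To make the "combining signals" idea concrete, here is a deliberately toy Python sketch of how several noisy proxies might be collapsed into a single label. Every feature name, threshold, and label is invented for illustration and does not reflect any particular vendor's product.

```python
from dataclasses import dataclass

@dataclass
class Signals:
    """Toy stand-ins for the kinds of inputs described above."""
    vocal_pitch_variance: float   # how flat the delivery is (low = flat)
    smile_ratio: float            # fraction of video frames with a detected smile
    heart_rate_bpm: float         # from a wearable
    negative_word_ratio: float    # from typed text

def label_worker(s: Signals) -> str:
    """Naive rule-based scoring. Real products use trained models, but the basic
    shape (many noisy proxies collapsed into one label) is similar."""
    score = 0.0
    score += 1.0 if s.vocal_pitch_variance > 0.3 else -1.0   # flat voice is penalized
    score += 1.0 if s.smile_ratio > 0.2 else -1.0            # not smiling is penalized
    score -= 1.0 if s.heart_rate_bpm > 100 else 0.0          # read as "stressed"
    score -= 1.0 if s.negative_word_ratio > 0.1 else 0.0     # read as "negative"
    return "engaged" if score > 0 else "flagged for review"

if __name__ == "__main__":
    # A perfectly capable worker with a flat affect and a neutral face still gets flagged.
    print(label_worker(Signals(vocal_pitch_variance=0.1, smile_ratio=0.05,
                               heart_rate_bpm=72, negative_word_ratio=0.02)))
```

Even in toy form, the failure mode discussed in the comments is visible: the label rewards performing the "right" expressions, not doing the work.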

all 40 comments
[–] ApathyTree@lemmy.dbzer0.com 66 points 8 months ago* (last edited 8 months ago) (1 children)

This would absolutely flag me for something. I tend to have flat delivery and a low pitch, avoid eye contact, etc., which, combined with other metrics, could easily flag me as not being a happy enough camper.

I mean don’t get me wrong, I’m never going to be happy to be working, but if I showed up that day, I’m also in a good enough headspace to do my job… and if you want to fire me for that… for having stuff going on and not faking vocal patterns…

This is why I don’t want to work anymore. It’s gotten so invasive and fraught if you happen to be anything but a happy bubbly neurotypical fake. And that’s wildly stressful. I’m not a machine, and refuse to be treated like one. If that means I have to die in poverty, well, dump me in the woods, I guess.

This shit should never be legal.

[–] caseyweederman@lemmy.ca 46 points 8 months ago (2 children)

"Make Neurodiverse People Homeless Again"

[–] ranandtoldthat@beehaw.org 11 points 8 months ago (1 children)

This is another sign of what's already going on. It's getting into backlash territory.

[–] drwho@beehaw.org 7 points 8 months ago

"Our smart securicams don't trust you" is the new "You're not a good culture fit."

[–] joelfromaus@aussie.zone 3 points 8 months ago

Bring in Universal Basic Income. Introduce emotion tracking as job KPI. Fire me because I don’t emote per LLM datasets. Live comfortably unemployed.

Best dystopian outcome. A guy can dream, right?

[–] WaterLizard@beehaw.org 44 points 8 months ago* (last edited 8 months ago) (3 children)

This feels like the AI equivalent of men telling female workers to smile more. I'm totally sure that bias wasn't cooked into these algorithms. Honestly, how is this not profiling of neurodiverse individuals?

[–] intensely_human@lemm.ee 14 points 8 months ago (1 children)

It is. The only reason I, an autistic man, can feed myself is because at least some jobs are defined in terms of measurable output. As soon as a human is making a personal judgment about me, they see that I’m like an alien acting human, and they find a way to fire me.

As an Uber driver, or any other job where success is not based on my boss’s judgment, I kick ass.

People have no fucking ability to stand by any of the “diversity” crap they preach. Like, maybe if diversity is so important to you, the fact that my voice sounds slightly tighter than usual one day shouldn’t result in me getting “Does not meet expectations” on my review.

Can you tell I’m a little bitter about this?

Now these kids are trying to organize Uber drivers into some kind of union.

Please no! I only succeed because it’s gig work, because it’s independent contractor stuff. As soon as my benefits become codified, it becomes an employee situation, and I get put under the neurotypical microscope again.

I cannot survive there, and I don’t want to live on state aid. Free money is not a substitute for work.

[–] wathek@discuss.online 2 points 8 months ago (1 children)

Have you tried working for a small- to mid-size company? I got the same vibe in big companies; now I'm in a company of 50 people and they just do not care how weird I am. No middle managers trying to justify their existence; as long as you're doing your best, you're good. I'm sure that doesn't apply to all small companies, but I'd certainly keep it in mind for the future.

[–] intensely_human@lemm.ee 1 points 8 months ago (1 children)

I have tried everything. Uber driver (i.e., a job without a boss) is the only thing where I can succeed.

I’ve worked at companies of every size from 3 to 10,000. My personality creates friction no matter how hard I try to fit in.

I’ve done therapy, ayahuasca ceremonies, yoga, zen, men’s work, neurofeedback training, rolfing, adderall, anxiety meds, microdosing LSD and psilocybin, low carb, keto, raw diet, kung fu, alcohol, marijuana, polyphasic sleep, you name it.

I’m 41 years old. My ability to adapt is declining. There is one little puddle where this fish can swim, and these busybody kids are trying to turn it into a clone of every other dirt pile out there.

I want my independent contractor gig work to just stay as it is. I just want these “Let’s break some eggs and make a big omelet for everybody!” kids to slow their fucking roll and have a little humility instead of trying to save everyone by replacing dignified autonomy with a comfortable spot under Momma’s wing.

[–] wathek@discuss.online 2 points 8 months ago (1 children)

Hmm, well I can't really speak to any of that. But for the greater good, unions are a good thing. I understand that it makes things difficult for you, though.

I'd throw the obvious suggestions at you, but it's kinda hard to get an idea of what could possibly help without knowing you.

I do know that I've had to change my attitude a few times in life to get by, though. I don't think autism should be used as an excuse to not do the hard thing everyone has to do (but that is harder for us).

Right now, the only advice I can give is to try to channel that resentment into motivation to improve yourself. Trying and failing is so much more valuable than just giving up and being angry about it.

But yeah, I do have it easier than most, so maybe it's not my place to say things like that. I do wish you the best though.

I do feel the same way about things being easier alone, though. I would be much happier and more productive doing my own thing. I have a ton of software projects I work on, some even make a bit of money, but running a business seems scary since my administration skills are shit and customers are scary.

[–] Bebo@literature.cafe 5 points 8 months ago

Truly dystopian

[–] Dymonika@beehaw.org 2 points 8 months ago

It totally is. Prepare to be screwed!

[–] serpentofnumbers@lemmy.dbzer0.com 38 points 8 months ago (2 children)

How could someone view this as anything other than dystopian?

[–] MelodiousFunk@slrpnk.net 29 points 8 months ago (2 children)

It's easy if you're a sociopath.

[–] drwho@beehaw.org 9 points 8 months ago (1 children)

Which is what the twenty-first century might be selecting for. They're the only ones that seem like they're doing okay right now.

[–] MelodiousFunk@slrpnk.net 12 points 8 months ago (1 children)

I'd argue that the 20th century already selected for that and we are reaping the "rewards" as we speak.

[–] drwho@beehaw.org 2 points 8 months ago

I find it difficult to disagree with you.

[–] Bebo@literature.cafe 3 points 8 months ago (1 children)

Absolutely. They'll be the winners.

[–] Kichae@lemmy.ca 2 points 8 months ago

Already are

[–] FaceDeer@fedia.io 7 points 8 months ago (2 children)

There are a lot of lonely people without social support groups, or who otherwise may not be willing or able to seek help when they need it. Having an AI that is in a position to go "hey, are you alright?" could be a boon for those folks.

There are also situations where a worker could be a problem or even a danger to their co-workers, and having an AI that's able to pay attention and potentially intervene in those situations could help prevent trauma from happening in the first place.

I'm not saying this is what it'll be used for, just answering your question about how it could be viewed in a non-dystopian way.

[–] DarkThoughts@fedia.io 18 points 8 months ago (1 children)

Having an AI that is in a position to go "hey, are you alright?" could be a boon for those folks.

Oh, thanks, I'm cured. Definitely well worth the constant breach of my privacy.

[–] FaceDeer@fedia.io 2 points 8 months ago (1 children)

Is that not the first step toward providing aid? Would you rather the AI simply issue a prescription or something?

Anyway, as I said, I'm not saying this is how it goes. I'm just presenting a view that's non-dystopian, as was explicitly asked for. The AI could easily be operating under rules that would prevent it from telling anyone else of the trouble it had detected until you give it permission, if that would satisfy your privacy concerns.

[–] DarkThoughts@fedia.io 7 points 8 months ago (1 children)

I'd rather not have an "AI" invade my privacy in general.

The AI could easily be operating under rules that would prevent it from telling anyone else of the trouble it had detected until you give it permission, if that would satisfy your privacy concerns.

What? That's not how those "AIs" work at all. lol

[–] FaceDeer@fedia.io 3 points 8 months ago

I'm not talking about any specific currently-existing AI, I'm talking about a hypothetical one. It is indeed possible to set up an AI in such a way that it wouldn't tell anyone else what's going on. It's just a computer program, it can be set up however one wants it to be set up.

[–] prex@aussie.zone 1 points 8 months ago

I'd rather have coworkers who give at least half a fuck. Just during work hours.

[–] PonyOfWar@pawb.social 26 points 8 months ago

I'm glad to live in a place where that kind of surveillance is already illegal. I recently read that in some places, it's already commonplace to track every single keystroke and mouse click on workers' PCs. That's bad enough even without putting AI and facial recognition into the mix. Truly dystopian.

[–] CanadaPlus@lemmy.sdf.org 24 points 8 months ago (1 children)

Wow, they're trying to make emotional fascism literal.

[–] intensely_human@lemm.ee 4 points 8 months ago (2 children)

What seems to be your boggle, citizen?

[–] Kolanaki@yiffit.net 4 points 8 months ago* (last edited 8 months ago)

I'm the enemy. Because I like to think, I like to read. I'm into freedom of speech, the freedom of choice. I'm the kind of guy who likes to sit in a greasy spoon and wonder - "Gee, should I have the T-bone steak or the jumbo rack of BBQ ribs with the side order of gravy fries?" I want high cholesterol. I wanna eat bacon and butter and buckets of cheese, okay? I wanna smoke a Cuban cigar the size of Cincinnati in the non-smoking section. I wanna run through the streets naked with green Jell-O all over my body reading Playboy magazine. Why? Because I suddenly might feel the need to, okay, pal?

[–] CanadaPlus@lemmy.sdf.org 2 points 8 months ago

As soon as I get a chance I plan to watch that movie.

[–] prex@aussie.zone 24 points 8 months ago
[–] farsinuce@feddit.dk 24 points 8 months ago* (last edited 8 months ago) (2 children)

Interesting timing. The EU has just passed the Artificial Intelligence Act, setting a global precedent for the regulation of AI technologies.

A quick rundown of what it entails and why it might matter in the US:

What is it?

  • The EU AI Act is a comprehensive set of rules aimed at ensuring AI systems are developed and used ethically, with respect for human rights and safety.
  • The Act targets high-risk AI applications, including those in employment, healthcare, and policing, requiring strict compliance with transparency, data governance, and non-discrimination.

Key Takeaways:

  • Prohibited Practices: Certain uses of AI, like behavior manipulation or unfair surveillance, are outright banned.
  • High-Risk Regulation: AI systems with significant implications for people's rights must undergo rigorous assessments.
  • Transparency and Accountability: AI providers must be transparent about how their systems work, particularly when processing personal data.

Why Does This Matter in the US?

  • Brussels Effect: Similar to how GDPR set a new global standard for data protection, the EU AI Act could influence international norms and practices around AI, pushing companies worldwide to adopt higher standards.
  • Cross-Border Impact: Many US companies operate in the EU and will need to comply with these regulations, which might lead them to apply the same standards globally.
  • Potential for US Legislation: The EU's move could catalyze similar regulatory efforts in the US, promoting a broader discussion on the ethical use of AI technologies.

Emotion-tracking AI is covered:

Banned applications: The new rules ban certain AI applications that threaten citizens’ rights, including biometric categorisation systems based on sensitive characteristics and untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases. Emotion recognition in the workplace and schools, social scoring, predictive policing (when it is based solely on profiling a person or assessing their characteristics), and AI that manipulates human behaviour or exploits people’s vulnerabilities will also be forbidden.


Sources:

[–] DarkThoughts@fedia.io 8 points 8 months ago

Definitely a good start. Surveillance (or "tracking") is one of those areas where "AI" is actually dangerous, unlike some of the more overblown topics in the media.

[–] melmi@lemmy.blahaj.zone 2 points 8 months ago* (last edited 8 months ago) (1 children)

Did you use AI to write this? Kinda ironic, don't you think?

[–] farsinuce@feddit.dk 10 points 8 months ago (1 children)

I spent the better part of 45 minutes writing and revising my comment. So thank you sincerely for the praise, since English is not my first language.

[–] melmi@lemmy.blahaj.zone 2 points 8 months ago

If you wrote this yourself, that's even more ironic, because you used the same format that ChatGPT likes to spit out. Humans influence ChatGPT -> ChatGPT influences humans. Everything's come full circle.

I ask though because on your profile you've used ChatGPT to write comments before.

[–] kbal@fedia.io 14 points 8 months ago

At last, the surveillance cameras will know it when I give them the finger.

[–] Bishma@discuss.tchncs.de 10 points 8 months ago

"Sentiment analysis" has been creeping into things like IVR and CRM systems for years now. I've been getting creeped out enough by that, I don't need to be constantly thinking that my work computer is trying to read my emotions.

[–] wathek@discuss.online 6 points 8 months ago

Hmm, how very 1984