submitted 2 weeks ago by rimu@piefed.social to c/foss@beehaw.org

Our results show that women's contributions tend to be accepted more often than men's [when their gender is hidden]. However, when a woman's gender is identifiable, they are rejected more often. Our results suggest that although women on GitHub may be more competent overall, bias against them exists nonetheless.

[-] rbn@sopuli.xyz 50 points 2 weeks ago* (last edited 2 weeks ago)

Has anyone found the specific acceptance-rate numbers compared to when the gender is unknown?

On ResearchGate I only found the abstract and a chart that doesn't indicate exactly which numbers are shown.

edit:

What I find interesting is that not only women but also men had significantly lower acceptance rates once their gender was disclosed. So either we humans have a really strange bias here, or non-binary coders are the only ones trusted.

edit²:

I'm not sure if I like the method of disclosing people's gender here. Gendered profiles had their full name as their user name and/or a profile photo that indicates a gender.

So it's not only a gendered vs. non-gendered comparison but also an anonymous vs. identified-individual comparison.

And apparently we trust people more if we know more about their skills (insiders rank way higher than outsiders) and less about the person behind them (pseudonym vs. name/photo).

[-] Danterious@lemmy.dbzer0.com 2 points 2 weeks ago* (last edited 2 weeks ago)
[-] rbn@sopuli.xyz 9 points 2 weeks ago

Thank you. Unfortunately, your link doesn't work either - it just leads to the Creative Commons information. Maybe it's an issue with Firefox Mobile and ad blockers. I'll check it out later on a PC.

[-] stoy@lemmy.zip 13 points 2 weeks ago

Looking at their comment history, they seem to always include that link to the CC license page in an attempt to prevent their comments from being used for AI training.

I have no idea whether that actually works or is just a fad, but that was the link.

[-] rbn@sopuli.xyz 8 points 2 weeks ago* (last edited 2 weeks ago)

Thanks for pointing that out.

Seems like a wild idea, as... a) it poisons the data not only for AI but also for real users like me (I swear I'm not a bot :D), and b) if this approach is used more widely, AIs will learn very quickly to identify and ignore such nonsense links, probably much faster than real humans would.

It sounds like a similar concept to captchas, which annoy real people yet fail to block bots.

[-] stoy@lemmy.zip 3 points 2 weeks ago

Yeah, that's my take as well. At first I thought it was completely useless, just like the old Facebook posts where users put legalese-sounding text on their profile trying to reclaim rights they signed away when joining Facebook. But here it's possible that they're running their own instance, so there's no unified EULA, which gives the license thing a bit more credibility.

But as you say, bots will just ignore the links, and no single person would stand a chance against big AI companies and their legal teams. Even if they won, the AI would already have been trained on their data, and they'd get a pittance at most.

[-] AbraNidoran@beehaw.org 5 points 2 weeks ago

Page 15 of the PDF has this chart

(note the vertical axis starts at 60% acceptance rate)

[-] TigrisMorte@kbin.social 2 points 2 weeks ago

60% acceptance rate baseline? Doubt!

[-] Sas@beehaw.org 2 points 2 weeks ago

Their link wasn't to the paper but to the license, to poison any AIs training their models on our posts. Idk if that's actually of any use, though.

this post was submitted on 19 May 2024
129 points (100.0% liked)

Free and Open Source Software


If it's free and open source and it's also software, it can be discussed here. Subcommunity of Technology.

