Kafka, but they make police procedurals about it.
(i.imgur.com)
dank memes
A few years back, the Danish Data Protection Agency found that there were several major errors in the cell phone data the police had been using in criminal cases to establish the location of phones, and by extension their owners. The system would sometimes mix up the locations of the caller and the phone receiving the call.
The agency also criticised the police for having done nothing to check the quality of the data before using it in court. The courts, prosecutors, and defence lawyers also deserve blame, as they never challenged the quality of the data either.
All the non-computer people just assumed that if the computer said something, it must be true. Why wouldn't it be? After all, computers are magic, incorruptible truth-machines.
And even then, cell phone data is often thought to be more precise than it really is. It is usually assumed that a phone connects to the nearest cell phone mast, but that is not always the case, especially in built-up areas where the signal from the closest mast is not always the strongest. This data can be off by as much as 30 km.
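To make that concrete, here's a toy sketch (mine, not from any real system) of why "connected to mast X" doesn't mean "nearest to mast X". With a standard log-distance path loss model plus random shadowing from buildings and terrain, the strongest signal regularly comes from a mast that isn't the closest one. The transmit power, path loss exponent, shadowing spread, and mast positions are all made-up illustrative values:

```python
import math
import random

# Toy model: received power = transmit power - log-distance path loss
# + log-normal shadowing. All parameter values here are illustrative.
def received_power_dbm(tx_dbm, distance_m, exponent=3.5, sigma_db=8.0):
    path_loss_db = 10 * exponent * math.log10(max(distance_m, 1.0))
    return tx_dbm - path_loss_db + random.gauss(0.0, sigma_db)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

random.seed(42)
phone = (0.0, 0.0)
# Hypothetical mast positions, in metres.
masts = {"A": (400.0, 0.0), "B": (900.0, 300.0), "C": (1500.0, -200.0)}

mismatches = 0
trials = 10_000
for _ in range(trials):
    # The phone camps on the strongest measured signal, not the shortest distance.
    serving = max(masts, key=lambda m: received_power_dbm(43.0, dist(phone, masts[m])))
    nearest = min(masts, key=lambda m: dist(phone, masts[m]))
    mismatches += serving != nearest

print(f"serving mast != nearest mast in {mismatches / trials:.1%} of trials")
# Every one of those trials is a case where "the phone was near mast X" is wrong.
```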
Yeah, you can trust a computer if you know what you're doing and know what it can and can't do. Societally, though? Fucking hell no.
Poland had some AI-driven nonsense for its benefits claims for a while. Of course it was always, in the end, a human who had to sign off on the decision. Except basically none of them ever overrode it, and it got so bad they just tossed the whole thing.
>As presented in last year’s report, in May 2014, the Ministry of Labor and Social Policy introduced a simple ADM system that profiles unemployed people and assigns them one of three categories that determine the type of assistance they can obtain from their local labor office. Panoptykon Foundation, and other NGOs critical of the system, have been arguing that the questionnaire used to evaluate the situation of unemployed people, and the system that makes decisions based on it, is discriminatory, lacks transparency, and infringes data protection rights. Once the system makes a decision based on the data, the labor office employee can change the profile selection before approving the decision and ending the process. Yet according to official data, employees modify the system’s selection in less than 1% of cases. This shows that, even if ADM systems are only being used to offer suggestions to humans, they greatly influence the final decision.
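That rubber-stamp dynamic is easy to simulate. Here's a hypothetical sketch (the questionnaire, scoring, and exact override rate are invented stand-ins, not the ministry's actual system) of what "human in the loop" looks like when the human changes the suggestion in under 1% of cases:

```python
import random

PROFILES = ("I", "II", "III")

def system_profile(answers):
    # Invented stand-in for the real scoring questionnaire: a crude score threshold.
    score = sum(answers)
    return PROFILES[min(score // 5, 2)]

def caseworker_decision(suggested, override_rate=0.01):
    # Per the report, employees modify the suggestion in less than 1% of cases.
    if random.random() < override_rate:
        return random.choice([p for p in PROFILES if p != suggested])
    return suggested  # rubber stamp

random.seed(0)
cases = [[random.randint(0, 3) for _ in range(5)] for _ in range(10_000)]
changed = 0
for answers in cases:
    suggested = system_profile(answers)
    final = caseworker_decision(suggested)
    changed += final != suggested

print(f"human changed the machine's decision in {changed / len(cases):.2%} of cases")
```

Whatever the questionnaire actually contains, the arithmetic is the same: at a ~1% override rate, 99% of the "human decisions" are just the system's output with a signature on top.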