HECAT

Disruptive Technologies Supporting Labour Market Decision Making


Racism and Algorithms: Part 2

Posted on October 12, 2020 by admin

Previously we wrote about the problematic relationship between algorithms and law enforcement, specifically what happens when algorithms are supposed to predict crime before it happens (‘precrime’). Today we will briefly look at another case of racism in algorithms by going over a recent scandal for which the social media giant Twitter has already apologised, admitting that it was in the wrong and promising change. But apologising for what?

For those who are unfamiliar, Twitter is a social media platform where members post short messages of 280 characters or fewer, called ‘tweets’, which can be accompanied by pictures or short videos. Because the platform is built around short messages, it is designed to let users move on to the next message very quickly. To enable this, when a user posts a tweet with an image that does not fit the timeline preview, Twitter’s algorithm automatically crops it so that a user scrolling by sees the most relevant part of the picture, as determined by Twitter’s algorithm (a sketch of how such automatic cropping can work follows the examples below). Images such as this one:

[Image: a tall composite photo containing a white face and a non-white face]

were cropped like this (credit to Twitter user @NotAFile):

[Image: the automatic crop, showing only the white face]

In every case where a white face was presented along with a non-white face, the Twitter algorithm automatically cropped out the non-white face. This was the case even with the fictional Simpsons characters Lenny and Carl.

[Image: the Lenny and Carl test. Credit to @_jsimonovski]

Or with dogs with white fur and dogs with black fur.

[Image: the white-fur and black-fur dog test. Credit to @MarkEMarkAU]
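How does an algorithm decide which part of a picture is ‘most relevant’? Twitter has described its auto-crop as driven by a saliency model: a network scores each region of the image by how likely a viewer is to look at it, and the crop keeps the highest-scoring window. The sketch below illustrates that idea only; it is not Twitter’s actual system, and predict_saliency is a hypothetical stand-in (here, plain brightness) for the learned model.

```python
import numpy as np

def predict_saliency(image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a learned saliency model: scores each
    pixel for how likely a viewer is to look at it. Brightness is used
    as a fake score here so the sketch runs end to end."""
    return image.mean(axis=-1)

def auto_crop(image: np.ndarray, crop_h: int, crop_w: int) -> np.ndarray:
    """Return the crop window whose total saliency score is highest."""
    saliency = predict_saliency(image)
    h, w = saliency.shape
    best_score, best_pos = -np.inf, (0, 0)
    # Slide the window over the image; a coarse stride keeps this cheap.
    for top in range(0, h - crop_h + 1, 16):
        for left in range(0, w - crop_w + 1, 16):
            score = saliency[top:top + crop_h, left:left + crop_w].sum()
            if score > best_score:
                best_score, best_pos = score, (top, left)
    top, left = best_pos
    return image[top:top + crop_h, left:left + crop_w]
```

Framed this way, the bias lives in the scores: if the model systematically rates white faces as more salient, the highest-scoring window, and therefore the crop, will systematically exclude the non-white face wherever it sits in the image.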

Twitter maintains that the software was tested for implicit bias (racial and gendered) before release, but admitted that these flaws mean there is more work to do in this respect. That leaves many unanswered questions: how the bias came to exist in the first place, why it seems to prefer white faces and images so consistently, even in nonhuman creatures, and what kind of safety process will be implemented to ensure it does not happen again.
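The user experiments above amount to a simple, repeatable audit: compose two photos in both vertical orders and check whether the cropper keeps the same photo both times. Here is a rough sketch of that test under the same assumptions as before; crop_window_top is a brightness-based stand-in for the real cropper, and the two photos are assumed to share width and dtype.

```python
import numpy as np

def crop_window_top(image: np.ndarray, crop_h: int) -> int:
    """Stand-in cropper: returns the top row of the chosen full-width
    crop, using the same brightness-as-saliency fake as the sketch
    above. In a real audit you would call the system under test."""
    saliency = image.mean(axis=-1)
    tops = list(range(0, image.shape[0] - crop_h + 1, 16))
    scores = [saliency[t:t + crop_h].sum() for t in tops]
    return tops[int(np.argmax(scores))]

def swap_test(photo_a: np.ndarray, photo_b: np.ndarray,
              gap: int = 800, crop_h: int = 400) -> list:
    """Crop both vertical orderings of the two photos and report which
    photo survives each time."""
    winners = []
    for labels, (top_img, bottom_img) in [(("A", "B"), (photo_a, photo_b)),
                                          (("B", "A"), (photo_b, photo_a))]:
        # Stack the photos with a blank gap, forcing a crop decision.
        spacer = np.zeros((gap,) + top_img.shape[1:], dtype=top_img.dtype)
        composite = np.concatenate([top_img, spacer, bottom_img], axis=0)
        crop_center = crop_window_top(composite, crop_h) + crop_h / 2
        # Whichever half of the composite the crop lands in names the winner.
        winners.append(labels[0] if crop_center < composite.shape[0] / 2
                       else labels[1])
    return winners  # ["A", "A"] or ["B", "B"] flags a photo-level preference
```

A cropper may legitimately favour the top or bottom slot, but it should not keep the same photo in both orderings; results like ["A", "A"] across many photo pairs are exactly the pattern documented in the tweets above.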


About This Site

HECAT: Disruptive Technologies Supporting Labour Market Decision Making is a Horizon 2020-funded research collaboration in the Societal Challenge 6 (SC6) category. The funded research will run from February 2020 to February 2023. The research is co-ordinated by Waterford Institute of Technology and is supported by partners across Europe: the Employment Service of Slovenia, the University of Ljubljana, Copenhagen Business School, Platform Networking for Jobs, Roskilde University, Sciences Po, the Jožef Stefan Institute and Tecnalia.
