
Racism and Algorithms: Part 2

    Previously, we wrote about the problematic relationship between algorithms and law enforcement, specifically what happens when algorithms are supposed to predict crime before it happens (precrime). Today we will briefly cover another case of racism in algorithms by going over a recent scandal for which the social media giant Twitter has already apologised, admitting it was in the wrong and promising change. But apologising for what?

    For those who are unfamiliar, Twitter is a social media platform where members can post short messages (280 characters or fewer) called ‘Tweets’, which can be accompanied by pictures or short videos. Because Twitter is built around short messages, it is designed so that users can move from one message to the next very quickly. To enable this, when a user posts a tweet with an image that does not fit the preview, Twitter’s algorithm automatically crops it, so that someone scrolling past sees the most relevant part of the picture (as determined by Twitter’s algorithm). Images such as this one:

    Image

    Were cropped like this (credit to Twitter user @NotAFile):

    In every case users tested, when a white face was presented alongside a non-white face, the Twitter algorithm automatically cropped out the non-white face. This was the case even with the fictional Simpsons characters Lenny and Carl.

    Credit to @_jsimonovski

    Or with dogs that had white fur and dogs that had black fur.

    Credit to @MarkEMarkAU

    Twitter maintains that the algorithm was tested for implicit bias (racial and gendered) before release, but admits that these failures show there is more work to do in this respect. That still leaves many unanswered questions: how did the bias come to exist in the first place, why does it consistently seem to prefer white faces and images, even in non-human subjects, and what kind of safety process will be put in place to ensure it does not happen again?
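
    To make the mechanism under discussion a little more concrete: Twitter has described its preview crop as saliency-based, meaning a model scores which parts of an image are likely to draw the eye and the crop keeps the highest-scoring region. The sketch below is not Twitter's model (which is a neural network); it substitutes OpenCV's spectral-residual saliency detector purely to illustrate the general idea, and the image filenames and crop sizes are hypothetical. The failure mode is the same in spirit: whatever the saliency model favours survives the crop, and everything else is silently discarded.

    # A minimal sketch of saliency-based cropping, assuming OpenCV's
    # spectral-residual detector (opencv-contrib-python) as a stand-in for
    # Twitter's actual neural saliency model.
    import cv2
    import numpy as np

    def saliency_crop(image, crop_w=400, crop_h=200):
        """Return a crop_w x crop_h window centred on the most salient pixel."""
        detector = cv2.saliency.StaticSaliencySpectralResidual_create()
        ok, saliency_map = detector.computeSaliency(image)
        if not ok:
            raise RuntimeError("saliency computation failed")

        # Take the single most salient point. A production system would score
        # whole candidate crops, but the effect is the same: regions the model
        # rates as less salient never appear in the preview.
        y, x = np.unravel_index(np.argmax(saliency_map), saliency_map.shape)

        h, w = image.shape[:2]
        left = int(np.clip(x - crop_w // 2, 0, max(w - crop_w, 0)))
        top = int(np.clip(y - crop_h // 2, 0, max(h - crop_h, 0)))
        return image[top:top + crop_h, left:left + crop_w]

    if __name__ == "__main__":
        img = cv2.imread("two_faces.jpg")  # hypothetical test image
        cv2.imwrite("preview.jpg", saliency_crop(img))

    If the saliency model, however it was trained, systematically assigns higher scores to lighter faces, a pipeline like this will reproduce that bias in every preview it generates, which is exactly the pattern users observed.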