Facebook Removes Only A Fraction Of Hate Groups Flagged By Activists
Facebook wants its 2 billion-plus users to know it’s making every effort to remove hate speech from its platform, but some advocacy groups and activists want more.
Take, for example, the Southern Poverty Law Center, the nonprofit legal advocacy organization known for its litigation against white supremacist groups and its ongoing identification of hate groups within the United States. Even before ProPublica’s July report on Facebook’s cut-and-dried formula for identifying hate speech, the SPLC had reported more than 200 hate groups active on the social network. Fewer than 10, the organization estimates, have been removed.
Many of these reported, still-live pages revel in “pro-White” rhetoric while simultaneously rejecting their classification as hate groups, sometimes even calling out the SPLC itself. Other remaining content includes the Facebook group associated with the white supremacist organization cited in the manifesto of mass shooter Dylann Roof, who killed nine people at a historic black church in Charleston, S.C., as well as multiple Holocaust denial pages.
Still, creating a standard, global definition of hate speech leaves Facebook, like many social media platforms, with an overwhelming task. Social media platforms must balance questions of censorship and online safety, all while fielding concerns and requests from both government and civil institutions.
But it’s hard to know where Facebook finds that balance, as the social media company has historically revealed little information about how its Community Standards are enforced.
“I don’t understand their definition of hate speech,” Heidi Beirich, an SPLC expert in internet extremism, told Forbes. Beirich, who focuses on white supremacist, nativist and neo-Confederate movements, describes Facebook’s guidelines, as reported by ProPublica, as “extremely confusing.” The social network’s standards “[are] not really understanding what hate speech actually is,” she added, noting that, in her view, Facebook is quick to remove ISIS and other terrorism-related posts but much slower to target neo-Nazi groups.
That’s not to say Facebook doesn’t care. Over the past two months, Facebook has deleted about 280,000 posts reported as hate speech each month worldwide, explained Richard Allan, Facebook’s vice president of public policy for Europe, the Middle East and Africa, in a company post defending Facebook’s hate speech policies, published just a day before ProPublica’s report. As countries like Germany and Austria threaten the platform with fines for failing to remove hate speech, Facebook must reckon with these questions more seriously than before.
If a platform decides to remove hate speech, the task is easier said than done. While some advocacy groups say Facebook outperforms peer platforms in removing hateful content, others want the company to do more. Complicating the issue is the fact that many groups accused of hate speech believe they are engaged in legitimate political commentary that should be protected by the principles of free speech. The job itself is also difficult. Thousands of content monitors (likely outsourced from third-party agencies, as no content moderation jobs appear on Facebook’s Community Operations employment opportunities page) filter hate speech, as well as images of extreme violence, pornography and terrorist activity. Facebook moderators charged with tracking terrorist activity have had their personal profiles leaked online, while content moderators at Microsoft claim to have developed symptoms of PTSD.
Facebook content moderators determine whether content counts as hate speech with a simple formula: “Protected category + attack = Hate Speech,” according to ProPublica. Protected categories include sex, religious affiliation, national origin, gender identity, race, ethnicity, sexual orientation and serious disability or disease, while non-protected categories include social class, “continental origin,” appearance, age, occupation, political ideology, religions and countries. How Facebook distinguishes the protected category of “religious affiliation” from the non-protected category of “religions” is unclear.
At the same time, according to ProPublica, Facebook will allow offensive claims that target a subset of a protected class, such as “radicalized Muslims” or “black children,” while it will ban attacks aimed at groups like “whites” or “white men,” since race and gender are both protected categories.
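To make the reported formula concrete, here is a minimal sketch in Python of the rule as ProPublica describes it. The function, category names and test cases are illustrative assumptions for this article, not Facebook’s actual code or internal terminology:

    # Hypothetical sketch of the moderation rule as reported by ProPublica;
    # not Facebook's actual implementation.
    PROTECTED = {
        "sex", "religious affiliation", "national origin", "gender identity",
        "race", "ethnicity", "sexual orientation", "serious disability or disease",
    }

    def is_hate_speech(target_categories, is_attack):
        """An attack reportedly counts as hate speech only if every category
        describing the target is protected; a single non-protected modifier
        (age, occupation, ideology...) strips protection from the group."""
        return is_attack and set(target_categories) <= PROTECTED

    # "white men": race + sex, both protected -> banned
    print(is_hate_speech({"race", "sex"}, is_attack=True))               # True
    # "black children": race + age; age is not protected -> allowed
    print(is_hate_speech({"race", "age"}, is_attack=True))               # False
    # "radicalized Muslims": religion narrowed by ideology -> allowed
    print(is_hate_speech({"religious affiliation", "political ideology"},
                         is_attack=True))                                # False

Under this reading, the subset quirk falls out of a single rule: any non-protected modifier, however incidental, reclassifies the entire group as unprotected.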
The issue of modifiers within a protected category is one place where the SPLC and Facebook seem to disagree. “If you make a [hateful] statement about black children, that would certainly be a racist statement,” says Keegan Hankes, an SPLC analyst, and it would contribute to an organization’s categorization as a hate group by the SPLC.
A neutral view of hate speech also ignores that “particular classes are facing more oppression than others,” says Beirich.
One challenge, though, is that hate speech isn’t always easy to identify, notes Jonathan Greenblatt, national director and CEO of the Anti-Defamation League, an advocacy group that fights antisemitism and other forms of bigotry. Through humor, coded language and symbols, hateful extremists can sneak content onto social media sites. Groups use memes, images like Pepe the Frog, gifs and even triple parentheses around names to convey bigoted messages.
On ways to improve, Greenblatt advocates that social media companies engage with stakeholders like the ADL to “augment their understanding and help them to kind of deal with cyber hate and other forms of harassment.” He also says that moderator training must constantly evolve, because hate speech is a dynamic challenge. He says that the ADL is in contact with Facebook on a “nearly weekly basis regarding hate activity on their platform, and new efforts by the alt-right to try to circumvent their community guidelines.”
“[Facebook] faces daunting challenges, not the least of which is because they have 2 billion users but they have something like over a million pieces of content posted each day,” says Greenblatt. “And white supremacists and other bigots are using other techniques to make their hateful posts more difficult to detect.”
Yet most of the content reported to Facebook by the SPLC clearly violates the standards published on Facebook’s site, Hankes says. Pages and groups flagged by the SPLC and sent to Facebook’s policy team range from KKK-related organizations to groups that market themselves as official research groups and institutions. The SPLC remains frustrated that much of this content has not been removed, and wonders whether Facebook applies a different removal policy to pages than to groups.
There’s also the danger of Facebook censorship going too far, according to some.
Facebook censorship has allegedly hit Chechnya-based dissidents, a Hong Kong activist commemorating Tiananmen Square, a Pulitzer Prize-winning journalist covering corruption in Malta and an abortion-rights group. The deletions are usually reversed following criticism by the media, but they highlight gaps in Facebook’s approach to content moderation. Facebook has deleted posts from activists and journalists from Palestine, Kashmir, Crimea and Western Sahara, according to ProPublica.
Shaun King, a prominent activist and journalist, also had a post about racism removed by Facebook, though the company later apologized and lifted the block.
One solution, SPLC’s Beirich suggests, is for Facebook to use other methods of tracking hate speech, such as tracking users associated with hate groups rather than focusing on particular posts.
“It’s a complicated problem to solve,” says Greenblatt. As for Facebook, he says the company is “far less hospitable to organizations and hate groups than they had been in the past, and they’re far less hospitable than many other tech companies.”
By the end of 2017, Facebook says it will hire 3,000 additional content moderators to ensure that its 2 billion users adhere to the company’s “Community Standards.” Yet if these new moderators perform like Facebook’s 4,500 current ones, they could keep accidentally censoring journalists and activists while giving a pass to hate groups reported to Facebook by the Southern Poverty Law Center.
Whether social media companies should be in the business of censoring users is still being worked out, both legally and culturally, and Facebook, a social media company diving headfirst into regulating hate speech on its platform, is struggling. Of course, narrowing hate speech down to a simple formula isn’t so simple. Court decisions and laws, which unlike Facebook’s internal decisions are usually public, have reckoned with the question for decades. A growing movement to fine Facebook for not removing hate speech that violates the law complicates things further by potentially incentivizing preemptive censorship. There is also concern that allowing social media companies to define the boundaries of acceptable and unacceptable discourse encourages the privatization of free expression.