
Tuesday, May 31, 2016

The EU Has A Plan To Get Rid Of Racist Comments On Social Media For Good. Will It Work?



BY LAUREN C. WILLIAMS MAY 31, 2016 4:37 PM


CREDIT: AP PHOTO/SILVIA IZQUIERDO, FILE



Tech juggernauts Facebook, Twitter, Google, and Microsoft have agreed to review reports of hateful content and, where warranted, take it down within 24 hours, under new conduct rules the European Commission announced Tuesday to align with national laws criminalizing violent online speech.

The new rules require social media companies to remove posts, and when necessary disable accounts, that incite violence or hatred against individuals or groups based on race, color, ethnicity, religion, or nationality. Tech companies must also develop and display clear guidelines outlining their sites’ prohibition of violent and hateful speech, and train employees to flag the illegal content.

The joint commitment between the American tech companies and the European Union comes after the French Jewish youth group UEJF sued Facebook, Twitter, and Google over the companies’ failure to quell hateful and threatening speech on their platforms in the wake of the Islamic State’s (ISIS) attacks in Paris and Brussels.

Before Tuesday’s agreement, the EU had repeatedly called on internet companies to reduce ISIS propaganda on their platforms. Twitter suspended 125,000 accounts linked to ISIS over the past year to combat the group’s widening influence on the microblogging site, and Facebook began working with the German government last year to moderate xenophobic, anti-refugee speech following the country’s acceptance of nearly a million Syrian and Afghan war refugees.

Social media companies have long been criticized for unevenly moderating content that is violent, sexually suggestive, or perceived as potentially offensive. Twitter rolled out a slew of policy changes last year to address years of criticism that the company doesn’t take harassment seriously, and Facebook banned graphic content months after video of ISIS beheading photojournalist James Foley spread online.

Content moderation, in general, is difficult to execute due to the sheer volume of content published each day: 300 hours of video are uploaded to YouTube every minute, and at least 10 percent of Facebook’s billion-plus users update their status daily. The practice also raises ethical and moral concerns, including censorship and varying or frequently changing standards. As a result, tech companies have shied away from heavy-handed moderation in favor of a case-by-case approach.

Despite pressure to reduce threatening and harassing speech, social media companies have so far dodged legal mandates for content moderation in the United States. A provision tucked into the fiscal 2016 intelligence authorization bill that would require companies to report terrorism-related social media posts to intelligence agencies is still awaiting a Senate vote nearly a year after it was first approved in committee.

Additionally, the U.S. Supreme Court ruled in favor of a man who posted threatening language, stylized as rap lyrics, targeting his estranged wife. The decision, in Elonis v. United States, was a win for free speech supporters and a blow to advocates who had hoped the ruling would set a precedent outlawing violent online threats.

The EU’s new rules, however, could provide the impetus and blueprint needed to push for broader legal and policy changes elsewhere.



