There is no doubt that it takes a huge effort to moderate all the content that gets uploaded to Facebook. But over the past few months, the social giant has shown signs of strain.
Back in August, shortly after the company fired a team of human editors overseeing the Trending section of the site in favor of an algorithm, a false news story found its way to the top of the queue.
In February, CEO Mark Zuckerberg published a wide-ranging open letter on his Facebook page about the direction he hopes to take the company, touching on the need for more vigilance in the face of “fake news” as well as a stronger infrastructure to handle the raft of content posted by users every day.
“There are billions of posts, comments and messages across our services each day, and since it’s impossible to review all of them, we review content once it is reported to us,” Zuckerberg wrote. “There have been terribly tragic events — like suicides, some live streamed — that perhaps could have been prevented if someone had realized what was happening and reported them sooner. There are cases of bullying and harassment every day, that our team must be alerted to before we can help out. These stories show we must find a way to do more.”
This spring, after a murder in Cleveland was livestreamed on the platform, Zuckerberg announced that 3,000 people would be hired over the course of the year to improve that review process.
But now, an investigation by the Guardian has identified some of the standards Facebook applies when moderating content, and they are perhaps more confusing than you might expect.
Videos of violent deaths and suicides are designated as disturbing content, but Facebook’s reasoning for not necessarily taking them down is that they can build awareness about mental illness, according to the Guardian’s findings.
In cases of suicide specifically, documents the Guardian has seen explain that the company’s current policy is “to remove them once there’s no longer an opportunity to help the person. We also need to consider newsworthiness, and there may be particular moments or public events that are part of a broader public conversation that warrant leaving up.”
When it comes to violent language, a call to action to harm the president would be taken down because he is a head of state, but directions on how to snap a woman’s neck would be allowed to remain on the site because they are not “regarded as credible threats.”
Images and videos depicting animal abuse and graphic violence are also designated as disturbing, and are allowed if they are used to educate and raise awareness, but not if there is an element of “sadism and celebration.” The same rule applies to images of child abuse.
According to the Guardian, moderators often have only seconds to decide how to characterize a piece of content or whether to remove it.
It’s clear that Zuckerberg and his team have a daunting task in front of them, so Facebook’s rules will need to constantly evolve to meet the challenge.