Two days ago, The New York Times published a bombshell report claiming that Facebook knew about Russian interference in 2016, and detailing the platform’s failure to identify and deal with hate speech, violence, and fake news.
Right after the brutal report, Facebook issued a lengthy response to the claims, trying to clear itself of the mess. Today, Facebook published its biannual transparency report, explaining the moves and efforts the company has made this year to make the platform safe, secure, and free of hate speech and violence.
The report covers “information about government requests for user data we’ve received; reports on where access to Facebook products and services was disrupted; the number of content restrictions based on local law; and reports of counterfeit, copyright, and trademark infringement.”
The report covers the period from April through September 2018.
The transparency report also includes Facebook’s latest Community Standards Enforcement Report. In it, the platform mentions its efforts in removing content that violates Facebook policies, including “adult nudity and sexual activity, fake accounts, hate speech, spam, terrorist propaganda, and violence and graphic content,” and two new categories as well:
- bullying and harassment, and
- child nudity and sexual exploitation of children
There’s still a lot to unpack about the crisis and scandals Facebook is going through. Nonetheless, Facebook says it removed over 1.5 billion fake and spam accounts from April through September 2018, up from the 1.3 billion accounts it deleted in the previous six months.
Facebook claims that it proactively identified and removed over 90 percent of cases involving fake accounts, spam, adult nudity and sexual activity, child nudity and sexual exploitation of children, terrorist propaganda, and violence and graphic content. However, the company also admits there are two categories where its content moderation does not do well: Facebook identified and removed only 14.9 percent of bullying and harassment cases before users reported them, and caught only 51.6 percent of hate speech violations before users reported them.
This is the first time Facebook has reported on bullying and harassment, so we would expect that number to rise as the company puts more effort behind it. Compared to last year, Facebook’s progress is worth appreciating: in Q4 of 2017, the company caught only 23.6 percent of hate speech before users reported it, a number that has more than doubled as of today’s report.