On Tuesday, Facebook Inc. reported a sharp increase in the number of posts removed across its apps for promoting violence and hate speech, attributing the rise to technological advances in automatically identifying text and images.
In the first quarter, the social media company deleted around 4.7 million posts connected to hate organizations on its main app, up from 1.6 million in the fourth quarter of 2019. Facebook also removed 9.6 million posts containing hate speech, compared with 5.7 million in the preceding period.
Hate speech removals have increased sixfold since the third quarter of 2017, the earliest period for which Facebook has disclosed such data.
The company also reported that it placed warning labels on about 50 million pieces of COVID-19-related content, after deciding to ban coronavirus misinformation in the early phases of the pandemic.
Facebook disclosed the data as part of its fifth Community Standards Enforcement Report, which was introduced in 2018 along with stricter community rules in response to criticism of the company's content policies across its platforms, including Facebook's Messenger and WhatsApp mobile apps.
The report was expanded last year to include information about rule enforcement on the photo-sharing app Instagram; the company said on Tuesday that the data will be released on a quarterly basis.
The company said that improvements in its technology for detecting text embedded in images and videos enabled, among other things, the proactive removal of more drug-related and sexually exploitative content.
Due to a shortage of available moderators during the pandemic, Facebook has expanded the role of automated tools in moderating content such as conspiracy theories about the coronavirus.