Facebook recognizes that its AI cannot remove all hate content

The great Facebook controversy continues. A new article published by The Wall Street Journal reveals that Facebook knows its AI is incapable of removing more than a small fraction of hateful or violent content. Specifically, its algorithms cannot reliably detect or distinguish certain kinds of content, such as first-person videos, racist speech, car crashes, or even cockfights.

Facebook, with its stated goal of minimizing harm, has confirmed that the social network has managed to reduce hate speech by nearly 50% over the past three years. According to Guy Rosen, Facebook’s VP of Integrity, much of this drop is due to “enhanced and expanded AI systems.”

The documentation leaked to the US newspaper, however, reveals that Facebook cut part of the human team in charge of detecting fake news, violent content, and hate speech. Instead, it carried out a series of actions that reduced the volume of flagged content and attributed the improvement to artificial intelligence. However, some Facebook employees estimate that these technologies remove barely 3-5% of the content that incites hatred.

So what about the rest of the content that clearly violates the social network’s rules but that the AI fails to detect correctly? It is simply shown less frequently, but not removed. Facebook confirms it.

Facebook AI shows suspicious content less often

Rosen, responding to the WSJ’s claims, asserts that “focusing only on content removal is the wrong way to look at how Facebook fights hate speech.” He also highlights that its technology can reduce the distribution of suspicious content. But is it enough?

We have a high threshold to automatically delete content. If we didn’t, we would risk making more mistakes in content that looks like hate speech but is not, harming the very people we are trying to protect, such as those who describe experiences with or condemn hate speech.

Guy Rosen, Vice President of Integrity at Facebook.

The WSJ claims there is content that even Facebook’s AI cannot classify, and it labels it incorrectly. For example, the Christchurch shooting in New Zealand was streamed live by the perpetrator in the first person. The AI labeled some of the videos posted by different users as “paintball games” or “car wash,” causing them to be shown in users’ feeds.