Moderating Facebook Content: Is AI The Answer?


With a valuation of just under $775 billion today, the publicly traded tech giant Facebook has had a banner year. Its most recent earnings report showed revenue up 22% from Q3 2019. CEO Mark Zuckerberg said on the October earnings call that people and businesses continue to rely on the platform “to stay connected and create economic opportunity during these tough times.” While the company has clearly not struggled financially through the pandemic of 2020, Facebook is facing tough times in another area: content moderation.

Zuckerberg faced a Senate committee this week, answering questions on Facebook’s moderation policies. Senators also took the opportunity to discuss the contentious Section 230 of the Communications Decency Act, the provision that shields online platforms from liability for content their users post. On the same day, a group of dissatisfied Facebook content moderators released an open letter describing what they say is an unsafe working environment, airing complaints about their mandatory return to the office during the pandemic and insufficient mental health resources in light of their daily moderation of content containing “masses of violence, hate, terrorism, child abuse, and other horrors.”

The letter came just days after Facebook announced it is changing how humans and AI collaborate to moderate content, giving AI greater responsibility. Chris Palow, a software engineer in Facebook’s interaction integrity team, told a reporter from The Verge that AI systems are used only when they have shown that they can be as accurate as human reviewers. “The system is about marrying AI and human reviewers to make less total mistakes,” said Palow. “The bar for automated action is very high,” he continued, although he conceded that AI moderation is never going to be perfect. 
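
Palow’s description amounts to a familiar human-in-the-loop pattern: let the model act on its own only when its confidence clears a very high bar, and send everything else to a human review queue. The sketch below is purely illustrative; the threshold value, the toy keyword “classifier,” and the queue ordering are our assumptions, not details of Facebook’s actual system.

```python
import heapq
from typing import List, Tuple

# Illustrative only: the threshold, the toy classifier, and the queueing
# scheme are assumptions, not Facebook's implementation.

AUTO_ACTION_THRESHOLD = 0.99  # "the bar for automated action is very high"

def classify(text: str) -> float:
    """Stand-in for an ML model's confidence that a post violates policy.
    A real system would use a trained classifier, not keyword matching."""
    flagged = {"hate", "terror", "abuse"}
    hits = sum(word in text.lower() for word in flagged)
    return min(1.0, hits / 2)

def moderate(posts: List[str]) -> Tuple[List[str], List[Tuple[float, str]]]:
    """Auto-remove only high-confidence violations; queue the rest for humans."""
    removed: List[str] = []
    review_queue: List[Tuple[float, str]] = []  # min-heap keyed on -score
    for post in posts:
        score = classify(post)
        if score >= AUTO_ACTION_THRESHOLD:
            removed.append(post)  # automated action, no human in the loop
        elif score > 0:
            # Uncertain cases go to human reviewers, likeliest violations first.
            heapq.heappush(review_queue, (-score, post))
    return removed, review_queue

removed, queue = moderate(
    ["cat pictures", "terror and abuse content", "borderline hate speech"]
)
print("auto-removed:", removed)
while queue:
    neg_score, post = heapq.heappop(queue)
    print(f"human review (score {-neg_score:.2f}): {post!r}")
```

The high threshold captures the trade-off Palow describes: the model acts alone only where it is at least as reliable as a person, while ambiguous cases, where context matters most, stay with human reviewers.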

In June of this year, NYU Stern published a report that called Facebook’s moderation attempts “grossly inadequate.” An analysis of that paper by VentureBeat noted that “the study acknowledged that all the major social media platforms are suffering from the same content moderation problem. While Facebook has about 15,000 content moderators, most of them work for third-party vendors. That’s compared to about 10,000 moderators for YouTube and Google and 1,500 for Twitter, according to the study. And while Facebook has also partnered with 60 journalist organizations to implement fact-checking, the number of items sent to these groups far exceeds their capacity to verify most claims.”

In light of the heavy criticism it is facing, it seems understandable that Facebook is turning to technology to ease the burden on its unhappy human moderators and to automate the triage of an oversupply of inappropriate content with machine learning. Those who had been paid around $15 an hour to do the bulk of the work, however, were not pleased with the shift toward AI. “You sought to substitute our work with the work of a machine,” reads the open letter to Zuckerberg, COO Sheryl Sandberg, and the CEOs of Accenture and CPL, the firms that retain the outsourced moderators. “Facebook undertook a massive live experiment in heavily automated content moderation.”

For its part, Facebook says it has more than 35,000 people working on safety and security issues, more than half of whom are content reviewers. The Facebook Community Standards page states that it removes hate speech and harmful content relating to gender, race, ethnicity, caste, nationality, sexual orientation, disability, and disease. “To do this, we use a combination of artificial intelligence and review by people on our Community Operations teams,” Facebook says. “In the majority of cases, we’re able to detect and remove violating content before anyone reports it, including content in private groups.” Additionally, Facebook says that nudity, bullying, and terrorist content are banned from the platform and that proactive detection technology is used to find and remove prohibited multimedia. The company is adamant, however, that human content moderators and AI techniques work hand in hand. “We invest in technology, processes, and people to help us act quickly so violating content finds no home in our community,” Facebook states. “While we continue to improve these technologies, people are a huge part of the review process too, as context is often a big factor in determining whether something does or doesn’t go against our Community Standards.”
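
Facebook does not spell out what its “proactive detection technology” is, but one common approach to catching known prohibited multimedia is perceptual hashing: uploads are hashed and compared against a database of hashes of previously banned media. The sketch below uses a toy difference hash as a stand-in; real systems use far more robust algorithms (Facebook has open-sourced one, PDQ), and every name and number here is an assumption for illustration.

```python
from typing import List

# Illustrative only: real platforms use robust perceptual hashes (e.g. PDQ),
# not this toy difference hash, and tune match thresholds far more carefully.

def dhash(pixels: List[List[int]]) -> int:
    """Tiny difference hash of a grayscale image given as rows of 0-255 values.
    Each bit records whether a pixel is brighter than its right-hand neighbor,
    which survives re-encoding and small edits better than an exact checksum."""
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (left > right)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

MATCH_DISTANCE = 2  # tolerance for compression artifacts, slight edits, etc.

# Hypothetical database of hashes of previously banned media.
banned_image = [[10, 200, 30], [40, 50, 220]]
BANNED_HASHES = {dhash(banned_image)}

def is_prohibited(pixels: List[List[int]]) -> bool:
    h = dhash(pixels)
    return any(hamming(h, banned) <= MATCH_DISTANCE for banned in BANNED_HASHES)

# A re-encoded copy of the banned image still matches; unrelated media does not.
print(is_prohibited([[12, 198, 33], [41, 52, 218]]))  # True
print(is_prohibited([[200, 10, 220], [5, 100, 2]]))   # False
```

The appeal of this approach is that once a piece of media has been reviewed by a human once, every future upload of it can be blocked automatically, which is what makes removal “before anyone reports it” feasible at Facebook’s scale.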

Artificial intelligence is being implemented in industries all over the world, and it is a central theme of the research undertaken at UCIPT. Our work in the HOPE study uses data to assess and shift behavioral outcomes among HIV-affected and other populations.
