Moderating Facebook Content: Is AI The Answer?

Facebook says it has more than 35,000 people working on safety and security issues, more than half of whom are content reviewers. The Facebook Community Standards page states that the company removes hate speech and harmful content relating to gender, race, ethnicity, caste, nationality, sexual orientation, disability, and disease. “To do this, we use a combination of artificial intelligence and review by people on our Community Operations teams,” Facebook says. “We invest in technology, processes, and people to help us act quickly so violating content finds no home in our community.”
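Facebook does not publish the internals of that pipeline, but the pattern it describes, an automated classifier backed by human reviewers, is common across the industry. The sketch below is purely illustrative and assumes nothing about Facebook's actual models or thresholds: a placeholder classifier scores each post, high-confidence violations are removed automatically, borderline cases are routed to a human review queue, and everything else stays up. All names, scores, and cutoffs here are hypothetical.

```python
from dataclasses import dataclass

# Illustrative thresholds; real systems tune these empirically.
AUTO_REMOVE_THRESHOLD = 0.95
HUMAN_REVIEW_THRESHOLD = 0.60


@dataclass
class Post:
    post_id: str
    text: str


def classifier_score(post: Post) -> float:
    """Placeholder for a trained hate-speech classifier.

    A production system would call a machine-learning model here;
    this stub just flags a couple of stand-in banned words so the
    routing logic below can be exercised end to end.
    """
    banned = {"slur1", "slur2"}  # stand-ins, not a real word list
    words = set(post.text.lower().split())
    return 0.99 if words & banned else 0.10


def route(post: Post) -> str:
    """Route a post based on the classifier's confidence.

    High-confidence violations are removed automatically; borderline
    cases go to a human reviewer; everything else is left up.
    """
    score = classifier_score(post)
    if score >= AUTO_REMOVE_THRESHOLD:
        return "remove"
    if score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"
    return "leave_up"


if __name__ == "__main__":
    for post in [Post("1", "have a nice day"), Post("2", "slur1 aimed at a group")]:
        print(post.post_id, "->", route(post))
```

The key design point the quote gestures at is the middle band: automation handles the clear-cut cases at scale, while ambiguous content, the kind that depends on context and intent, is escalated to people.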