Facebook parent company Meta’s special-track content review platform for VIP individuals and businesses potentially causes harm to the public and appears to exist to serve Meta’s business interests rather than to protect safe and fair speech, an Oversight Board report found.
The board’s recommendations come as rival network Twitter grapples with content moderation issues of its own in the wake of Elon Musk’s acquisition of that platform. They also underscore concern that VIPs on Facebook had their posts moderated differently than those of regular users.
In 2020, Meta, then known as Facebook, established an Oversight Board at the direction of CEO Mark Zuckerberg. The board later weighed in on the company’s ban of former President Donald Trump in the wake of the Jan. 6 insurrection.
The existence of the special VIP review program, called “cross-check” or XCheck, was first reported by The Wall Street Journal in September 2021 as part of the Journal’s broader exposé on whistleblower Frances Haugen’s allegations.
In a 57-page report, the board excoriated what it found to be an unequal system that offered “certain users greater protection than others.” The program delayed the removal of content that potentially violated Meta’s rules, and Meta never established how effective the special-track program was compared to its standard content moderation processes.
The report found that potentially offensive content could remain on the site for hours, possibly even days, if the user was part of the special VIP program.
Meta told the Oversight Board that it “does have a system that blocks some enforcement actions outside of the cross-check system.”
That system, known internally as “technical corrections,” consists of automatic exceptions to a preselected list of content policy violations for a certain group of users. Meta processes “about a thousand technical corrections per day.”
For most users, content moderation on Facebook and Instagram has historically been straightforward. Potentially problematic content is flagged, either by automated processes or by a human reporting it, and then an outsourced contractor or an algorithm decides whether the content violates the rules.
But for a privileged few, the cross-check program activated a different, more human process.
For those “entitled entities,” the first step was a review by a specific team of Meta employees and contractors who had a degree of “language and regional expertise” on the content they were moderating. This wasn’t an opportunity that the general public enjoyed, though.
In Afghanistan and Syria, for example, the average review time for reported content was 17 days, in part because Meta at times has struggled to hire language experts globally.
The content was then reviewed by “a more senior” panel of Meta executives, which included leaders from communications and legal teams.
At the final level, “the most senior Meta executives” could be involved if the company faced significant legal, safety or regulatory risk.
That seniormost level could also be activated in urgent situations where “consequences to the company” were possible. It wasn’t clear who decided to fast-track a content review to global leadership.
Meta overhauled the content review process for the general public in 2022, in the aftermath of the Journal’s initial reporting.
Now, after initial detection and review, content is triaged by an “automatic” process that decides whether it needs further review.
If it requires closer scrutiny, Meta employees or contractors examine it further and can escalate it to the highest level available to the general public, the “Early Response Team,” which makes a final decision on enforcement actions.
In the report, Meta’s Oversight Board provided over two dozen recommendations on fixes to the cross-check program. The first recommendation was to divide Meta’s content review system into two streams: one to fulfill Meta’s “human rights responsibilities,” and another to protect users that Meta considers a “business priority.”
Other recommendations involved firewalling government relations and public policy teams from content moderation, establishing a clear set of public criteria for inclusion on cross-check or successor lists, and broadening the appeal process to almost all content.
A Meta representative did not immediately respond to a request for comment.