Meta evaluates Arabic content moderation, lacks similar system for Hebrew.
Meta is facing challenges moderating content related to the Israel-Palestine conflict, particularly in Hebrew, despite recent policy changes, according to newly revealed documents.
These internal guidelines, shared with the Guardian by a former Meta employee who worked on content moderation, detail a complex process for handling conflict-related content. However, the documents suggest that Meta, which owns Facebook, Instagram, and WhatsApp, lacks an equivalent system for assessing the accuracy of Hebrew-language moderation to the one it uses for Arabic.
The former employee, who is not being named because of credible fears of professional retaliation, claims that Meta’s hate speech policies concerning Palestine are biased, a view supported by Palestinian advocates.
The employee also noted that some workers involved in moderating content about the conflict are hesitant to voice concerns, fearing repercussions, as highlighted in a recent letter signed by more than 200 Meta employees. This, the former employee added, suggests that the company’s priorities do not genuinely lie with ensuring community safety.
The documents, updated this spring, come amid growing criticism of Meta and other social platforms for their handling of the divisive Israel-Palestine conflict, where language and moderation decisions during rapidly unfolding news events can have serious consequences. In June, 49 civil society organizations and several prominent Palestinians sent a letter to Meta, accusing the company of “aiding and abetting governments in genocide” through its content moderation policies.
“When Palestinian voices are silenced on Meta platforms, it has a direct and dangerous impact on Palestinian lives,” said Cat Knarr of the US Campaign for Palestinian Rights, which organized the letter. “People don’t hear about what’s happening in Palestine, but they do hear propaganda that dehumanizes Palestinians. The consequences are very real and very harmful.”
Criticism of disparities in Meta’s content moderation across languages is longstanding. The Facebook whistleblower Frances Haugen testified before a US Senate committee that, although only 9% of the platform’s users are English speakers, 87% of its misinformation budget was devoted to English-language content.
Meta disputes this figure, stating that most of its third-party fact-checking partners review content from outside the United States and that the percentage does not accurately reflect its efforts to combat misinformation.
The documents also set out Meta’s rules on hate speech and the boycott movement. The policies mandate the removal of statements such as “boycott Jewish shops” and “boycott Muslim shops,” but permit the phrase “boycott Arab stores.”
Tracy Clayton, a Meta spokesperson, explained that “during this crisis,” Meta’s policy is to remove calls for boycotts based solely on religion but to allow boycotts of businesses “based on protected characteristics like nationality,” as they are often “tied to political speech or intended as protest against a government.”
Phrases such as “boycott Israeli shops” are therefore allowed. The internal documents spell this out with examples, stating that “no Israeli goods should be allowed here until they stop committing war crimes” and “boycott Arab stores” are both permitted.
Assessing the effectiveness of Hebrew hate speech moderation
The recent documents provide new insight into Meta’s ability to assess the quality of its content moderation in both Arabic and Hebrew.
Meta has a system in place to track the “policy precision” of content enforcement across many languages. Under it, Meta’s quality-control reviewers, who are human experts, re-examine the decisions of frontline moderators and automated systems to see how well those decisions align with the company’s content policies on Facebook and Instagram. The program then generates an accuracy score used to monitor the effectiveness of content moderation across the platforms, according to the documents and the former employee.
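The documents do not disclose how the metric is actually calculated. Purely as an illustration, a “policy precision” score of this kind could be computed as the share of sampled enforcement decisions that expert reviewers judge consistent with policy; the sketch below assumes a simple agreement-rate definition, and all names in it are hypothetical rather than drawn from Meta’s systems:

```python
from dataclasses import dataclass

@dataclass
class ReviewedDecision:
    content_id: str
    frontline_action: str  # action taken by the frontline moderator or automated system
    expert_action: str     # action the quality-control expert says policy requires

def policy_precision(sample: list[ReviewedDecision]) -> float:
    """Share of sampled decisions where the frontline action matches the
    expert's policy-correct action. Illustrative only: Meta's actual metric
    and its inputs are not described in the documents."""
    if not sample:
        raise ValueError("empty sample")
    agreed = sum(d.frontline_action == d.expert_action for d in sample)
    return agreed / len(sample)

# Example: three of four sampled decisions align with policy -> 0.75
sample = [
    ReviewedDecision("a1", "remove", "remove"),
    ReviewedDecision("a2", "keep", "keep"),
    ReviewedDecision("a3", "remove", "keep"),
    ReviewedDecision("a4", "keep", "keep"),
]
print(f"policy precision: {policy_precision(sample):.2f}")  # 0.75
```

A score like this is only meaningful where reviewers fluent in the content’s language exist to supply the expert labels, which is why, per the documents, the absence of Hebrew reviewers makes such scoring impossible for that market.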
This review system is in place for languages like English, Spanish, Arabic, and Thai. However, for some Hebrew-language content decisions, such scoring was deemed “unfeasible” due to a “lack of translation,” the documents reveal. The former employee attributed this to a shortage of human reviewers with expertise in Hebrew.
Meta claims it has “multiple systems in place” to measure enforcement accuracy for Hebrew-language content, including evaluations by Hebrew-speaking reviewers and auditors.
However, the documents indicate that there is no “policy precision” metric for Hebrew-language enforcement. The former employee noted that, because Hebrew is not integrated into the system, enforcement reviews in the Hebrew market are conducted on an “ad hoc” basis, unlike the more structured approach in the Arabic market.
Meta watchdogs say the documents reviewed by the Guardian show that, despite the introduction of new Hebrew-language classifiers, the company is doing too little to verify that these recent measures are effective, allowing disparities in enforcement to persist.
“This reporting demonstrates that Meta is not fully committing to its content moderation responsibilities,” said Nadim Nashif, founder of 7amleh, the Arab Center for the Advancement of Social Media.