Rights group reports routine Facebook and Instagram censorship of pro-Palestine content, identifying ‘six key patterns.’
Since the onset of the Israel-Gaza conflict on October 7, Meta has systematically and globally censored pro-Palestinian content, according to a Human Rights Watch (HRW) report.
In a critical 51-page report, the organization detailed more than a thousand reported cases in which Meta removed content or suspended or permanently banned accounts on Facebook and Instagram. According to HRW, the company exhibited “six key patterns of undue censorship” of content supporting Palestine and Palestinians. These patterns included the removal of posts, stories, and comments; the disabling of accounts; restrictions on users’ ability to interact with others’ posts; and “shadow banning,” which significantly reduces the visibility and reach of a person’s content.
The report cites cases of censorship from more than 60 countries, mostly involving English-language content that peacefully supported Palestine. Even HRW’s own posts soliciting examples of online censorship were flagged as spam, according to the report.
The group asserted in the report that the censorship of Palestine-related content on Instagram and Facebook is both systematic and global, pointing to Meta’s inconsistent enforcement of its own policies and attributing the wrongful removals to “erroneous implementation, overreliance on automated tools to moderate content, and undue government influence over content removals.”
In a statement to The Guardian, Meta acknowledged making frustrating errors but denied deliberately and systematically suppressing any particular voice, arguing that 1,000 examples do not prove “systemic censorship” given the vast amount of content posted about the conflict. Meta said it is the only company in the world to have publicly released human rights due diligence on issues related to Israel and Palestine.
The company’s statement said the report overlooks the realities of enforcing global policies during a rapidly evolving, highly polarized, and intense conflict, and that its policies are designed to give everyone a voice while keeping its platforms safe.
Meta’s oversight board said on Tuesday that the company had been wrong to remove two conflict-related videos, one from Instagram and one from Facebook, emphasizing the videos’ value in “informing the world about human suffering on both sides.” The Instagram video showed the aftermath of an airstrike near al-Shifa hospital in Gaza; the Facebook video showed a woman being taken hostage during the October 7 attack. Both clips were reinstated following the board’s decision.