
Outside audit says Facebook restricted Palestinian posts during Gaza war

The company-commissioned audit is one of the first insider accounts of the failures of a social platform during wartime

Updated September 23, 2022 at 4:24 p.m. EDT | Published September 22, 2022 at 6:42 p.m. EDT
A boy looks out of one of the homes heavily damaged by airstrikes in Beit Hanoun, Gaza, on Aug. 29, 2021. (Salwan Georges/The Washington Post)

An independent audit of Meta’s handling of online content during the two-week war between Israel and the militant Palestinian group Hamas last year found that the social media giant had denied Palestinian users their freedom of expression by erroneously removing their content and punishing Arabic-speaking users more heavily than Hebrew-speaking ones.

The report, by the consultancy Business for Social Responsibility, is yet another indictment of the company’s ability to police its global public square and to balance freedom of expression against the potential for harm in a tense international context. It also represents one of the first insider accounts of the failures of a social platform during wartime. And it bolsters complaints from Palestinian activists that online censorship fell more heavily on them, as reported by The Washington Post and other outlets at the time.

“The BSR report confirms Meta’s censorship has violated the #Palestinian right to freedom of expression among other human rights through its greater over-enforcement of Arabic content compared to Hebrew, which was largely under-moderated,” 7amleh, the Arab Center for the Advancement of Social Media, a group that advocates for Palestinian digital rights, said in a statement on Twitter.

The May 2021 war was initially sparked by a conflict over an impending Israeli Supreme Court case involving whether settlers had the right to evict Palestinian families from their homes in a contested neighborhood in Jerusalem. During tense protests about the court case, Israeli police stormed the Al Aqsa mosque, one of the holiest sites in Islam. Hamas, which governs Gaza, responded by firing rockets into Israel, and Israel retaliated with an 11-day bombing campaign that left more than 200 Palestinians dead. Over a dozen people in Israel were also killed before both sides called a cease-fire.

Throughout the war, Facebook and other social platforms were lauded for their central role in sharing firsthand, on-the-ground narratives from the fast-moving conflict. Palestinians posted photos of homes reduced to rubble and children’s coffins during the barrage, leading to a global outcry to end the conflict.

But problems with content moderation cropped up almost immediately as well. Early on during the protests, Instagram, which is owned by Meta along with WhatsApp and Facebook, began restricting content containing the hashtag #AlAqsa. At first the company blamed the issue on an automated software deployment error. After The Post published a story highlighting the issue, a Meta spokeswoman also added that a “human error” had caused the glitch, but did not offer further information.

The BSR report sheds new light on the incident. The report says that the #AlAqsa hashtag was mistakenly added to a list of terms associated with terrorism by an employee working for a third-party contractor that does content moderation for the company. The employee wrongly pulled “from an updated list of terms from the US Treasury Department containing the Al Aqsa Brigade, resulting in #AlAqsa being hidden from search results,” the report found. The Al Aqsa Brigade is a known terrorist group (BuzzFeed News reported on internal discussions about the terrorism mislabeling at the time).


The report, which only investigated the period around the 2021 war and its immediate aftermath, confirms years of accounts from Palestinian journalists and activists that Facebook and Instagram appear to censor their posts more often than those of Hebrew speakers. BSR found, for example, that after adjusting for the difference in population between Hebrew and Arabic speakers in Israel and the Palestinian territories, Facebook was removing or adding strikes to more posts from Palestinians than from Israelis. The internal data BSR reviewed also showed that software was routinely flagging potentially rule-breaking content in Arabic at higher rates than content in Hebrew.

The report noted that this was likely because Meta’s artificial intelligence-based hate speech systems use lists of terms associated with foreign terrorist organizations, many of which are groups from the region. As a result, a person posting in Arabic was more likely to have their content flagged as potentially being associated with a terrorist group.

In addition, the report said that Meta had built such detection software to proactively identify hate and hostile speech in Arabic, but had not done so for the Hebrew language.

The report also suggested that — due to a shortage of content moderators in both Arabic and Hebrew — the company was routing potentially rule-breaking content to reviewers who do not speak or understand the language, particularly Arabic dialects. That resulted in further errors.

The report, which was commissioned by Facebook on the recommendation of its independent Oversight Board, issued 21 recommendations to the company. Those include changing its policies on identifying dangerous organizations and individuals, providing more transparency to users when posts are penalized, reallocating content moderation resources in Hebrew and Arabic based on “market composition,” and directing potential content violations in Arabic to people who speak the same Arabic dialect as the one in the social media post.

In a response, Meta’s human rights director, Miranda Sissons, said that the company would fully implement 10 of the recommendations and was partly implementing four. The company was “assessing the feasibility” of another six and was taking “no further action” on one.

“There are no quick, overnight fixes to many of these recommendations, as BSR makes clear,” Sissons said. “While we have made significant changes as a result of this exercise already, this process will take time — including time to understand how some of these recommendations can best be addressed, and whether they are technically feasible.”


In its statement, 7amleh said that the report wrongly characterized the bias from Meta as unintentional.

“We believe that the continued censorship for years on [Palestinian] voices, despite our reports and arguments of such bias, confirms that this is deliberate censorship unless Meta commits to ending it,” it said.