Facebook Fails Anew In Detecting Hate Speech Ads
The test couldn't have been any easier — and yet Facebook failed.
Photo insert: The 12 text-based advertisements used dehumanizing hate speech to call for the murder of people from Ethiopia's three main ethnic groups: Amhara, Oromo, and Tigrayans.
Facebook and its parent company Meta failed yet again to detect clearly violent hate speech in advertisements submitted to the platform by the nonprofit organizations Global Witness and Foxglove, Barbara Ortutay reported for the Associated Press (AP).
The hateful messages were centered on Ethiopia, where internal documents obtained by whistleblower Frances Haugen revealed that Facebook's ineffective moderation is "literally fanning ethnic violence," as she stated in congressional testimony in 2021.
Global Witness conducted a similar test with hate speech in Myanmar in March, which Facebook also failed to detect.
The group created 12 text-based advertisements that used dehumanizing hate speech to call for the murder of people from Ethiopia's three main ethnic groups: Amhara, Oromo, and Tigrayans.
Facebook's systems approved the ads for publication, just as they had the Myanmar ads. The ads were never actually published on Facebook.
This time, however, the group informed Meta of the undetected violations. The company stated that the ads should not have been approved and emphasized its work "building our capacity to catch hateful and inflammatory content in the most widely spoken languages, including Amharic."
A week after hearing from Meta, Global Witness submitted two more ads for approval, both of which contained blatant hate speech. The two advertisements, which were written in Amharic, Ethiopia's most widely spoken language, were approved.