

Meta’s new content policies risk fuelling more mass violence, genocide: Amnesty Int’l
February 22, 2025
Amnesty International has warned that recent content policy announcements by Meta pose a grave threat to vulnerable communities globally and drastically increase the risk that the company will yet again contribute to mass violence and gross human rights abuses.
The human rights organization’s report details how Meta’s adjustments, which include relaxing restrictions on certain types of harmful speech, could create a breeding ground for hate speech, incitement to violence, and the spread of dangerous misinformation.
On January 7, founder and CEO Mark Zuckerberg announced a raft of changes to Meta’s content policies, seemingly aimed at currying favour with the new Trump administration. These include the lifting of prohibitions on previously banned speech, such as the denigration and harassment of racialized minorities. Zuckerberg also announced a drastic shift in content moderation practices, with automated content moderation being significantly rolled back. While these changes were initially implemented in the US, Meta has signalled that they may be rolled out internationally. This shift marks a clear retreat from the company’s previously stated commitments to responsible content governance.
Amnesty’s report, published on February 17, 2025, stated that the new policy adjustments will ultimately contribute to real-world harm, potentially fuelling atrocities in already fragile and conflict-affected societies, including Ethiopia.
According to the report, Meta claims the changes are intended to advance freedom of expression.
However, the report argues that relaxing restrictions on harmful speech, including hate speech and incitement to violence, can inflame tensions, incite hatred, and ultimately contribute to real-world violence.
Amnesty’s report further points to the potential for these changes to be exploited by those seeking to spread disinformation and manipulate public opinion, further destabilizing already fragile situations.
“Meta’s algorithms prioritize and amplify some of the most harmful content, including advocacy of hatred and misinformation,” reads the report. “With the removal of existing content safeguards, the situation looks set to go from bad to worse.”
According to the report, Meta’s recklessness has previously led to severe consequences.
It states that when Myanmar security forces carried out a brutal campaign of ethnic cleansing against Rohingya Muslims in 2017, Meta’s failure to act responsibly contributed to the atrocities.
It also argues that Meta has not learned from its past failures, which have contributed to mass violence. “Rather than learning from its reckless contributions to mass violence in countries including Myanmar and Ethiopia, Meta is instead stripping away important protections that were aimed at preventing any recurrence of such harms,” reads the report.
The report also warns that Meta’s policy adjustments risk deepening societal divisions and fuelling further conflict in already affected regions.
The report calls on governments and regional bodies to hold Meta accountable for its human rights impacts.
Amnesty’s warning is based on Meta’s past failures in content moderation, as documented in its earlier report dated October 31, 2023.
The report also detailed how Meta’s inadequate enforcement of its policies during the Tigray conflict in Ethiopia contributed to human rights abuses. According to the 2023 report, hate speech and incitement to violence on Meta’s platforms, particularly Facebook, played a role in exacerbating the conflict in the region.
Elsabet Samuel (PhD), a researcher and human rights expert, said that Meta’s new policy adjustment is a wake-up call, signalling the need for immediate action.
She said that companies like Facebook are driven by market value, and that the company is leveraging the current political climate as an opportunity to introduce its new policy adjustments.
“As I see it, the new US government accepts the role that TikTok played during its election campaign, as it features a multimedia approach that grasps the youth,” she said. “These changes have forced Facebook to adjust its marketing strategy by introducing the new policy system to reach broader audiences that were not previously within the platform’s target.”
According to Elsabet, Meta uses freedom of expression as a pretext to expand its market when introducing new policies.
However, she argued that content moderation cannot be effectively managed by algorithmic changes in countries like Ethiopia. She warned that this shift would lead to an increase in hate speech, misinformation, and disinformation, particularly in the context of ongoing conflicts in the country.
“The impact of social media in such a political landscape is clear. The upcoming situation is very alarming and worrisome, requiring urgent government attention ahead,” she told The Reporter.
In November 2021, Facebook introduced a safety feature called “Lock Your Profile” in response to mass killings and conflicts in Ethiopia and Myanmar in 2019 and 2020.
In Ethiopia, its introduction followed the 2020 assassination of renowned singer Hachalu Hundessa.
However, the war in Tigray and the atrocities committed in the region led to the deployment of human content moderators in three languages: Amharic, Afan Oromo, and Tigrigna.