Facebook researchers found its services are used in India to spread religious hatred: Report

New Delhi: A report prepared by Facebook’s researchers found that WhatsApp was awash with “rumours and calls to violence” during the Delhi riots of February 2020, while users who were interviewed said they were frequently subjected to “a large amount of content that encourages conflict, hatred and violence on Facebook and WhatsApp”, according to a Wall Street Journal report.

The report, the latest in a series by the newspaper based on internal Facebook communications, says Facebook is aware that its services are used in India to spread religious hatred. It also highlights the problems that Facebook whistleblower Frances Haugen cited, including that Hindutva groups spread anti-Muslim content but are protected due to ‘political sensitivities’.

The Wall Street Journal says the internal documents “offer an unparalleled look at how its rules favour elites, its algorithms breed discord, and its services are used to incite violence and target vulnerable people”.

In a July 2020 internal report titled ‘Communal Conflict in India Part 1’, researchers identified three ‘key crisis events’ in India across 2019 and 2020. The first was the protests against the Citizenship Amendment Act (CAA), during which time researchers said misinformation, rumours, demonising content and hate speech spiked “300% above previous levels” on Facebook platforms.

The second crisis event was the Delhi riots, during which researchers identified rumours and calls to violence, particularly on WhatsApp. The third began with the pandemic, when Facebook's services carried fear-mongering content that blamed Muslims for the spread of COVID-19.

The researchers wrote that a Hindu man in Delhi told them he received frequent messages on Facebook and WhatsApp “that are all very dangerous,” such as “Hindus are in danger, Muslims are about to kill us.”

A Muslim man in Mumbai said there was "so much hatred going on" on Facebook that he feared for his life. "It's scary, it's really scary." He said that "if social media survives 10 more years like this, there will be only hatred," according to the WSJ report. If the company does not moderate content that encourages conflict, India will be a "very difficult place to survive for everyone," he said. Most users interviewed considered Facebook responsible for moderating this content, the report says.

The WSJ report also reviewed the internal document titled ‘Adversarial Harmful Networks – India Case Study’, which Haugen cited in her complaint to the US Securities and Exchange Commission (SEC).

As The Wire had reported, that document showed Facebook was aware that anti-Muslim narratives targeted pro-Hindu populations with violent and incendiary intent. It specifically accused RSS groups, users and pages of promoting fear-mongering.

“There were a number of dehumanizing posts comparing Muslims to ‘pigs’ and ‘dogs’ and misinformation claiming the Quran calls for men to rape their female family members,” it said.

The report also found that Facebook lacks the technical ability to track this misinformation and divisive content in local Indian languages, saying, “Our lack of Hindi and Bengali classifiers means much of this [anti-Muslim] content is never flagged or actioned.”

Former Facebook employee and whistleblower Frances Haugen testifies during a Senate hearing on Capitol Hill, Washington, US, October 5, 2021. Photo: Matt McClain/Pool via Reuters

According to the WSJ report, the researchers recommended that Facebook invest more resources in building out the underlying technical systems meant to "detect and enforce on inflammatory content in India," the way human reviewers might. They also suggested creating a "bank" of inflammatory material to study what people were posting, and building a reporting system within WhatsApp that would let users flag specific offending messages and categorise them by their contents.

The newspaper said that another internal document, prepared this year, found that the Bajrang Dal had “previously used WhatsApp to organize and incite violence.” It had been considered for designation as a dangerous group, which would result in a permanent ban, and was recommended to be taken down. However, Bajrang Dal remains active on the social media platform.

This suggests that the Bajrang Dal enjoys protection due to its close links with the BJP: an earlier WSJ report had said Facebook baulked at removing the group from the platform after its security team warned that "cracking down on the group might endanger both the company's business prospects and its staff in India and risk infuriating" the BJP. The newspaper had also reported that Facebook's top public policy executive in India had opposed applying hate-speech rules to BJP politicians, again citing political considerations.

Facebook spokesperson Andy Stone told the WSJ that some of the reports were "working documents containing investigative leads for discussion", rather than "complete investigations, and didn't contain individual policy recommendations". He claimed that the company had invested significantly in technology to find hate speech across languages, and that such content had been declining. The Wire
