In India, Facebook Grapples With an Amplified Version of Its Problems

On February 4, 2019, a Facebook researcher created a new user account to experience the social media site as a resident of Kerala, India, would.

For the next three weeks, the account operated by a simple rule: follow all the recommendations generated by Facebook’s algorithms to join groups, watch videos, and explore new pages on the site.

The result was a flood of hate speech, misinformation, and celebrations of violence, documented in an internal Facebook report published later that month.

“Following this test user’s news feed, I’ve seen more images of dead people in the past three weeks than I’ve seen in my entire life,” the Facebook researcher wrote.

The report was one of dozens of studies and memos written by Facebook employees grappling with the platform’s effects on India. They underscore one of the most serious criticisms that human rights activists and politicians have leveled against the global company: that it moves into a country without fully understanding its potential impact on local culture and politics, and without deploying the resources to act when things go wrong.

With 340 million people using Facebook’s various social media platforms, India is the company’s largest market. And Facebook’s problems on the subcontinent present an amplified version of the issues it has faced throughout the world, exacerbated by a lack of resources and a lack of expertise in India’s 22 officially recognized languages.

The internal documents, obtained by a consortium of news organizations including the New York Times, are part of a larger cache of material known as the Facebook Papers. They were collected by Frances Haugen, a former Facebook product manager who became a whistleblower and recently testified before a Senate subcommittee about the company and its social media platforms. References to India were scattered among the documents Haugen filed with the Securities and Exchange Commission earlier this month.

The documents include reports on how bots and fake accounts tied to the country’s ruling party and opposition figures wreaked havoc on national elections. They also detail how a plan championed by Facebook’s chief executive, Mark Zuckerberg, to focus on “meaningful social interaction,” or exchanges between friends and family, led to more misinformation in India, particularly during the pandemic.


According to the documents, Facebook did not have enough resources in India to tackle the problems it had introduced there, such as anti-Muslim posts. Eighty-seven percent of the company’s global budget for time spent classifying misinformation is earmarked for the United States, while only 13 percent is set aside for the rest of the world, even though North American users make up only 10 percent of the social network’s daily active users, according to a document describing Facebook’s allocation of resources.

Facebook spokesman Andy Stone said the figures were incomplete and did not include the company’s third-party fact-checking partners, most of whom are outside the United States. That lopsided focus on the United States has had consequences in a number of countries besides India. According to company documents, Facebook put in place measures to demote misinformation during the November election in Myanmar, including disinformation shared by the Myanmar military junta.

The company rolled back those measures after the election, despite findings that they lowered the number of views of inflammatory posts by 25.1 percent and of photo posts containing misinformation by 48.5 percent. Three months later, the military carried out a violent coup in the country. After the coup, Facebook said it implemented a special policy to remove praise and support of violence in the country, and it later banned the Myanmar military from Facebook and Instagram.

In Sri Lanka, people were able to automatically add hundreds of thousands of users to Facebook groups, exposing them to violence-inducing and hateful content. In Ethiopia, a nationalist youth militia group successfully coordinated calls for violence on Facebook and posted other inflammatory content.

Stone said Facebook had invested heavily in technology to find hate speech in various languages, including two of India’s most widely used, Hindi and Bengali. He added that Facebook had halved the amount of hate speech people see globally this year.

“Hate speech against marginalized groups, including Muslims, is on the rise in India and globally,” Stone said. “So we are improving enforcement and are committed to updating our policies as hate speech evolves online.”

In India, “there is definitely a question about resourcing” for Facebook, but the answer is not “just throwing more money at the problem,” said Katie Harbath, who spent 10 years as a director of public policy at Facebook and worked directly on securing India’s national elections. She said Facebook needed to find solutions that could be applied to countries around the world.

Facebook employees have run various tests and field studies in India for several years. That work increased ahead of India’s 2019 national elections. In late January of that year, a handful of Facebook employees traveled to the country to meet with colleagues and speak with dozens of local Facebook users.

According to a memo written after the trip, one of the key requests from users in India was that Facebook “take action on types of misinfo that are connected to real-world harm, specifically politics and religious group tension.”

Ten days after the researcher opened the fake account to study misinformation, a suicide bombing in the disputed border region of Kashmir set off a round of violence and a surge of accusations, misinformation, and conspiracy theories between Indian and Pakistani nationals.

After the attack, anti-Pakistan content began circulating in the Facebook-recommended groups the researcher had joined. Many of the groups, she noted, had tens of thousands of users. A separate Facebook report, published in December 2019, found that Indian Facebook users tended to join large groups, with the country’s median group size at 140,000 members.

Graphic posts, including a meme showing the beheading of a Pakistani national and dead bodies wrapped in white sheets on the ground, circulated in the groups she had joined.

After the researcher shared her case study with co-workers, her colleagues commented on the posted report that they were concerned about misinformation ahead of the upcoming elections in India.

Two months later, after India’s national elections had begun, Facebook put in place a series of steps to stem the flow of misinformation and hate speech in the country, according to an internal document called the Indian Election Case Study.

The case study painted an optimistic picture of Facebook’s efforts, including the addition of more fact-checking partners (the third-party network of outlets to which Facebook outsources fact-checking) and an increase in the amount of misinformation it removed. It also noted how Facebook had created a “political whitelist to limit PR risk,” essentially a list of politicians who received a special exemption from fact-checking.

The study did not address the serious problems the company faced with bots in India, or issues such as voter suppression. During the election, Facebook saw a spike in bots, or fake accounts, linked to various political groups, as well as efforts to spread misinformation that could have affected people’s understanding of the voting process.

In a separate post-election report, Facebook found that over 40 percent of top views, or impressions, in the Indian state of West Bengal were “fake/inauthentic.” One inauthentic account had amassed more than 30 million impressions.

A report released in March showed that many of the problems cited during the 2019 elections persisted.

In an internal document called Adversarial Harmful Networks: India Case Study, Facebook researchers wrote that there were groups and pages on Facebook “replete with inflammatory and misleading anti-Muslim content.”

The report said there were a number of dehumanizing posts comparing Muslims to “pigs” and “dogs,” as well as misinformation falsely claiming that the Quran, the holy book of Islam, calls on men to rape their female family members.

Much of the material circulated in Facebook groups promoting Rashtriya Swayamsevak Sangh (RSS), an Indian right-wing nationalist group closely associated with the ruling Bharatiya Janata Party (BJP). The groups took issue with an expanding Muslim minority population in West Bengal and near the Pakistani border, called for the expulsion of Muslim populations from India, and published posts on Facebook promoting a Muslim population control law. The report indicated that Facebook knew such harmful posts proliferated on its platform and that it needed to improve its “classifiers,” the automated systems that can detect and remove posts containing violent and inciting language. Facebook also hesitated to designate the RSS as a dangerous organization because of “political sensitivities” that could affect the social network’s operations in the country.

Facebook said it had trained its artificial intelligence systems in five of India’s 22 officially recognized languages. (It said it had human reviewers for some others.) But in Hindi and Bengali, it still did not have enough data to adequately police the content, and much of the content targeting Muslims “is never flagged or actioned,” the Facebook report said.

As recently as five months ago, Facebook was still struggling to efficiently remove hate speech against Muslims. Another company report detailed efforts by Bajrang Dal, an extremist group linked to the BJP, to publish posts containing anti-Muslim narratives on the platform.

The document showed that Facebook was weighing whether to designate the group as a dangerous organization because it was “inciting religious violence” on the platform. But it had not yet done so.
