Facebook’s cloudy days have just grown darker, with another whistleblower coming to the surface.
Leaked documents exposed by whistleblower Frances Haugen and accessed by The Associated Press detail Facebook’s continuing struggle to curb hate speech, misinformation and inflammatory posts – particularly anti-Muslim content in its largest growth market, India. The documents also reveal key details, including employees’ doubts about the company’s motivations and interests.
Whistleblower Frances Haugen
Frances Haugen worked at Facebook for two years as a product manager on the company’s civic integrity team. There, she took on responsibilities such as tracking the spread of misinformation on the platform and ensuring that the platform was not used to destabilize democracy. The team, however, was disbanded soon after the US presidential election in 2020.
Unmasking the sloppiness of the social media giant
“There were conflicts of interest between what was good for the public and what was good for Facebook,” Haugen said in an interview. “And Facebook over and over again chose to optimize for its own interests like making more money.” Haugen also accused the platform of lying about the amount of progress it has made in combating hate speech online. She went as far as claiming that Facebook was used to plan the Capitol riot on January 6, after the company chose to turn off safety systems following the US presidential election.
According to a CNN report, Haugen also filed around eight complaints with the US Securities and Exchange Commission, alleging that Facebook was withholding research about its shortcomings from investors and the public.
She also leaked tens of thousands of internal company documents to the Wall Street Journal, which then published a series of reports showing that Facebook was aware of the negative effects of misinformation and the harm its platforms cause, particularly to teenage girls, but was doing little to stop them.
How Facebook failed India
Multiple reports described how Facebook’s algorithms recommended content that incited violence, and how the company failed to stop misinformation among its biggest user base.
These reports were written by Facebook employees about how the social media platform affects India. One report explained how a Facebook researcher created a new user account as a person living in Kerala. The account followed groups, watched videos and checked out new pages, all recommended by Facebook. “Following this test user’s News Feed, I’ve seen more images of dead people in the past three weeks than I’ve seen in my entire life total,” the researcher wrote in the report.
The reports also showed how chaotic Facebook was during the 2019 general election in India. The platform was filled with bots and fake accounts tied to the ruling BJP and opposition figures “wreaking havoc”. Facebook had announced steps to counter misinformation and partnered with fact-checkers in the run-up to the election, but it also created a “political white list” exempting some politicians from fact-checking. The company further found bots and fake accounts spreading misinformation about the voting process.
The New York Times Report
In February 2019, a Facebook researcher created an account to look into what the social media website would look like for a person living in Kerala, the New York Times reported on Saturday.
“For the next three weeks, the account operated by a simple rule: Follow all the recommendations generated by Facebook’s algorithms to join groups, watch videos and explore new pages on the site. The result was an inundation of hate speech, misinformation and celebrations of violence, which were documented in an internal Facebook report published later that month,” the US newspaper said in its report.
“Internal documents show a struggle with misinformation, hate speech and celebrations of violence in the country, the company’s biggest market,” said the report based on disclosures obtained by a consortium of news organisations, including the New York Times and the Associated Press news agency.
The internal documents include details on how bots and fake accounts tied to the “country’s ruling party and opposition figures” were wreaking havoc on India’s national elections, the report said.
In a separate report produced after the 2019 national elections, Facebook found that “over 40 per cent of top views, or impressions, in the Indian state of West Bengal were fake/inauthentic”, the newspaper reported. One inauthentic account had amassed more than 30 million impressions.
In an internal document – titled Adversarial Harmful Networks: India Case Study – Facebook researchers wrote that there were groups and pages “replete with inflammatory and misleading anti-Muslim content” on the social media platform.