So far the Facebook Papers have led to dozens of stories about how the company knew it was failing to remove hate speech, misinformation, and calls to violence in languages across the globe. As much as this focus on Facebook’s global harm is vital, we shouldn’t overlook the role that the social media language gap plays in harming communities within the United States.
On a recent episode of Last Week Tonight, John Oliver discussed online platforms’ failure to curb the spread of non-English misinformation. While companies like Facebook and YouTube have made some inroads in addressing the problem in English, they’ve allowed misinformation to spread unchecked in other languages, with disastrous results. In the lead-up to the 2020 election, disinformation campaigns targeted marginalized communities to suppress voter turnout. And during the pandemic, cruel disinformers have blanketed the Latinx community with blatant falsehoods about the COVID-19 vaccine. Latinx people already make up a disproportionate share of the essential workforce and are four times more likely to be hospitalized with COVID-19 than the general population.
Oliver’s observation that the targeting of misinformation at diaspora communities in the United States is “exacerbated by the fact that there aren’t alternative sources of news” for these communities in their own languages isn’t a new revelation. This vital gap is one that our organizations, Viet Fact Check and Free Press, have long been fighting to fill. Those efforts include pushing social media platforms to crack down on non-English misinformation: Viet Fact Check has drawn attention to YouTube’s indifference to Vietnamese-language misinformation, and Free Press, along with the National Hispanic Media Coalition and the Center for American Progress, has urged Facebook to remedy the way the spread of Spanish-language conspiracy theories and other lies is fueling hate and discrimination.
We’ve examined how election and health misinformation have harmed our respective communities in the United States. The results confirm a clear pattern of neglect: while the platforms still have a ways to go in enforcing their own policies in English, their enforcement in other languages is far worse. Even though YouTube banned InfoWars, it ignored the Vietnamese-American version of Alex Jones for months; the company only took action after John Oliver’s segment aired. And despite our efforts to directly flag Spanish-language posts containing explicit calls to violence, Facebook’s moderators relied on a shoddy English translation to justify their inaction. Simply put, these companies are not doing nearly enough to keep our people safe.
Facebook and YouTube roadblocks
Public pressure and awareness of this issue are critical to finding a path forward, but they’re not enough. Our efforts to engage directly with the platforms have been frustrated at every turn—both YouTube and Facebook have failed to be transparent about the full extent of the problem. Facebook is also systematically shutting down access to academics and researchers studying the way misinformation spreads across the platform.
We’ve run into roadblocks when speaking with staff at the two companies. No one would even confirm whether anyone is in charge of moderating non-English content within the United States. In our interactions, the companies tried to portray non-English misinformation solely as an international issue, and therefore none of our concern. Meetings that we pursued for months turned into basic presentations that did little to address whether YouTube or Facebook has built any systems to protect people from misinformation in languages other than English. We kept asking questions, but it was clear the companies were stalling and we wouldn’t get any straight answers.
As misinformation escalates about crucial matters like COVID-19 vaccines, a report from the Institute for Strategic Dialogue identified major gaps in Facebook’s fact-checking program when it comes to non-English languages. The report found that far more fact-checkers are dedicated to English-language content, leaving the same viral falsehoods to spread unchecked in other languages. The platforms have refused to share any details about what they’re doing to limit the spread of toxic content in other languages. Facebook’s and YouTube’s responses to a series of letters sent by Sen. Ben Ray Luján, Sen. Amy Klobuchar, and dozens of other members of Congress were evasive, incomplete, and just plain disrespectful.
Most recently, Facebook whistleblower Frances Haugen provided documents detailing how the safety of our communities is not a priority, testifying before Congress about the company’s profits-before-people approach: “It seems that Facebook invests more in the users that make more money, even though the danger may not be evenly distributed based on profitability.” The disparity in moderation practices across languages reflects Facebook’s tunnel vision when it comes to prioritizing growth and profits. And this isn’t the first time Facebook’s failures and unwillingness to protect its users have come to light. Months before Haugen came forward, Sophie Zhang, a Facebook data scientist, spoke publicly about her work combating fake accounts and political manipulation in other parts of the globe while leadership at Facebook looked the other way, simply because there was little risk of public-relations blowback. The Facebook Papers are further confirmation of the company’s inability to prevent hate and misinformation in other parts of the globe.
In other words, Facebook spends little time and effort protecting users who don’t directly contribute to the company’s profits or to negative press coverage in the United States.
Keeping all communities safe
In the face of mounting evidence, it’s clear these companies have no interest in solving this problem on their own. Solutions to misinformation require that platforms like Facebook and YouTube reject business models that are designed to profit from attention, regardless of how users and their wider communities are affected.
The way misinformation has been allowed to spread on social media is a perfect storm of willful neglect, social engineering, and the prioritization of profit. The platforms constantly collect our personal and demographic data to hyper-personalize our news feeds and video recommendations. Misinformation is crafted to appeal to the anxieties and vulnerabilities of specific groups with stunning accuracy, driving more clicks, more comments, and therefore more views. That engagement in turn feeds algorithms designed solely to spread content, regardless of whether it’s truthful. As we’ve seen in recent days, the lack of oversight and of even the most basic investment in non-English content creates a vulnerability that allows bad actors to profit while flouting the rules the platforms claim to enforce for everyone.
To fully understand the cost of this disinformation to a democratic and open society, we need more clarity on how these algorithms determine what we see. Right now, Facebook and YouTube don’t train their algorithms to tell the difference between a truth and a lie. Every click amplifies content designed to keep us engaged. And when clicks and engagement translate directly into dollars, the problem goes beyond people posting lies online. The system is built for disinformers: if their content is compelling enough, it can quickly reach millions, generating ad dollars for the platforms and fueling still more engagement. Our communities suffer because lies create profits.
So what’s next? If the platforms want to operate on a global scale, then language shouldn’t be a barrier to keeping communities safe. Congress and the Federal Trade Commission must work together to adopt a privacy framework that protects the civil rights of people living within our multilingual and diverse democracy.
Platforms must also produce regular transparency reports and allow access to independent researchers seeking to understand the depth and breadth of the harms caused by these companies’ engagement-driven business model. Legislation addressing some of these issues already exists in Sen. Ed Markey and Rep. Doris Matsui’s Algorithmic Justice and Online Platform Transparency Act.
Now more than ever, it should be obvious that language discrimination hurts all people in the United States. The health and safety of our communities is not something that should get lost in translation.