“On platforms where billions of people can have a voice, all the good, bad and ugly is on display. But that’s free expression.” Joel Kaplan, Meta’s Chief Global Affairs Officer.

This observation raises a critical question: Is the best way to protect freedom of speech to let it go unchallenged?

Freedom of expression is undeniably invaluable. Joel Kaplan notes that Zuckerberg himself argued it “has been the driving force behind progress in American society and around the world.” Yet, true progress requires not only the freedom to speak but also responsibility. In a functioning democracy, freedom of speech allows citizens to express discontent and report abuses without fear of retribution. 

However, this freedom must be exercised in ways that ensure all voices are heard, not drowned out by the loudest or most aggressive participants. This matters especially on social media, where users can easily find communities that share their beliefs. While that can be beneficial in some contexts, it also creates echo chambers: “environments in which the opinion, political leaning, or belief of users about a topic gets reinforced due to repeated interactions with peers or sources having similar tendencies and attitudes.” (1) In such environments, discriminatory attitudes can become entrenched, making it far more difficult for marginalized groups to express their views freely.

This is where responsibility enters the equation. Without it, freedom of speech risks being weaponized—used to incite hatred, propagate misinformation, and even silence others.

The Responsibility of Social Media Platforms

The first layer of responsibility lies with those who regulate and manage social media platforms. Amnesty International offers a stark example:

“In 2018, Amnesty International published research that found that Twitter is a platform where violence and abuse against women flourish, often with little accountability. Instead of the platform being a place where women can express themselves freely and where their voices are strengthened, Twitter leads women to self-censor what they post and limit their interactions. As a company, Twitter is failing its responsibility to respect women’s rights online by inadequately investigating and responding to reports of violence and abuse in a transparent manner.”

This example shows that without mechanisms for accountability, platforms risk becoming breeding grounds for harm rather than hubs of productive discourse.

In response to Meta’s latest announcement, Robbie Lockie, founder of the Freedom Food Alliance, says:

“This move is not a step toward empowering the public or expanding free speech. It’s a transparent attempt to offload the responsibility of managing misinformation onto the very users already overwhelmed by it. The World Economic Forum has identified the spread of misinformation as the top global short-term threat to humanity, surpassing extreme weather events. By abandoning independent fact-checkers, Meta is shirking its responsibility to mitigate this critical issue.”

The Context of Health and Nutrition Misinformation

If we focus specifically on health and nutrition misinformation, the removal of third-party fact-checking poses a direct threat to public health. Health professionals are bound by regulations when offering advice publicly. Their guidance is nuanced, but social media algorithms often amplify more extreme, sensational voices over these balanced perspectives.

Unfortunately, although the current system has not always been effective at combating nutrition misinformation, removing third-party fact-checking risks widening this imbalance even further. Fact-checkers often rely on subject-matter experts to reach their verdicts, and they can also direct the public to trusted professionals.

The Linguistic Trap of Dichotomies

As a Cognitive Linguist, I’ve observed how social media amplifies dichotomies: “This or that,” “It’s not X, it’s Y.” These black-and-white formulas thrive online because they are simple, shareable, and provoke engagement. In the current debate about fact-checking, one dominant dichotomy has emerged: freedom of speech vs. censorship. Within this framing, fact-checking can easily fall into the latter category.

But reality is far more complex. Now more than ever, we need to broaden this debate beyond false binaries and make room for nuance and middle ground: an understanding of how freedom of speech and fact-checking can coexist.

Who Are Independent Fact-Checkers?

Misinformation can be posted in seconds, but verifying it takes time. Fact-checking organizations spend far longer assessing a claim than it takes to share one, because fact-checking isn’t just about labeling something as true, false or misleading; it’s about weighing evidence, considering context, and presenting findings transparently. Community Notes, while valuable in some contexts, may lack the rigor, expertise, and impartiality that professional fact-checking organizations bring to the table.

In the words of Meta, this is how its third-party fact-checking system operates: “Working with independent, IFCN-certified fact-checkers who identify, review and rate viral misinformation across Facebook, Instagram and WhatsApp.”

So who are ‘independent fact-checkers’? These aren’t random individuals behind screens making subjective judgments. Achieving IFCN accreditation is a rigorous process requiring adherence to a code of principles. At the heart of these principles is striving to remain impartial, not only in delivering fact-checks, but also in the selection of claims to fact-check.

While total objectivity may be impossible, these principles provide a foundation for trust, and they are essential if fact-checkers are not to become just another set of participants in the debates they assess.

By contrast, relying solely on community users to flag misinformation risks becoming a battle of “who’s the loudest voice” rather than a pursuit of clarity.

According to Robbie Lockie, this move raises significant concerns about the future of accurate information on social media, and is likely to make navigating the world of online information even more confusing. He also questions the timing of the decision, “especially in light of Meta’s recent struggles with their AI-generated profiles spreading misinformation. Rather than addressing these challenges by strengthening their fact-checking infrastructure, Meta appears to be stepping back from accountability.”

A Balanced Path Forward

Accountability, transparency, and responsibility are not threats to freedom of speech. They are tools that enable it to flourish and to ensure that everyone’s voice can be heard. 

If we value freedom of speech, we must also value the structures that ensure it serves the public good. Fact-checking is one such structure—an imperfect, but in my opinion, essential one.