Free Speech and the Need for Content Moderation

Updated August 12, 2022




America prides itself on being the land of the free, a place where all people and opinions are accepted. Freedom of speech is enshrined in the First Amendment to the Constitution and is a core value the country was built upon. Throughout most of our history, that freedom has remained largely uncontested. However, the rise of technology is redefining the term in the digital space. Thanks to its ease of access and availability, anyone can become an influencer and share posts for all of their followers to see and spread.

Unlike corporate organizations and mass media outlets, these individuals are usually not held to any honest or ethical standards. On some platforms they may not even need to attach a real name or face to their posts, and can hide behind their screens without taking ownership of the content they spread. This has led to online chaos: the public has become distrustful of everything it reads, extremists and hate speech are being given a platform, and global events are taking a turn for the worse. This is not to say that social media has had no positive impact on everyday life, but the negative impacts have been significant, and they present a substantial challenge to our understanding of free speech and what it entails in the changing landscape of online spaces.

Private institutions and corporations currently handle all of their content moderation with little to no restriction, giving them enormous power and influence over future landmark events. This raises several questions: Should the government regulate these institutions more closely? Does the system need to change? How accountable are these institutions for the events they help precipitate? Throughout this paper, I will attempt to make a case for improved content moderation on social media, outline the challenges such an undertaking presents, and consider how we can reconcile moderation with our right to freedom of expression and speech. Let us begin by taking a closer look at the impact social media has had on the changing political landscape over the past few years.

With the growing wave of nationalism across the world, the election of nationalist leaders in various countries, and the volume of debate and misinformation now rampant on social media, the public is more vulnerable than ever, and the course of history is changing with it. The most prominent example is the Russian interference in the 2016 US presidential election and the resulting fallout. Many, including Professor Kathleen Hall Jamieson of the University of Pennsylvania [1], believe that Trump would not be president had it not been for the help of Russian hackers.

Jamieson highlights the roles of the strategically timed WikiLeaks release of Hillary Clinton’s emails, which captured the media’s attention and buried news that might have been more favorable to Clinton, and of Russian social media campaigns targeted at undecided voters. Using stolen confidential voter data, hackers aimed their attacks at on-the-fence voters, creating fake profiles that propagated false news stories and persuaded enough voters either to vote for Trump or not to vote at all. Not only did such misinformation and “gaming” of social media change the outcome of the election, it also intensified the divide between the political parties and their voters. That divide did not stop once the election was over.

The spread of false news, often circulated by our own president, has increased distrust of the media, inciting anger in some and defensiveness in others. Members of both parties continue to belittle each other and often surround themselves with echo chambers of “news” – be it real or fake – that favors their beliefs. In recent history there has never been such a polarizing political climate, and it has resulted in increased prejudice, hate speech, and even extremist action. All of this is now a constant part of our lives, as social media is flooded with articles, opinions, and news that are often very difficult to assess as false, leading to more distrust, anger, and polarization among the general public.

This entire chain of events raises the question: when does free speech become so detrimental to the interests of the people that some level of censorship or moderation becomes necessary? Would the election have gone a different way if incredibly polarizing viewpoints and echo chambers had been removed from social media? Do “fake news” and misleading opinions not still fall under our current definition of free speech? Is it in fact right to curb free speech on social media platforms? Would doing so even be any different from attempting to steer elections or public opinion in a direction favorable to whoever controls the medium or platform?

And, of course, how would one even go about such content moderation without letting personal biases leak into it? Looking beyond the US presidential election and the subsequent midterms, we can see social media’s impact on a myriad of political events in other countries too. It was recently confirmed that the “Vote Leave” campaign’s advertising on Facebook broke British laws on election campaign spending limits, influencing the outcome of the Brexit vote [2].

On the other side, governments are also guilty of using similar tactics and censorship on social media to sway their own populations and bury pressing issues. An extreme example is the Chinese government, which, after deadly riots between the Uighur minority and Chinese police, flooded social media with posts lauding China’s economic development in an attempt to distract the public [3]. China routinely practices these tactics in order to present itself as a flawless regime while ignoring major civil rights violations and refusing to acknowledge discrimination against minority communities.

Even though these events occurred outside the US and did not directly affect US residents, the lessons they teach are relevant. On the surface they might not even present a strong case for content moderation – since, spending aside, all of the messaging in these cases was perfectly legal and, depending on whom you ask, largely reasonable. And that is why it is so important to establish a stronger framework for content moderation on social media, one that can actually navigate such complicated situations, where there will never be just one correct answer.

In fact, these cases also show that legislation and government oversight of content moderation might create a slippery slope that more authoritarian regimes could easily misuse to their advantage. Moving past governments and elections, let us consider some cases that are perhaps a little closer to home. Along with the widening political rift, we are seeing an upsurge in hate crimes, hate groups, and general extremism. Groups that specifically target people based on race, income, immigration status, and the like are on the rise. They have found social media to be the perfect tool for recruitment and outreach, successfully spreading their harmful ideas and, unfortunately, a great deal of hatred and anger as well.

We are seeing the rise of white nationalism, neo-Nazism, strongly anti-immigrant groups, and dozens of other ideologies that should have no place in modern society. Yet on Facebook, Twitter, Reddit, and 4chan, these ideologies have found a home, easy access to people, and an almost foolproof way to sow the seeds of hatred in people’s minds. Nowhere is the effect seen as severely as in Myanmar, where social media has been turned into a tool for ethnic cleansing. Through a spread of misinformation used to turn Buddhists against Muslims, we are witnessing one of the worst atrocities of the 21st century, and very little is being done to stop it [4].

Unlike in many of the earlier cases, the answer here seems simpler – content moderation is in fact important and could work wonders in such a scenario. However, we are presented with an entirely new problem. The content will be moderated by ordinary people – people who have their own biases and beliefs. How do we ensure uniform standards of content moderation when we have no way of ensuring uniformity in the beliefs and judgement of the reviewers? Having discussed the impact of free speech and moderation on politics and larger groups of people, we now move to a more personal level – the impact of unmoderated content on private citizens.
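One common mitigation for individual reviewer bias – sketched below under the assumption of a simple remove/keep workflow, with function and label names that are illustrative rather than any platform's actual API – is to have several independent reviewers judge each item and aggregate their verdicts, so that no single person's beliefs decide a borderline case:

```python
from collections import Counter

def aggregate_decisions(reviews):
    """Combine several reviewers' verdicts ("remove" or "keep") by majority vote.

    Ties are escalated to a senior reviewer rather than decided automatically,
    so one individual's bias never settles a genuinely contested case.
    """
    counts = Counter(reviews)
    removes, keeps = counts["remove"], counts["keep"]
    if removes > keeps:
        return "remove"
    if keeps > removes:
        return "keep"
    return "escalate"
```

A clear majority yields a decision (`aggregate_decisions(["remove", "remove", "keep"])` returns `"remove"`), while a split panel escalates instead of guessing – a design choice that trades speed for consistency.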

As mentioned earlier, the spread of misinformation has produced massive public distrust of government processes and law enforcement. This, together with successful social media movements, has also fueled the rise of vigilante groups who try to take matters of justice into their own hands. They are not necessarily prejudiced against an entire group of people; instead they tend to focus on individuals or small groups they believe are “negative elements” of society. Social media is clearly a powerful tool, but far too often it becomes a vehicle for vigilante justice or unreasonable punishment based on misinformation. Individuals set out with good intentions of serving the greater good and stopping future crime and violence.

However, this often fails, and because of the lack of restrictions and regulations on media platforms, the consequences can be dire, notably doxxing and witch hunts. Take, for example, the Boston Marathon bombing. People around the world were rightfully devastated and furious, which motivated many individuals to take action. On Reddit, a subreddit was created by users to try to identify the bomber. According to Google Analytics, the event caused a huge surge in Reddit traffic, peaking at about 272,000 users [5].

On these discussion forums, Redditors – as they are known – researched, collaborated, and discussed possible suspects before settling on a name: Sunil Tripathi, a 22-year-old American student who had gone missing a month before the bombings. This identification, although unsubstantiated, gained a great deal of attention, with his name trending on Twitter and appearing in news articles. Tripathi’s family was harassed with phone calls, threats, and media reporters. As we now know, Sunil was misidentified and innocent. A few days later, his body was found floating in a river.

Because of this witch hunt, his family went through intense, unnecessary trauma and hate, and feared for their safety. What they endured is unacceptable on every level, and it has prompted discussion of whether “crowd-sourced investigations” should be avoided in the future. In this case, free speech led to dangerous collaboration and speculation that harmed an innocent family. Should social media platforms work to stop these kinds of dangerous activities before they occur? Improved content moderation could well have stopped such an incident from happening in the first place.

However, such instances of the internet banding together have also led to the successful identification and conviction of criminals. How can moderators best determine when it is appropriate to step in? In most cases much of the harassment does not happen on the original platform, yet stepping in too late may mean that sensitive private information has already been made public in these “witch hunts”. Moving from forums to private messaging services, India’s WhatsApp killings reveal more of the dark side of social-media-inspired vigilantism.

Again, this started with good intentions – raising awareness of child kidnappings and helping parents and adults be more alert. Rumors targeting outside visitors and behaviors that would otherwise be considered normal began circulating on the messaging service WhatsApp. Videos of supposed “child kidnappings” were taken out of context and forwarded to friends and other users, inciting rage in communities. Vigilantes decided they needed to take control of the situation and began carrying out mob attacks on travellers and community members they deemed suspicious. Mob justice is not uncommon in India, and the results were devastating, including the lynching of several innocent people and other severe injuries [6].

Law enforcement says there was no truth to the rumors and that all the victims were innocent. These outcomes are truly tragic and unacceptable. WhatsApp may not be accountable for how people use its service, but it is responsible for enabling the anonymous forwarding of messages, says digital rights activist Nikhil Pahwa. At the government’s insistence that WhatsApp take action, the messaging service introduced a feature that flags messages forwarded from others, along with a plan to educate users on how to spot fake news and rumours.
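The mechanics of such a flag are simple to sketch. The snippet below is an illustrative model, not WhatsApp's actual implementation (the class, limit, and function names are all assumptions): the "forwarded" label travels with the message itself, so every recipient can see that the content did not originate with the sender, and a cap on repeated forwarding slows viral spread:

```python
from dataclasses import dataclass

FORWARD_LIMIT = 5  # hypothetical cap, in the spirit of WhatsApp's later forwarding limits

@dataclass
class Message:
    text: str
    forward_count: int = 0  # how many hops this content has already travelled

def forward(message: Message) -> Message:
    """Return a copy of the message labelled as forwarded.

    The count is carried in the message metadata, so the label cannot be
    stripped simply by re-sending the text from a new account's chat window.
    """
    if message.forward_count >= FORWARD_LIMIT:
        raise ValueError("forwarding limit reached for this message")
    return Message(text=message.text, forward_count=message.forward_count + 1)

def render(message: Message) -> str:
    """Show the message as a recipient would see it, with the flag if forwarded."""
    label = "Forwarded: " if message.forward_count > 0 else ""
    return label + message.text
```

An original message renders as plain text, while any forwarded copy carries the visible label – the modest transparency mechanism the paragraph above describes, without any content inspection at all.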

While this is a good start, is it enough? Or has WhatsApp simply tried to rid itself of responsibility for these incidents with a few small initiatives? And if WhatsApp were to take a stronger stand on moderating content, how would that square with the fact that WhatsApp is supposed to be a private messaging service? How can content moderation navigate the increasingly complicated landscape of social media and messaging platforms, where privacy is an ever-increasing concern? No discussion of online personal attacks would be complete without mention of cyberbullying, which includes not only mean, hateful comments and messages but also cyberstalking and impersonation.

With the rise of social media use, especially among young children and teens, cyberbullying continues to grow as well. Over 40% of children say they have been bullied at least once, and 21% of teens report that they check social media only to see whether hurtful comments are being written about them [7]. These comments negatively affect many children’s academics and personal lives. In a 2010 report, half of the UK’s suicides among 10-14 year olds were blamed on bullying [8]. Meanwhile, the bullies get away almost scot-free, hiding behind a computer screen that makes them even more indifferent to the effects of their hurtful words. It is widely agreed that bullying online is easier to carry out, and to get away with, than bullying in person.

Even as social media platforms make an effort to curtail abusive comments and raise awareness of the issue, the numbers continue to rise. It is not enough. Having discussed why content moderation requires improvement and the challenges posed by the social media world, it is now time to discuss how to balance moderation with our freedom of speech. The biggest legislative debate in this realm is whether social media platforms are still merely platforms, or whether they have extended their powers so far that they have become publishers. For years, major platforms have been shielded by Section 230 of the Communications Decency Act, which protects online platforms from liability for their users’ hateful or unlawful content and postings [9].

Congress created this section in hopes of facilitating “forums for true diversity of political discourse”. However, many officials and public figures feel that social media platforms have abused this protection, which was granted on the assumption that they would remain impartial channels of open communication rather than curators. Even Mark Zuckerberg has given conflicting answers as to what Facebook is, stating in August 2016 that his company was a “tech company not a media company”, implying that it is a platform.

Later that year, he stated that Facebook was not a “traditional technology company”, suggesting that the company might in many ways be like a publisher too. Recently, Facebook went so far as to call itself a publisher, arguing in court that its decisions about “what not to publish” should be protected because it is a “publisher” [11]. Publishers are not protected under Section 230 and are considered responsible for all content on their sites. Technology has clearly blurred the line between platform and publisher.

However, legislation has yet to catch up. Until it does, technology companies and social media platforms will keep finding loopholes, flip-flopping between the two categories to get the best of both worlds. We need stronger legislation so that these companies are motivated to take more responsibility for the content they allow. Throughout this paper we have discussed a variety of scenarios in which content moderation could have made a positive impact on people’s lives, as well as the problems we will face in making good content moderation the norm. A variety of concerns accompany this problem, and there is certainly no easy solution.

However, without an initial impetus, it seems incredibly unlikely that technology companies will take even the first big step towards a proper, comprehensive content moderation policy that would ensure a better and safer internet for all of us. Instead we will be stuck with the minor improvements these companies make in the face of each major PR crisis in order to appease customers. Truly effective moderation that adequately balances free speech and user safety is a long way off; I believe we should aim for steady, progressive improvement rather than immediate perfection, and hopefully, in light of the growing demand for such changes, technology companies will finally take notice.


  1. Jamieson, Kathleen Hall. Cyberwar: How Russian Hackers and Trolls Helped Elect a President: What We Don’t, Can’t, and Do Know. Oxford University Press, 2018.
  2. Lomas, Natasha. “It’s Official: Brexit Campaign Broke the Law – with Social Media’s Help.” TechCrunch, 17 July 2018, techcrunch.com/2018/07/17/its-official-brexit-campaign-broke-the-law-with-social-medias-help/.
  3. Ma, Alexandra. “Planting Spies, Paying People to Post on Social Media, and Pretending the News Doesn’t Exist: This Is How China Tries to Distract People from Human Rights Abuses.” Business Insider, 9 June 2018, www.businessinsider.com/how-china-Propaganda-department-glosses-over-human-rights-2018-6.
  4. Solon, Olivia. “Facebook Struggling to End Hate Speech in Myanmar, Investigation Finds.” The Guardian, 16 Aug. 2018, www.theguardian.com/technology/2018/aug/15/facebook-myanmar-rohingya-hate-speech-investigation.
  5. “Reddit Apologises for Online Boston ‘Witch Hunt’.” BBC News, 23 Apr. 2013, www.bbc.com/news/technology-22263020.
  6. Elliott, Josh K. “India WhatsApp Killings: Why Mobs Are Lynching Outsiders over Fake Videos.” Global News, 16 July 2018, globalnews.ca/news/4333499/india-whatsapp-lynchings-child-kidnappers-fake-news/.
  7. Salim, Saima. “Infographic: Cyber Bullying on the Rise as the Use of Social Media Increases.” Digital Information World, 26 Nov. 2018, www.digitalinformationworld.com/2018/11/infographic-teen-cyberbullying-and-social-media-use-on-the-rise.html.
  8. “’Bullying’ Link to Child Suicide Rate, Charity Suggests.” BBC News, 13 June 2010, www.bbc.com/news/10302550.
  9. Candeub, Adam, et al. “Platform, or Publisher?” City Journal, 7 May 2018, www.city-journal.org/html/platform-or-publisher-15888.html.
  10. Brown, Thomas. “Social Media and Online Platforms as Publishers: Debate on 11 January 2018.” House of Lords, 8 Jan. 2018.
  11. Levin, Sam. “Is Facebook a Publisher? In Public It Says No, but in Court It Says Yes.” The Guardian, 3 July 2018, www.theguardian.com/technology/2018/jul/02/facebook-mark-zuckerberg-platform-publisher-lawsuit.

Free Speech and the Need for Content Moderation. (2021, Nov 11). Retrieved from https://samploon.com/free-speech-and-the-need-for-content-moderation-in-social-media/

