The advent of social media has had a profound effect on how we interact as a society. People all over the world are able to share their ideas and make connections. They also have the ability to spread harmful rhetoric and hurt other people, whether by dispensing personal information (colloquially known as doxxing) or by providing a platform for illegal activity. The process of reining in social media through legislation has been slowly chugging along since the beginning of the new millennium, but many questions remain unanswered. One of these is whether the companies that own these sites should be held responsible for how users utilize their services. Massive sites such as Facebook, YouTube, and Twitter have had extremely public trouble controlling how their websites are used. Some lost prominent sponsors due to a lack of faith in their ability to get rid of fake news, offensive or inappropriate content, and pockets of extremist users. Some would employ the logic that if a hardware store sold an axe that ended up being used to chop up innocent people, the hardware store is not liable for murder; a sort of “guns don’t kill people, people kill people” argument. Others feel that the profits these corporations gain from such uses of their sites should qualify them for punishment. Should all of the blame be put on the users?
According to the American Library Association, hate speech is defined as “any form of expression through which speakers intend to vilify, humiliate, or incite hatred against a group or a class of persons.” It is not strictly an actionable offense within America unless it is coupled with directed and/or persistent action, though the same is not the case in countries like Iceland, which prosecutes hate speech with fines and even jail time (Freedom). Hate speech has been a hot-button issue in recent years: governments debate whether it is an actionable offense, and social media platforms have made attempts to prevent it from appearing, with varying degrees of seriousness.
There is no firm international consensus on how cases of online hate speech, radicalization, and general criminal activity should be handled. Countries such as the United States have legal protections for social media sites that keep them from being prosecuted, while countries such as Germany and more recently the United Kingdom have been introducing provisions that punish sites for allowing hate speech to remain circulating. This disparity allows for the different approaches to be examined and their ramifications compared. That said, many of these initiatives are in their infancy so the long-term effects are still unknown. The aim of this paper is to explore these differences and consider if social media companies should be held accountable for what people post.
Many feel that restricting speech online should fall to the individual user, whether that means calling out people who are offensive or saying what they please without consequence. One such voice is Bruce Thornton, who wrote an article regarding censorship online. He asserts that the suppression of free speech on social media, through measures meant to target hate speech, limits the individual freedom of the site’s users. He says that the measures are too often used against historical oppressors to benefit minority groups and do not attack the problem fairly. The definition of “hate speech” is also left too nebulous to be litigated effectively. It is his opinion that social media should be left unfiltered so that it can accurately reflect the thoughts of the population (Thornton). This view is echoed by many on the conservative side of social politics who fear that pressure from the public leads sites to purge their content more aggressively than that of more “acceptable” liberal views. In their mind, there is no problem to be fixed, and social media companies are guilty of being too restrictive rather than negligent. There is also evidence that action does not prevent the spread of unpleasant ideals. An article in The New York Times explained that social media sites have been cracking down on hate speech and harassment by banning figures like Alex Jones, the inflammatory host of InfoWars, but this has done little to dim support for their ideas. Groups still exist that echo his sentiments regarding false flag attacks and government conspiracies, so there is doubt about whether government intervention would really do anything to stop it (Roose). Some argue that because social media companies have the ability to moderate content, they should either be held to journalistic standards or barred from imposing such restrictions at all.
“Instead of trusted, accountable publishers determining what constitutes real news, web and social media companies that have no regulatory oversight now make those decisions for us. These actions by big tech companies affect citizens’ fundamental rights and civil dialogue well beyond the confines of the online platforms, with little to no option for citizens to protect their right to free speech,” says Johannes Munter in an article with the News Media Alliance. While the government could supply that regulatory oversight, the impact on individual rights would remain the same.
In a free market society, it could also be considered the job of social media sites’ sponsors, rather than the government, to influence the direction a company takes. In recent weeks we have seen attempts by sites like Tumblr and YouTube to regain credibility and funding. After its app was pulled from the App Store, Tumblr introduced new guidelines that banned inappropriate or “nsfw” (not safe for work) content. Even before the new rules were implemented, Apple returned Tumblr to the App Store. Thus, interference from a sponsor effectively changed a site without the need for government assistance (Liao). YouTube also introduced new methods of “flagging,” or reporting, content following several high-profile incidents: its most popular personality was accused of being a neo-Nazi, a rampant trend of inappropriate content aimed at children was discovered, and ads ran on videos of disasters and shootings, prompting several sponsors to pull their advertisements and revenue from the site (vlogbrothers). The system has been somewhat effective in ensuring that ads do not appear on videos antithetical to a brand’s ideals, and there has been a greater effort to crack down on inappropriate or hurtful content. Some believe that introducing government regulations would also raise the barrier to entry into the industry to an unfair level. USA Today writer Amelia Irvine claims, “Silicon Valley investors would be more reluctant to invest in a company that could only innovate within the barriers dictated by government policy. Any startup in the field would need to hire expensive lawyers to navigate complex administrative rules and regulations.” She goes on to assert that it is the duty of the people, rather than the government, to police content, since government policing could infringe on individual liberties.
Facebook also went out of its way to pull ads from any account that showed a pattern of posting false information following the revelation that a multitude of Russian accounts were using the site to spread misinformation about the 2016 US Presidential Election and the ensuing public backlash (Leonnig et al.). Once again this shows how the free market operates in such a way that it punishes unacceptable behavior before the government has to.
Another issue with such oversight is that in countries like the US, the government has no business calling on individual companies to act as its extensions. During the congressional hearings regarding “online political mischief by foreign governments,” research fellow Diane Katz identified two key problems with the approach:
1. “Facebook, Twitter, and Google are private enterprises, not agents of government for Congress to kick around. Responsibility for protecting the nation from foreign threats belongs first with national security and intelligence agencies, and those elected to oversee them.”
2. “Whether Russia’s high jinks affected the 2016 election is a matter of dispute. And while some members of Congress are concerned that the online fakery has eroded public trust, they have a responsibility to determine whether there was actual wrongdoing on the part of tech companies rather than simply engage in scapegoating.”
Merely pointing a finger at enabling platforms rather than addressing the root of the problem (that is, the illegal or harmful activities carried out on such sites) makes government oversight look more like a distraction than a helpful advancement in policy. Should companies have to pay the price just because congresspeople can’t be bothered to look into potential international issues?
However, there are flaws in this viewpoint. The actions taken by Tumblr and YouTube have both met with considerable backlash from users who feel they are all being punished for the breaches of others, and as The New York Times article explains, hateful ideologies are still out there. Though the Internet provides a presumed right to free speech, that does not make harassing others online acceptable. As for the credibility of Thornton, the only opinion article cited: the author is a college professor and a research fellow at established universities, which lends him credibility. However, he leans on the modern “liberal snowflake” sentiment more than is ideal. His political sway is obvious in the text, but it does not negate the point of his argument, which is that conservative organizations are often more likely to be policed on social media than liberal ones. The article also notably lacks statistics to back up its claims. The point is that while there is validity to claims that government intervention is unnecessary, they need to be taken with a grain of salt and with the full situation in consideration.
The statements from Amelia Irvine are interesting because she views the issue through a solely American lens. Assuming a government is more committed to free market ideals than to ensuring that social media stops being used for nefarious purposes ignores countries with reversed priorities. The idea that businesses are more worthy of protection than public safety is potentially dangerous. That said, letting a government get involved with one aspect of private life could potentially lead to further chipping away at individual liberties. That is a prospect which can universally be deemed unpleasant.
Conversely, many hold the opinion that there should be legal penalties against sites that host objectionable content. Government officials in the United Kingdom have promised legislation that fines companies if they do not remove posts that include hate speech or terroristic threats (BBC). A similar measure was passed into law in Germany, under which companies like Facebook could face fines of up to fifty million euros for neglecting to delete hate speech within a twenty-four hour period (Toor). Social media can be used to harass and cause harm to individuals by way of threats or actions coordinated through the use of the platform. Proponents of these policies feel that the corporations who run such sites should be held liable for the content they host and how the use of it impacts others.
Because social media is a fairly new phenomenon, there is a gap in legal precedent for how cases are dealt with. The laws regarding liability and social media are fairly nebulous, but some things have been established. Copyrighted material can be posted as long as there is a way for the owner of the copyright to take down what was posted. A law known as Section 230 shields a company from liability if something offensive is posted, but not for things involving illegal activity. Users have far fewer immunities than companies (Understanding). Those supporting government limits on social media companies cite a recent ruling in an American legal case that could provide sound reasoning for government intervention. The case in question involved several families whose loved ones were killed in a terrorist attack that was planned on a social media site. They successfully sued the site by showing that the terrorists’ use of the site allowed the company to profit through ad sales and the like. Aiding and abetting terrorism is illegal, and thus the plaintiffs were owed compensation (Brown). Companies profit from how people use their sites, so they are able to earn money off of criminal behavior and hate speech. Facebook reportedly made $100,000 in ad sales from a Russian “troll farm” that produced content meant to influence the US election (Leonnig et al.). Proponents claim that without intervention these sites will never enforce rules about certain types of speech because it would cost them money.
The argument from the opposing side that the Internet should be a place of unhindered free speech is countered with the assertion that the Internet should be a place where every law-abiding citizen is safe. Hate speech violates this ideal by making people feel threatened for no reason other than who they are. There is no international system readily applicable to what happens on the Internet, so existing governments have been petitioned to implement rules that keep people safe.
Furthermore, they argue that penalizing companies for posts, as well as the people who make them, adds greater incentive to change. In order to target a systemic problem, a solution must be devised that fixes the overall system. Without proper motivation, the problems that exist in social media will only continue to worsen. The previously mentioned German policy influenced Facebook to hire an additional three thousand employees to find and flag objectionable posts (Toor). Prior to that, efforts to address problems like fake news had been minimal. Those in favor of these government steps feel that they are the only way to prevent the misuse of a popular method of communication. Facebook has been used in several questionable and horrifying ways, from providing a platform for the organization of ethnic cleansing in Myanmar to the sale of a child bride in South Sudan (Leetaru). In an opinion article in Forbes, contributor Kalev Leetaru says that Facebook’s reticence about how many Burmese speakers it has moderating content, and its delay in flagging German content until legislation was passed, speaks to the need for governments to intervene. The difference between the culpability of social media companies and that of other facilitators of communication, like phone companies, is that most social sites have community guidelines. By attempting to moderate content in the first place, they set an expectation that their sites will be safe. Leetaru says that “until governments step forward and force the companies to take some level of responsibility, the companies and their leaders will likely continue to stand their ground and ‘delay, deny and deflect.’”
The validity of such assertions is varied. The American court case detailed in “Should Social Networks Be Held Liable for Terrorism” appears to show a legally sound argument for why social media companies have a degree of culpability in how people use their sites, because they stand to profit from illegal activities. Germany’s new policy is still at a stage where its far-reaching results remain to be seen. It is apparent that such moves are an effective way to pressure companies like Google and Facebook, because they have large bases of operation in Germany and moving away would also cost them revenue. Whether this legislation actually deters hate speech, rather than merely censoring it, is something to look out for in the future. Sources on current social media policy are generally credible, either listing no author and operating in a strictly informational capacity or presenting a slightly liberal skew on an ongoing story. This perspective is mostly held by those on the American liberal side and the European conservative side. (For context, while each group largely disagrees on social issues, both want more government involvement in everyday life. Americans of this belief are called liberals because the country was started on a foundation of less government involvement, so preserving that status quo is considered conservative.) Leetaru’s arguments in the Forbes article make a convincing point. He backs up his assertions with references to past examples where companies have done nothing without government intervention. If the state of attempts at moderation is really as bleak as he claims, the necessity for legislation is apparent. Such options may seem like a last-ditch solution, but when all that can be done within the current legal boundaries has been done and nothing has changed for the better, the boundaries themselves must change.
Independent investigation was conducted by posting the questions “How responsible are social media companies for their users’ behaviors?” and “Do you think governments getting involved is a good idea or a slippery slope?” with explanations of the context on various social media sites. This yielded a few points of interest. First and foremost, people are less likely to respond to random political questions than they are to vacation photos and memes. There were a grand total of two responses, one from a classmate via Instagram and another from an Internet acquaintance on Tumblr. The classmate said that they believe the guidelines on social media sites should be enough to rein people in, and that allowing the sites to censor people is preferable to allowing the government to do it. That said, they also think that community guidelines on such sites should be stricter, because looser rules make bad behavior easier. The Internet acquaintance talked about how in Canada there are laws restricting free speech to “that which does not incite violence and harm,” and how they believe reasonable censorship that advances social good is a practical tool for a government to utilize. Ideally, there would have been more responses to compare and contrast, but this is what was sent. These responses depict how the issue is anything but one-sided. It’s more like a dodecahedron. Different aspects of the debate affect different people. Americans would rather avoid censorship due to the First Amendment to the US Constitution. Those in Canada value the public good over certain personal freedoms. Young people tend to be more attuned to what is going on with social media. Broader conclusions about public consensus cannot be drawn from this sample, but it does speak to the inherent diversity of opinions.
In conclusion, the Internet is a nebulous thing. Most governments could never have anticipated something with such a far-reaching effect on global society when they first set out their basic plans for governing. It is comparable to the American Old West: present but unpoliced, and left to its own devices. Nevertheless, it, and by extension social media, invades most facets of life in developed countries. So what part should governments play in this new frontier of communication? The central questions around the issue have plagued society for years: How involved should the government be in the lives of its people? Will a free economy adequately deal with problems in businesses? And does free speech have limits? It is the view of the author that the approach taken with instances of slander and libel should be applied to posts on social media, along with laws that effectively prevent social media companies from profiting from criminal activity that takes place on their platforms. Ideally, there could be a UN convention that puts forth basic international guidelines for how governments and the Internet should interact, acting not so much as an edict but as a general benchmark that countries can take or leave. There are few specifics that I would feel confident expounding upon, because this area of law is uncharted territory. We have never had anything like social media in the whole span of history. There are fewer than fifteen years’ worth of data and literature available on the subject, and the majority of that is now outdated. From what I have encountered while reviewing interviews and articles, those who are making the decisions around this have no idea what to do either. It’s a paradox wrapped up in politicization and garnished with a bit of outrage no matter what decision is made. How can I reduce it to bite-sized pieces when the issue at hand is massive and convoluted? Implementing policy is difficult enough as it is.
Implementing Internet policy appears to me to be a looming Herculean task which no one wants to confront head on. But the future is already here, my friends.
November 2 – Read “Speaking freely: lose free speech, and lose our political freedom too” and “To the fringe, being barred from Facebook isn’t the end” through the online database Gale Pages. The two offer opposing views on how to deal with the subject.
November 7 – Read “Understanding the legal issues for social networking sites and their users” on FindLaw.com for legal information regarding social media.
November 11 – Read “The key moments from Mark Zuckerberg’s testimony to Congress” on The Guardian and “Should social networks be held liable for terrorism” on Slate. They covered Zuckerberg’s opinion on the fate of Facebook and a legal theory in America that involves suing sites whose terrorist users used them to plan an attack.
- Brown, Nina I. “Should Social Networks Be Held Liable for Terrorism.” Slate, 2017, https://slate.com/technology/2017/06/a-new-legal-theory-for-holding-social-networks-liable-for-terrorism.html.
- “Freedom of the press; Iceland.” Freedom House, 2017, https://freedomhouse.org/report/freedom-press/2012/iceland
- “Hate speech and hate crime.” American Library Association, 2017, http://www.ala.org/advocacy/intfreedom/hate.
- Irvine, Amelia. “Don’t regulate social media companies – even if they let Holocaust deniers speak.” USA Today, July 19, 2018, https://www.usatoday.com/story/opinion/2018/07/19/dont-regulate-social-media-despite-bias-facebook-twitter-youtube-column/796471002/.
- Leetaru, Kalev. “Should Social Media Be Held Responsible for the Deaths and Atrocities It Facilitates?” Forbes, November 23, 2018, https://www.forbes.com/sites/kalevleetaru/2018/11/23/should-social-media-be-held-responsible-for-the-atrocities-and-deaths-it-facilitates/#7dc7982a38ec
- Leonnig et al., “Russian firm tied to pro-Kremlin propaganda advertised on Facebook during election.” The Washington Post, September 6, 2017, https://www.washingtonpost.com/politics/facebook-says-it-sold-political-ads-to-russian-company-during-2016-election/2017/09/06/32f01fd2-931e-11e7-89fa-bb822a46da5b_story.html?noredirect=on&utm_term=.9e3e0dabae21
- Liao, Shannon. “Tumblr Will Ban All Adult Content on December 17th.” The Verge, 2018, https://www.theverge.com/2018/12/3/18123752/tumblr-adult-content-porn-ban-date-explicit-changes-why-safe-mode
- “New internet laws pledged as social firms snub talks.” BBC, 2018, https://www.bbc.com/news/uk-politics-44188805
- Roose, Kevin. “To the fringe, being barred from Facebook isn’t the end.” The New York Times, 2018, http://go.galegroup.com/ps/i.do?p=AONE&u=oran33504&v=2.1&it=r&id=GALE%7CA552821239&inPS=true&linkSource=interlink&sid=AONE.
- “The key moments from Mark Zuckerberg’s testimony to Congress.” The Guardian, April 11, 2018, https://www.theguardian.com/technology/2018/apr/11/mark-zuckerbergs-testimony-to-congress-the-key-moments
- Thornton, Bruce S. “Speaking freely: lose free speech, and lose our political freedom too.” Gale Pages, 2018, http://go.galegroup.com/ps/retrieve.do?tabID=T002&resultListType=RESULT_LIST&searchResultsType=SingleTab&searchType=BasicSearchForm&currentPosition=19&docId=GALE%7CA547757999&docType=Article&sort=Relevance&contentSegment=&prodId=AONE&contentSet=GALE%7CA547757999&searchId=R2&userGroupName=oran33504&inPS=true.
- Toor, Amar. “Germany passes controversial law to fine Facebook over hate speech.” The Verge, 2017, https://www.theverge.com/2017/6/30/15898386/germany-facebook-hate-speech-law-passed.
- “Understanding the legal issues for social networking sites and their users.” FindLaw, 2017, https://technology.findlaw.com/modern-law-practice/understanding-the-legal-issues-for-social-networking-sites-and.html.