Baldwin (2014) argues that cultural studies differs from mainstream rhetorical analysis (which focuses on the themes of a text, that is, the construction of a message: the what and the how) in that cultural studies looks at social and historical structures (the why). Culture, ‘as the site where meaning is generated and experienced, becomes a determining, productive field through which social realities are constructed, experienced, and interpreted’ (Turner, 1990). ‘Site’ here hints at a public sphere of struggle. Popular cultural products (art, internet memes, etc.) are not merely cultural representations or artifacts; they are vehicles of social meaning.
Reception analysis holds that context influences the way audiences view, or read, media. The theory forms a distinct strand of audience studies that examines in depth the actual process by which media discourse is taken up through discursive practices and audience culture. Emerging in the 1970s with the work of Morley (1970), reception analysis seeks to understand meaning and the relationship between mass-media content and audiences. In this view the audience is an active interpreter, and texts and their recipients are complementary elements of a single object of inquiry, one that addresses both the discursive and the social aspects of communication.
Reception studies assumes that there is no ‘effect’ without ‘meaning’: the audience reinterprets the message conveyed by the memes, the meanings audiences construct generate a variety of effects, and these effects are the final stage of the theory. According to McQuail (1997), reception analysis emphasizes media use as a reflection of a socio-cultural context and as a process of giving meaning to experience and cultural production; culture and media experience in the public environment thus influence how the public accepts media messages.
Internet culture refers to the shared values and perspectives created and maintained in various online settings, perspectives that set the norms and ideals for how to act and interact with other individuals in the digital sphere. Researching Internet culture therefore means studying these norms, ideals, values and perspectives: what people do online, what they think about what they do, and what underlies their online actions and interactions.
As the Internet has diversified, so have its users. What was once a homogeneous group of mainly technically minded users is now a multiverse of almost infinitely many different, partly overlapping online subcultures. So even if it was once possible to talk about one or a few Internet cultures with a shared linguistic style and similar norms and values, this has become increasingly impossible (Sveningsson, 2007). There is a multitude of different cultures out there, and their only common denominator is that they happen to be found online.
Literature and Research Review
For a long time, the variety of communication opportunities offered by the Internet was discussed above all as a way to give citizens more opportunities to participate in public debates and the political process. Meanwhile, political participation and interest have decreased significantly, and skeptical questions are being asked. In many cases the prevailing view is that user-contributed areas of the Internet – the comment columns of media outlets, forums, and social media such as Facebook or YouTube – are dominated by manipulative, offensive or even hateful content, and that rational discourse on political issues is hardly possible there (Emmer, 2019).
Yucel (2017), like many other writers on the subject, highlighted hate speech as follows: “In social networks and the comment columns of many online media, discussions are increasingly disturbed by posts that offend people because of their origin, skin color, religion, sexuality or gender.” This framing is problematic because it limits hate speech to comments and other textual content on the internet, leaving out two important considerations: the originators of such content and graphic (visual) hate speech material.
More often than not, traditional media have dealt with hate speech through a model of ethical consideration: one is encouraged to weigh the effects of one’s communication and decide whether or not it will hurt others. This approach leaves the responsibility for dealing with such content with the victim or target, who must seek legal redress.
Because discussions of hate speech often depend on the academic discipline within which it is examined – legal, communication, sociological, and so on – the counter-argument gets situated in the context of freedom of speech or other modes of expression, and in the multiplicity of laws, treaties and conventions and their application in particular sovereign jurisdictions.
Related Research and Projects
The pioneering NOHATE project by Professors Emmer and Trebbe of Freie Universität Berlin examines digital hate, especially hate speech against refugees and migrants. Its trend analysis alerts the moderator whenever such content appears and shows the moderator which strategies were deployed successfully in similar cases; it thus forms an important basis for tracking trends in digital media. The same approach could be deployed for political content, although it too appears not to take visual content into consideration.
Brown (2017), discussing “Hate Speech, the Myth of Hate”, premises the discussion of state regulation of speech on whether or not the speech in question is insulting, degrading, defaming, negatively stereotyping, or inciting negative affect, discrimination or violence against people in virtue of, for example, their race, ethnicity, nationality, religion, sexual orientation, disability or gender identity. He further argues that this distinction makes a positive difference because such speech implicates issues of harm, dignity, security, healthy cultural dialogue, democracy, and legitimacy. This is the very definition of hate speech, and the context within which it becomes a vital phenomenon that must be thoroughly researched and highlighted.
Having established this premise, Brown strongly makes the case that the term “hate speech” is mostly defined in legal terms, and that as it gradually shifts into everyday use its meaning will become empty: “There is always a chance, …hate speech will be stripped of its original, legal-technocratic meaning to such an extent that it becomes merely an empty vessel; a generic term of disapproval, and it will remain useful so long as it can be used to do more than merely signal disapproval.”
Smedt et al. (2018) begin by stating that hate crimes have been on the rise and that online social media are believed to act as a propellant for polarization and radicalization. This position brings to the fore, once again, the legal context as the parameter for defining hate speech and, in their case, for criminalizing it as well. They argue that online social media can function as “echo chambers” (Colleoni, Rozza & Arvidsson, 2014), lending themselves to the expression of more radical views than face-to-face interaction, so that hate speech is perceived to infiltrate various types of (mainly) political discourse online. This aptly captures the very nature of this research proposal in balancing the triangle of hate speech.
Recognizing the gravity of the pervasiveness of hate speech and the challenges it presents, they aimed their study at identifying common features of hate speech across domains and at advancing automatic detection, examining eight sets of social media texts covering jihadism, right- and left-wing extremism, racism, and sexism by means of text classification, keyword extraction, collocation extraction, stylometry, and sentiment analysis.
Primary among their findings was the fact that not all online hate speech involves hateful language (as text). Some instigators simply share news updates (mainly crime reports), but only those that reinforce their worldviews, and they do so persistently. The sentiment analysis also showed that many users who post hate speech are angry, for one reason or another, which manifested as a negative intensity in their language use.
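The kind of lexicon-based sentiment analysis Smedt et al. describe can be sketched minimally as follows. The lexicon entries, scores, and the "negative intensity" threshold below are illustrative assumptions, not values from their study; production systems score thousands of lexicon entries.

```python
# Illustrative polarity lexicon; the words and scores here are assumptions
# for the sketch, not the lexicon used by Smedt et al.
LEXICON = {
    "hate": -0.9, "angry": -0.7, "disgusting": -0.8,
    "good": 0.6, "welcome": 0.5, "great": 0.8,
}

def polarity(text):
    """Average polarity score of the lexicon words found in `text`."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    scores = [LEXICON[w] for w in words if w in LEXICON]
    return sum(scores) / len(scores) if scores else 0.0

def negative_intensity(posts, threshold=-0.5):
    """Share of posts whose polarity falls below the (assumed) threshold."""
    flagged = [p for p in posts if polarity(p) < threshold]
    return len(flagged) / len(posts) if posts else 0.0
```

A user whose posts repeatedly score below the threshold would surface as exhibiting the "negative intensity" of language the study reports.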
Research Gaps Identified
Most researchers of hate speech content across media platforms have limited their studies to identifying and highlighting textual content. This is evidenced in one of the most recent works published on the topic, in which MacAvaney et al. (2019) study the challenges and solutions in hate speech detection. They deployed a multi-view support vector machine (SVM) system that allowed them to track hate speech content by considering keywords at multiple levels; even then, it was limited to textual content.
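The "multi-view" idea is that separate classifiers are trained on different feature views of the same text (for instance word-level and character-level n-grams) and their scores are combined. The toy sketch below illustrates that combination with hand-set linear weights rather than a trained SVM; all keywords, weights, and thresholds are invented for illustration.

```python
from collections import Counter

def word_view(text):
    """Word-level feature view: lowercase unigram counts."""
    return Counter(text.lower().split())

def char_view(text, n=2):
    """Character-level feature view: n-gram counts, which can catch
    obfuscated spellings that a word view misses."""
    t = text.lower()
    return Counter(t[i:i + n] for i in range(len(t) - n + 1))

def score(features, weights):
    """Linear score: dot product of feature counts and weights."""
    return sum(count * weights.get(f, 0.0) for f, count in features.items())

# Hand-set illustrative weights; a real SVM learns these from labelled data.
WORD_WEIGHTS = {"vermin": 2.0, "invaders": 1.5, "welcome": -1.0}
CHAR_WEIGHTS = {"h8": 2.0}  # an obfuscation only the character view sees

def classify(text, threshold=1.0):
    """Average the two views' scores; flag the text if above threshold."""
    combined = 0.5 * score(word_view(text), WORD_WEIGHTS) \
             + 0.5 * score(char_view(text), CHAR_WEIGHTS)
    return combined >= threshold
```

The point of combining views is robustness: the character view still fires on “h8” even though no word-level keyword matches, and vice versa. The limitation noted above remains, however: every view here is textual.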
With such a concentration on textual content as the basis for analysing hate speech, visual content, and the reactions and responses it elicits from the public, are apparently neglected. This is critical and yet often overlooked because of the format of such content: visual content is not easily amenable to automated detection, and reactions and responses are simply ignored.
Some such content may be detected indirectly, when the caption (text) accompanying a visual gets a hit from one of the detection systems. Even then, the likely outcome is that the text is captured, analysed and classified by the artificial intelligence while the visual itself is neglected.
At best, some studies, such as Sadat’s (2019) “Hate Speech in Pixels” presentation, have attempted to use optical character recognition (OCR) algorithms to detect hate speech in internet memes. This again was limited to the text within the memes, so that the image itself, which premises the meme, is ignored or becomes only a secondary object of analysis.
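An OCR-based meme pipeline of the kind just described can be sketched as follows. The `extract_text` function stands in for a real OCR call (such as `pytesseract.image_to_string` over an opened image) and is stubbed with a hard-coded string so the sketch stays self-contained; the keyword list and function names are illustrative, not Sadat's.

```python
# Illustrative keyword list, not an authoritative hate-speech lexicon.
HATE_KEYWORDS = {"vermin", "invaders"}

def extract_text(image_path):
    """Stub for an OCR call, e.g. pytesseract.image_to_string(Image.open(path)).
    The returned string is hard-coded here purely for illustration."""
    return "they are invaders"

def flag_meme(image_path):
    """Flag a meme if its OCR-extracted caption contains a hate keyword.
    Note the limitation discussed above: only the overlaid text is examined,
    and the image itself is never analysed."""
    words = {w.strip(".,!?").lower() for w in extract_text(image_path).split()}
    return bool(words & HATE_KEYWORDS)
```

The sketch makes the research gap concrete: a meme whose hateful meaning resides in the image, or in the interplay of image and text, passes through such a pipeline undetected.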