In a plethora of instances in popular culture, technology (often in the form of a robot) is portrayed as a sort of monster, or as possessing monstrous qualities. People have imagined futures of an apocalyptic robot takeover of humanity, with humans becoming slaves to robot overlords. An argument may be made that as society continues to pursue advancements in technology and artificial intelligence (AI), these imaginary futures come ever closer to becoming reality. Quite a number of people harbor a genuine fear of robots taking over the world, and therefore refuse to adapt to our technological world.
These “technophobes,” as they have come to be called, not only fear a robot apocalypse but, more realistically, also fear machines taking over their jobs, leaving them unemployed and financially unstable. However, do these fears really hold any merit? I will argue that the monster metaphor should not be applied to the fear people hold over technology, as that fear is irrational. This paper will address all of the above. First, it will lay a foundation on what exactly technophilia and technophobia are. From there, it will discuss how these conditions may be observed in the real world. Lastly, it will discuss why these fears do not hold much merit, why people should not be afraid of technological advancements, and why technophobia is simply an irrational fear.
Technophilia and Technophobia
According to Osiceanu (2015), the rapid growth in a variety of technological fields was a cause of “the appearance of the psychological ambivalence, because, modern technologies, generate, in the same measure, comfort and disasters” (p. 1138). “Comfort” and “disaster” can be associated with two terms regarding people’s acceptance of technology: “technophilia,” and “technophobia.” Osiceanu (2015) states that “The person attracted to technology, the ‘technophile’, takes the most or all technologies in a positive manner, enthusiastically adopting new forms of technology and view this as a way to improve his living conditions and combat social problems” (p. 1138).
A technophile, as the term implies, does not fear the technological advancements of society and enjoys using technology for its many benefits. On the other hand, Osiceanu (2015) also acknowledges that “with the continued proliferation of modern technologies in almost every aspect of our existence, the number of people who manifest fear of them is increasing” (p. 1138). Those in whom this fear manifests are regarded as “technophobes.” Technophobia is the “fear, dislike or discomfort by using modern technologies and complex technical devices” (Osiceanu, 2015, p. 1137). In the minds of technophobes, technology and artificial intelligence are monstrous entities that will only bring about negative consequences if society continues to advance them.
Famous figures in the technological and scientific fields, such as Elon Musk and Stephen Hawking, are far from technophobic, yet both have been quoted as harboring some degree of fear over the advancement of artificial intelligence. Musk has described advancing artificial intelligence as “summoning the demon,” and Hawking warned that “the development of full artificial intelligence could spell the end of the human race” (Allenby, 2018, p. 256). The fact that some figureheads in these fields harbor such feelings about the advancement of artificial intelligence is indeed troubling, and it does help the technophobes’ case that artificial intelligence is a monstrous entity.
Unfortunately for technophobes, as time progresses it will be increasingly difficult to function daily without accepting technology. Because society is much more technophilic than technophobic, it will continue to pursue advancements in technology, forcing technophobes to either adapt or be left behind. In fact, Khasawneh (2018) reports that U.S. spending on technology in recent years was approximately $1.5 trillion (p. 210). The astronomical amount of money that the United States has spent on technological advancements clearly indicates that society is much more technophilic than technophobic. Therefore, it would seem that technophobes will inevitably be required to adapt to technology.
Manifestations of Technophobia
Technophobia manifests itself in two main ways: a) the more dramatic, monstrous, and apocalyptic fear that robots and artificial intelligence will one day take over the world, and b) a more realistic, present-day fear that robots and artificial intelligence will take job positions from humans (Chen, 2014; McClure, 2017). Because of the portrayals of robots in popular culture, parts of society have come to associate robots with monsters. Consider the following example posed by Chen (2014):
Imagine a machine programmed with the seemingly harmless, and ethically neutral, goal of getting as many paper clips as possible. First it collects them. Then, realizing that it could get more clips if it were smarter, it tries to improve its own algorithm to maximize computing power and collecting abilities. Unrestrained, its power grows by leaps and bounds, until it will do anything to reach its goal: collect paper clips, yes, but also buy paper clips, steal paper clips, perhaps transform all of earth into a paper-clip factory. ‘Harmless’ goal, bad programming, end of the human race. (para. 3)
A common justification technophobes offer for their fears is the belief that a system’s artificial intelligence can and will skyrocket until it becomes unstoppable. Chen (2014) goes on to state that “once you reach a certain level of machine intelligence, and the machine becomes clever enough, it can start to apply its intelligence to itself and improve itself,” which is generally called “self-improving AI” (para. 25). If the AI’s intelligence drastically increases within a short period of time (hours, or a few days), this is called a “hard takeoff,” and humans would essentially be helpless to stop it, since we could not anticipate what the intelligence does next (Chen, 2014, para. 26). Technophobes cite this research to argue that we should not trust artificial intelligence, and that if we continue our technological advancements, the monstrosity that is artificial intelligence will cause our demise. While this notion seems plausible to many, it simply is not the case, as will be discussed later in this paper.
Another manifestation of technophobia, less extreme than Chen’s focus, is the notion discussed by McClure (2017) that technology and artificial intelligence will take the jobs of humans. McClure (2017) notes that after the Great Depression, “economists have been mostly optimistic about the positive relationship between employment and technological advances” (p. 140). However, this optimism has recently drawn criticism from people who fear that machines will take over their jobs. McClure (2017) also notes the studies of Oxford researchers, which state that:
In their study of more than 700 different U.S. occupations, within two decades, 47% of today’s jobs will be susceptible to automation by computerization and may become obsolete. These jobs span the blue- and white-collar divide, from truck drivers and warehouse workers to accountants, loan officers, health-care managers, and paralegals. (p. 140)
The finding that nearly half of all jobs may be computerized is an astounding figure. Although it is not a guarantee, it means there is a chance for perhaps millions of Americans to lose their jobs to computers. Looking at these figures, it does make sense for technophobes to fear for their jobs. However, the key word to consider is “may.” The aforementioned jobs may become computerized. This is by no means a guarantee, and I believe that, from an economic standpoint, companies would not replace their employees wholesale with computers, as the resulting mass unemployment would quickly crash the economy.
After conducting a study of his own, McClure (2017) reported three main findings. First, he found that “statistically, females, non-White minorities and those with the least amount of education are more likely to fear these developing technologies” (p. 152). McClure (2017) says that although leading figures in the technological fields have promised that only positive outcomes will come from new technology, some people are still fearful that they will be replaced in the workplace by technology, and this may be a result of having less exposure to educational technology (p. 152). A second finding by McClure (2017) states that “the population of technophobes exhibit higher than average anxiety-related mental health issues” (p. 152).
He then explains that since technophobes, as the name implies, exhibit disproportionate fears of technology and are not simply a “subgroup of generally fearful people,” we can better understand whether technophobia is “correlational or having the additive effect of sustaining anxiety-related health problems” (McClure, 2017, p. 152). Finally, McClure observed that “it is possible that these concerns spur on other fears, thus leading to a cyclical or feedback problem whereby various fears compound and create new obstacles for individuals” (p. 152). In other words, people may fundamentally fear unemployment and financial trouble, and subsequently fear machines as the cause of that unemployment. Society’s wave of technophilia makes technophobes’ worst nightmares a lived reality, so they cannot help but associate technology with monstrousness.
Khasawneh’s (2018) research is similar to McClure’s (2017) in regard to workplace manifestations, but on a company-wide rather than an individual scale. Technophobia can affect not only individual people but companies and organizations as well. According to Khasawneh (2018), “technophobia is a barrier to company’s development; it is a major factor in hindering employees’ adaption to new technologies (Rosen & Weil, 1995) since 20%–33% of Americans could be classified as technophobes (Celaya, 1996)” (p. 211).
Although this information is dated for such a fast-moving topic, it is still relevant. It is certainly possible that those who were part of those studies are still in the workforce today, and since technophobia, like any phobia, is difficult to overcome, these people may still have technophobic qualities. Since companies and organizations frequently implement new technologies in the workplace, if a certain proportion of their employees are technophobic, the results could be quite poor for the company (Khasawneh, 2018, p. 211). Because technophobes have already painted a picture in their minds that technology and artificial intelligence are nothing but monstrous, they will refuse to work with the very thing they hate.
Irrationality of Technophobia
Having discussed the background of technophobia and its manifestations in society, one might ask whether this technophobia and anxiety about artificial intelligence really has any merit. I feel that the work of Johnson and Verdicchio (2017) addresses this question effectively. According to Johnson and Verdicchio (2017), while there may be good reasons to be wary of artificial intelligence, technophobes fear it for the wrong reasons. They state, “much of the fear and trepidation is based on misunderstanding and confusion about what AI is and can ever be” (p. 2268). What technophobes fail to realize is that there are strict limits on what AI programs can achieve, based on the software and hardware they are built for.
AI programs are simply lines of code. For AI programs to have any value, they need to be implemented into computer systems that perform operations beneficial to humans (Johnson & Verdicchio, 2017, p. 2268). AI programs must be distinguished from AI sociotechnical systems, which are AI programs combined with the context in which their code operates (Johnson & Verdicchio, 2017, p. 2268). An example the authors use is the case of a trading program and a stock exchange. A trading program on its own is an artifact, but combined with a stock exchange, the program operates based on human behavior (Johnson & Verdicchio, 2017, p. 2268).
So, how does this relate to technophobia? Essentially, technophobes fail to distinguish AI programs from AI sociotechnical systems, and it is crucial that this difference be recognized in order to dispel their confusion. Technophobes believe that artificial intelligence could improve itself indefinitely, but in reality, AI programs are limited in what they can do by their programming. According to Johnson and Verdicchio (2017), “AI anxiety generally results from an exclusive focus on AI programs leaving out of the picture the human beings and human behavior that create, deploy, maintain, and assign meaning to AI program operations” (p. 2268).
Technophobes have what is referred to as “sociotechnical blindness,” which causes them to fail to recognize that “AI is a system and always and only operates in combination with people and social institutions” (Johnson & Verdicchio, 2017, p. 2268). Simply put, AI programs do not have the ability to go rogue, because they are pointless without being implemented in some way to benefit humans.
Another misguided basis for technophobia is a misunderstanding of what autonomy means in the computational world. Autonomy in the technological sense is quite different from what the term means in the context of humanity. Johnson and Verdicchio (2017) state that “for humans, autonomy refers to the characteristic of having the capacity to make decisions, to choose, and to act” (p. 2268). They go on to explain that “only beings with autonomy can be expected to conform their behavior to rules and laws.
It is this notion of autonomy that comes into play in the fear and concern about ‘autonomous’ AI” (p. 2268). Nonexperts generally conflate autonomy in the human sense with autonomy in the technological sense, which is completely different. In the computational world, autonomy refers to the creation of data based on the parameters inside the machine (Johnson & Verdicchio, 2017, p. 2269). The authors use the example of an AI program built to play chess to illustrate their point. Essentially, an autonomous AI program, such as the chess-playing AI, is autonomous in the sense that its creator cannot predict the outcomes of the program (Johnson & Verdicchio, 2017, pp. 2268-2269).
However, this is not to say that this unpredictability will end with the AI becoming a threat to humanity. Instead, it suggests that there are only certain limits an AI can reach, based on the setting in which it is implemented. An AI built to play chess will only know how to play chess and improve its ability to do so. The AI cannot do anything else, because its code, software, and hardware allow it to perform only a specific set of actions. It is because of these limits that technophobes should not use the aforementioned reasons to justify their technophobia and believe technology and artificial intelligence to be monstrous. Technology and artificial intelligence are not and cannot be monstrous entities, as they are built only to perform a specific set of tasks. Instead, the monstrous label should be placed elsewhere, and Johnson and Verdicchio (2017) pinpoint exactly where it should go, as they state:
The target of our anxiety should be the people who are investing in AI and making decisions about the design and the embedding of AI software and hardware in human institutions and practices. The target should be those who decide when AI programs and systems have been adequately tested, those who have a responsibility to ensure that AI does not get out of control. (p. 2270)
Think of AI as an animal in a zoo habitat. While the animal is in its habitat, it is free to do whatever it wants, but it must remain within the confines of that habitat. The same goes for artificial intelligence: it is free to do what it wants, but only within the confines of its software.
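The constraint described above can be illustrated with a short, purely hypothetical Python sketch (the names `LEGAL_ACTIONS` and `choose_action` are my own invention, not drawn from any cited source). The point is that a program's entire behavioral repertoire is the set of actions its author defined; it may "decide" among those actions unpredictably, but it cannot invent a new one outside its code.

```python
import random

# Toy illustration (not a real chess engine): the program's "habitat"
# is the fixed set of actions its author wrote into it.
LEGAL_ACTIONS = {"move_pawn", "move_knight", "resign"}

def choose_action(requested=None):
    """Return an action; anything outside LEGAL_ACTIONS is impossible."""
    if requested is not None:
        if requested not in LEGAL_ACTIONS:
            # The program cannot step outside its defined action space.
            raise ValueError(f"'{requested}' is outside the program's action space")
        return requested
    # "Autonomous" only in the sense that the specific choice is
    # unpredictable to the programmer, not unbounded.
    return random.choice(sorted(LEGAL_ACTIONS))

print(choose_action("move_pawn"))      # a defined action succeeds
try:
    choose_action("take_over_world")   # an undefined action simply fails
except ValueError as e:
    print(e)
```

The design mirrors the zoo analogy: unpredictability lives inside the boundary (which legal action is picked), while the boundary itself is fixed by the code.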
As has been observed, technophobia can indeed cause many problems in society. It causes widespread anxiety in the public, who are left in the dark about advancements being made in technology until the new technology is released. People worry about the security of their jobs, afraid that AI will replace them. Even worse, people fear AI going rogue and causing an uprising in which humanity becomes either obsolete or eradicated. Thus, the monster metaphor has been harshly connected to technology and artificial intelligence.
However, the “monsters” in the case of AI are not the intelligence itself, but rather the people who create the intelligence to be unpredictable and dangerous. Technology and artificial intelligence, both in their current state and in the future, are built only to help us. We are not creating monsters; we are creating tools to advance society and change the world. Therefore, this monstrous portrayal of technology and artificial intelligence should be erased, as AI can do only what it is built to do.
Although I had a sufficient number of sources at my disposal to conduct my research on this topic, there were also limitations, such as time and the quantity of sources. To guide further research, I suggest that it be continued where more sources are readily available, such as at a large university or research center. More specialized sources and databases would certainly prove fruitful. Additionally, more time spent conducting and analyzing research may yield more substantial results and information than my own, given my time limits.