HIRE WRITER

Cybersecurity and Artificial Intelligence 

This is FREE sample
This text is free, available online and used for guidance and inspiration. Need a 100% unique paper? Order a custom essay.
  • Any subject
  • Within the deadline
  • Without paying in advance
Get custom essay

Artificial intelligence is technology that performs tasks independently without human interaction. The concept of artificial intelligence has been around since the fifties, and developers have been working to perfect it since then. There are many uses of this technology, including cybersecurity.

Cybersecurity is the protection against criminals accessing electronic data. According to Ricardo Calderon, there has been an increase in cyberthreats in the last decade. Even with implemented cybersecurity controls, “Cybercriminals have learned how to evade the most sophisticated tools…” (Calderon). Electronics have not been around for long, and developers are still experimenting with its features. It is hard to predict what will happen after running different devices. There is no way to monitor or analyze the performance of these agents (Charisi, et al.).

Governments and companies can use biased algorithms in order to maliciously gain success. Biased algorithms can mask discriminatory business practices (“Hackers, AI and…”). Corporations could announce misleading data leading to lower trust in competitors’ products. Political organizations have the ability to exclude people based on race, religion, or gender from target audience using these algorithms. With the use of biased algorithms in technology, comes the risk of segregation and discrimination. Most experts predict that this form of technology will lead to destabilized groups using lies, weaponized information, and propaganda (Anderson, et al.).

Networked artificial intelligence improves human effectiveness, but it also threatens developments that have already been made. Hackers cannot be stopped, so improving cybersecurity will also result in the hackers’ advancements. There also arises the risk of erroneous data, causing malfunctions in artificial intelligence technology. Although artificial intelligence has proved to decrease the risk of hackings, it has begun to bring doubt on itself through the increase of technological human error, use of biased algorithms, and chance of unemployment.

Even though security is advancing, cybercriminals are working harder to get through the barriers. Voice recognition technology has become a rising industry, and a new method for criminals to manipulate users. Criminals can expand on these new developments by disguising themselves as the voices of the computers, such that “…whenever a communication advancement like voice recognition starts to go mainstream, criminals looking to take advantage of it aren’t far behind” (Markoff). People are likely to get robbed of their personal information with the new methods hackers have developed.

They can mimic the voices of Amazon’s Alexa, or Apple’s Siri and spy on what is being said around the devices. It is difficult to tell whether a person is being spied on, which poses a threat to customers’ security. Machines may perform more complex cybersecurity operations in the future, but the systems’ ability to “arrive at their own insights” opens windows for error (“Machines v. Hackers…”). No matter how advanced cybersecurity gets, criminals will attempt to hack into these systems, and over time, will succeed in doing so.

Developers have been working hard to ensure that artificial intelligence is protected through very complex algorithms, however, this creates a larger possibility for errors. Before its prominence, the Intrusion Detection Systems (IDSs) performed the task of classifying intruders. While it is true that techniques have been developed and can improve the performance of IDSs, error is inevitable (Frank). This is because different features make it harder to detect skeptical behavior. This growing form of technology may improve other systems, but it cannot be error-free security since humans are still the ones developing the data.

Dr. Tom Murphy VII analyzed a method for automating a level of the game Super Mario Bros. He stated that a player being tested paused the game indefinitely in order to avoid losing. While some may think that the agent was clever for arriving to this solution, the concept of the game was not to find a way to avoid loss, but rather go through the game without cheating. This is a prime example of flawed systems. Machines are capable in learning a task but do not understand what they are doing. Humans can try to improve this, but more flaws will emerge. Among accidental errors lie another concern: biased algorithms. Biased algorithms occur when systematically prejudiced results are produced due to erroneous data.

According to Osaba and William IV, “…algorithms give the illusion of being unbiased but are written by people and trained on socially generated data.’ This is important because discrimination increases as companies begin to use biased algorithms to their own advantage. Organizations can filter their audience by using these algorithms. Devices require algorithms to function, and humans are responsible for inputting those algorithms. This responsibility offers humans the ability to manipulate consumers into believing false information with profit being the main incentive. They would achieve this by announcing faulty algorithms in hopes that competitors see it and decide to use it for their own devices. This results in inaccurate data in the competitor’s products, pushing consumers away from them.

Even though machine and deep learning, two types of artificial intelligence, are impressive methods for data analysis, their advancements are becoming a greater threat for employment. The purpose of artificial intelligence is to perform tasks that normally require human intellect, including analyzing and storing data. Jobs that specifically serve to store data accurately are at risk of being eliminated due to advancing technology. A doctor, by the name of Satya Ramaswamy, stated that there is a possibility of “a net job loss of between 4% and 7% in key business functions” by 2020 because of smart machines. Machines are developed by people who store and analyze data, which is prone to errors. Large companies that depend on humans for jobs will soon resort to robots, and unemployment will increase significantly: “47% of all U.S. jobs are at “high risk” of being automated in the next 20 years” (Joshi et al.). This is important because there is a large chance of Americans losing their jobs. Artificial intelligence may seem convenient for some workplaces; however, it may conclude in costly actions.

Although artificial intelligence is not often utilized in jobs, dependency on it has increased over the years, and may require employees to receive more costly training. With machines taking over jobs, people must train for tasks unfamiliar to them in order to avoid unemployment. According to a graph from Louis Columbus’s article, jobs requiring technological skills have been increasing since 2013. This means that in order for employees to be relevant in the workplace, they must go through training to become familiarized with technology. It is evident that human performance accuracy has been declining slowly over the past few years, while these systems have increased since 2010. This shows that the risk of unemployment, and the need for workers to receive more training for relative skills is increasing.

Jobs may not want to lose valuable employees, but technological dependence will require extensive training, costing the job more than letting go of its workers. A review by H. James Wilson, Paul R. Daugherty, and Nicola Morini-Bianzino states that “Companies that deploy advanced AI systems will need a cadre of employees who can explain the inner workings of complex algorithms to nontechnical professionals.” Workers who are unfamiliar with the systems may be at risk of losing their jobs to those who understand the technology. With the replacement of old employees, the unemployed may struggle to find a job that requires their skills.

While having stricter regulations for companies using artificial intelligence systems may prove to reduce the use of biased algorithms, there is still a possibility of corporations finding a way around the regulations. There may also be more complex algorithms implemented in order to reduce the risk of hackers, however, cybercriminals may work harder to break through the barriers. People must realize that with technological development drastic changes are to follow, and thus create problems with harder solutions.

It is easy to see the benefits of artificial intelligence, such as an increase in human convenience; however, the drawbacks are not as widely discussed. Devices such as Google Home and Amazon’s Alexa provide cybersecurity, which is protection against cybercriminals, but it is not as efficient as advertised. Monitoring the performance of technology is impossible, and devices can be unpredictable. Since electronics are developed by humans, there will always be error. These errors may result in criminals successfully hacking into devices and stealing information and biased algorithms can give companies a competitive edge.

Technological advancements have also become a threat to employees. Even though it is not used as commonly in jobs, the dependency for artificial intelligence has increased. About half of the jobs in America are expected to rely on artificial intelligence such as robots. Along with rising unemployment, it is crucial for employees to learn how to handle electronics. The increasing use of smart devices in society would increase hacking attempts. While artificial intelligence may be efficient in storing and analyzing data, it is not completely accurate. Artificial Intelligence is a growing concept and cybersecurity may seem like guaranteed protection, but people should not fully trust what is simply shown to them.

Cite this paper

Cybersecurity and Artificial Intelligence . (2021, Feb 24). Retrieved from https://samploon.com/cybersecurity-and-artificial-intelligence/

FAQ

FAQ

Is AI the future of cybersecurity?
AI is definitely a key technology in the future of cybersecurity. By leveraging machine learning and other AI techniques, security systems can better detect and respond to threats in real-time, ultimately protecting organizations from cyber-attacks.
Is Artificial Intelligence better or cyber security?
Artificial Intelligence can be used to improve cyber security by identifying and responding to threats faster than humans can. However, AI can also be used by hackers to create more sophisticated attacks.
Is cybersecurity related to artificial intelligence?
Yes, cybersecurity is related to artificial intelligence. Artificial intelligence can be used to create and interpret patterns in data, which can be used to improve cybersecurity.
We use cookies to give you the best experience possible. By continuing we’ll assume you’re on board with our cookie policy

Hi!
Peter is on the line!

Don't settle for a cookie-cutter essay. Receive a tailored piece that meets your specific needs and requirements.

Check it out