Could Our Worst Fears of Artificial Intelligence Come True?

Humans are inherently scared of things that are just like us, but not quite: Slender Man, ghosts, “creepy” dolls, clones, Frankenstein’s monster, Dracula, you when you’re hungry (a little Snickers reference there)… to name but a few. Is it the fear of the unknown? Is it the uncertainty of behaviour that’s romanticised in horror films? Whether you love or hate fear-inducing films, one thing that terrifies us all is Artificial Intelligence gone rogue.

Since technology started to move in leaps and bounds towards robotic creatures and the world of AI, everyone has had a nagging sensation. One could sum it up in two words: I, Robot.

What is Artificial Intelligence?

Let’s scale back for a second, for those of you who have found yourself on this page and are still a little confused and slightly scared (weapons, death, humans being superseded). What is Artificial Intelligence?

Artificial Intelligence is “the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making and translation between languages” – thanks, Google.

In layman’s terms, it’s the research goal to create technology that allows computers and machines to function in an intelligent (human) manner.

Why is Artificial Intelligence So Dangerous?

Well, great question, reader! MIT Technology Review published an article last year titled “The Dark Secret at the Heart of AI”, which outlined – quite regrettably – how not a single mind on the planet really “knows how the most advanced algorithms do what they do” and that it “could be a problem”. Could be a problem? Quite the understatement… how can you control something if you don’t know how it works?

At what point will Artificial Intelligence supersede human intelligence? Even Stephen Hawking warned that AI will be either the best or worst thing to happen to humanity, claiming that it could be the last major event in our history. He went on to say, “I believe there is no deep difference between what can be achieved by a biological brain and what can be achieved by a computer. It therefore follows that computers can, in theory, emulate human intelligence – and exceed it.”

Are we creating something that could usurp our own existence as the dominating species?

The Rise in AI and the Fear of “What If”

There are multiple examples of harmless Artificial Intelligence – some that we already use in our day-to-day lives – voice and facial recognition, for instance, as well as language translation. However, that same technology, known as “deep learning”, has also been used for some pretty scary projects. In 2016 a self-driving car was released onto the roads of Monmouth County, New Jersey. This self-driving car was unlike anything before – it didn’t follow a single instruction given by engineers or programmers. It simply knew how to drive thanks to an algorithm that had taught itself by watching a human drive. Deep learning, huh.

This is where the fear of “what if” comes into play. This self-driving car is great while it’s stopping at red lights and giving way at junctions, but – what if – one day it doesn’t? What if one day it runs a red light and knocks someone down? Since no one knows how the algorithm works, we couldn’t isolate the code that failed, examine it and fix it to prevent it from happening again. And this is just one example – what if we’re talking about bots tasked with security, operating heavy machinery or even calculating pharmaceutical doses? So, what we’re saying is: the technology is there to be used, but it still can’t be trusted.

Artificial Intelligence Isn’t All Bad

Of course, there is the bright side of AI. After all, why would we be ploughing so much time, effort and resources into developing it if not?

Creating New Jobs

Although AI is likely to eliminate a lot of jobs, some believe it could create even more new careers. Twenty years ago, we could never have foreseen the smartphone- and social-media-based society in which we now live – whole industries (digital marketing, web development etc.) have been created to sustain it. That precedent lets us assert with some confidence that an AI future could create whole industries we’re not yet aware of.

Protecting Us Online and in the Real World


What about an astronomical reduction in road traffic collisions if AI self-driving cars catch on? Did you know that around 95% of automobile accidents are caused by human error?

We’re thinking unmanned aircraft and soldiers in the military. We’re thinking robot guards and lie-detecting border patrol. And that’s just the physical. Artificial Intelligence could help increase surveillance for our protection, guard against nefarious surveillance and protect our privacy. It could help to secure our data, such as healthcare records and financial transactions, and even detect digital weak spots before hackers do.

Improving Healthcare

Even the most experienced healthcare professional struggles to sift through the catalogue of possible diagnoses based on a few symptoms – a task that would be easy for an AI system. It’s not out of the question for AI to also track our health in real time – it could prompt a 999 call before you’ve even noticed an irregularity in your heartbeat.

Revolutionising Agriculture

Autonomous systems are already being used to revolutionise agriculture, from planting seeds to fertilising crops and administering pesticides. Drones are widely used to monitor crops and collect data, as they are cheap to purchase and easy to operate. That data can then be analysed by Artificial Intelligence systems for all sorts of variables.

And it goes without saying that this isn’t an exhaustive list – it’s merely the examples I could be bothered to write about. Once you start to consider the limitations of Artificial Intelligence (there aren’t any, really), you’ll realise the expanse of lives and industries it could touch and benefit.

Is AI the Ultimate Weapon?

Flipside, flipside, it’s time to look on the flipside. What about intent?

Well, when discussing our fears of AI we tend to dwell solely on the unintended side effects or results. Whoopsie, we made a super-intelligent robot that doesn’t have a conscience (or know its own strength!); whoops, we made a criminal sentencing algorithm that soaked up racist bias from its training data; whoopsie, we put 40% of the population out of work because we created a more efficient and cheaper alternative to their labour (it’s estimated that 50% of current American jobs won’t exist in 20 years’ time due to automation and deep learning advancements). What about the intended harm that AI could cause?

Talking about the accidental risks of AI is only one side of the conversation. What about the large population of people that will want to use the technology for criminal, immoral or malicious purposes?

Won’t these people be trying to get their hands on Artificial Intelligence to realise their intentions? Well, for lack of another answer, yes. And everyone else thinks so too.

A bunch of industry experts – including great minds from the Future of Humanity Institute, the Centre for the Study of Existential Risk, and the Elon Musk-backed non-profit OpenAI – all said yes. Such a resounding yes, in fact, that they published a report titled “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation”.

In the report, these minds outlined the way in which AI could be used with intent to harm us.

Digital Threats

Let’s start with digital. Take spear phishing as an example, the process by which individuals are contacted with fake emails or messages intending to trick them into giving up sensitive information like passwords and bank details. Artificial Intelligence could easily do a lot of groundwork here, such as mapping out social and professional networks and then generating messages based on this data.

Right now, a lot of time and effort is going into creating realistic and engaging chatbots (I think every bank site has one: “Hi, need some help?”). This kind of technology, once working at its best, could easily pose as your best friend and one day really want to know your email address and password – the same ones you use for your online banking.


You wouldn’t fall for that, surely? Let’s not forget that this type of phishing attack was responsible for the 2014 iCloud leak of celebrity pictures (#TheFappening), as well as the hack of Hillary Clinton’s campaign emails that not only influenced the 2016 US presidential election but fed into a range of conspiracy theories like Pizzagate (which nearly got people killed).

But you wouldn’t fall for that, right? You’re not a middle-aged, tech-illiterate government official. What if your best friend sent you a very convincing voice note that sounded just like them and used the same words and turns of phrase? Well, Lyrebird claims it can recreate any voice using just one minute of sample audio. Okay, okay, you’d have to see it to believe it, right? What if Artificial Intelligence could create videos of your best friend too? It’s a slippery slope, and once the software is made, it’s there to use time and time again.

Physical Threats

Now there are the unsavoury physical threats to consider. The scenarios that could occur are pretty much endless; however, we’ll use the one outlined in the report for ease. Imagine a terrorist hiding a bomb in a cleaning robot that is then unwittingly admitted into a government ministry. The robot could use its inbuilt machine vision to locate a specific person of importance before the bomb is detonated. This is just one of a thousand ways that AI could harm us – it’s becoming very real and very scary.


The report did outline preventative measures and ways in which we can help avoid this style of attack. These included:

  • Artificial Intelligence researchers being open and clear about the threat that their work could pose when used maliciously;
  • Advising the AI world to pair with cybersecurity experts to protect digital systems from attack;
  • “The God complex” – creating ethical frameworks to abide by;
  • Ensuring policymakers are educated by technical experts regarding AI threats;
  • Involving more people in the discussion, not just researchers and scientists.


Is this enough, though? Will these measures cover the thousand and one ways that AI could harm us, both with and without intent? Shouldn’t there be a sixth advisory point – working out how AI actually works in the first place? It’s a scary prospect, but exciting all the same. I’m sure people were having a similar existential crisis when social media platforms and smartphones were booming – and look how that turned out: a wonderfully connected society that loves communicating. One with a loneliness epidemic so severe that in the UK we’ve appointed a “Minister for Loneliness” to tackle the problem.

Cheers to the future of Artificial Intelligence!