The real danger of AI is distinctly human
When you think of AI being dangerous, you probably think of the classic “killer computers” portrayed in movies. Skynet, HAL 9000, whatever that evil computer in I, Robot was called - you know the ones. The idea of a computer system becoming self-aware and attempting to overthrow mankind is a long way off (we hope), but there’s an emerging threat posed by AI which can’t be ignored: the human element. Namely, what happens when powerful machine-learning AI (and the myriad valuable data it acquires) falls into the wrong hands?
A recently published report on “The Malicious Use of Artificial Intelligence” goes into great detail on the potential nefarious applications of AI.
One thing that AI has proven itself rather capable of is pattern recognition, a skill security experts are already utilising to spot malicious code and hacker intrusions. In their 2018 Cybersecurity Report, Cisco revealed that more than half of all Internet traffic is encrypted. Human analysts may not be able to “read” encrypted data, but machine-learning AI can still observe patterns in that traffic, and anomalies in those patterns. This pattern recognition can be used to highlight malware which would otherwise go undetected.
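To make the idea concrete, here is a minimal sketch of metadata-based anomaly detection. It uses made-up packet sizes and a simple z-score test rather than any real security product, but it illustrates the principle: even when payloads are encrypted, features such as packet size remain observable, and statistical outliers in those features can flag suspicious traffic.

```python
from statistics import mean, stdev

def flag_anomalies(packet_sizes, threshold=2.0):
    """Return indices of packets whose size deviates more than
    `threshold` standard deviations from the mean (a z-score test)."""
    mu, sigma = mean(packet_sizes), stdev(packet_sizes)
    return [i for i, s in enumerate(packet_sizes)
            if sigma > 0 and abs(s - mu) / sigma > threshold]

# Mostly uniform traffic with one outsized burst (hypothetical data).
sizes = [1500, 1480, 1510, 1495, 1505, 1490, 9000, 1500, 1485]
print(flag_anomalies(sizes))  # → [6], the outsized burst
```

Real systems learn far richer patterns (timing, flow direction, handshake metadata) with proper machine-learning models, but the underlying logic is the same: no decryption required, only deviation from the norm.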
However, the flipside to this innovative use of digital intelligence is that any technology used for security can be used to counter that security. Just as experts can use AI to track down anomalies, hackers can use that same AI to locate and expose weaknesses in security systems. The IT security landscape is a cat-and-mouse game where each side looks to exploit any advancement made by the other, and AI is especially vulnerable to that exploitation. Take facial recognition, for instance. When applied to a CCTV camera network, this tech has immense value in tracking down criminals. However, if facial recognition were applied to a drone, that harmless flying machine would suddenly become a dangerous spying device. And if that drone were armed, it could become a lethal target-seeking weapon.
Real fake news
The way we consume news has changed radically within the last decade. One unpleasant side effect of this shift is that the lines between fact and fiction are often blurred by those with an agenda to push. Facebook has come under a barrage of fire over “fake news” in recent times, but the heart of the issue extends to the ease with which content can be fabricated in order to manipulate people’s views and opinions. Twitter faces a daily battle against an army of bots showing their support for the flavour of the month. Biased news outlets disparage their opposition with stories ranging from dubious to outright false. Technology helps to create convincing fakes such as misleading Photoshops or “deepfake” videos.
The question is - can an AI learn to create fake news? Through machine learning, a computer could analyse the worst that the fake content epidemic has to offer, turning itself into the ultimate “propaganda machine”. In a culture where the loudest voice is often the only one that is heard, such technology could be devastatingly disruptive.
The Malicious Use of AI report makes it clear that artificial intelligence is a double-edged sword, and AI developers must take steps to ensure misuse is kept to a minimum. Such technology must be engineered to be less exploitable, and laws and regulations must be put in place at an international level to govern the responsible usage of AI. For now, it seems only time will tell if advances in computer brains truly pose a threat to our human ones.