From personal assistants like Siri to self-driving cars, artificial intelligence has been making news a lot lately—and so have questions about what it may mean for the future of everything from business to national security. While visionaries predict that AI may someday rid the world of hunger, poverty and war, others are voicing concerns about issues like privacy and public safety. One of the naysayers, the late Stephen Hawking, said, “Artificial intelligence may be the worst thing to happen to humanity.”
Since there’s no predicting how far AI will go, or if and when it will surpass human intelligence, it’s difficult to know what the future holds. But experts are already talking about the security risks, and companies are taking note.
1) AI can be used to create cyberweapons.
In a report published by Cambridge University in February 2018, a group of experts from the U.S. and U.K. warned that computers could use speech technology to impersonate targets or carry out “superhuman hacking” of drones, smart cars or weapons systems, with devastating effect. Warnings like these have led AI visionaries like Tesla CEO Elon Musk to call for tight regulation to keep AI from being misused or falling into the wrong hands.
2) AI will make hacking more sophisticated.
Recent research has shown that bots can find certain bugs in computer systems far faster than humans can, and that hackers can use this AI-driven technology to scan system software for vulnerabilities and then exploit the compromised machines in ransomware-style attacks.
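The core of automated bug-finding is simple to illustrate: throw large volumes of random input at a program and record what makes it crash. Below is a minimal fuzzing sketch; `parse_record` is a hypothetical parser with a deliberately planted bug, not any real system, and is there only to give the fuzzer something to find.

```python
import random

def parse_record(data: bytes) -> dict:
    # Hypothetical parser with a planted bug: it crashes on short input.
    if len(data) < 2:
        raise ValueError("truncated record")
    return {"type": data[0], "length": data[1], "body": data[2:]}

def fuzz(target, trials=1000, seed=0):
    """Feed random byte strings to `target`, collecting inputs that crash it."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(8)))
        try:
            target(blob)
        except Exception:
            crashes.append(blob)  # a human would now triage this input
    return crashes

crashing_inputs = fuzz(parse_record)
```

A machine can run millions of such trials per hour, which is why bots outpace human bug hunters; modern tools add coverage feedback and learned input models on top of this brute-force loop.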
Hackers could also use artificial intelligence the way financial firms do, automating tasks like payment processing so they can collect ransoms more quickly. They could even deploy chatbots to talk with ransomware victims, targeting many individuals at once without ever having to communicate with them personally.
But cybersecurity firms are looking to machine learning, a type of artificial intelligence, to counteract these threats. They’re using algorithms to detect patterns of abnormal computer activity and block malware before it wreaks havoc on systems.
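The idea of detecting abnormal activity can be sketched with a simple statistical baseline: learn what “normal” looks like, then flag values that deviate sharply from it. This is a deliberately minimal illustration using standard-deviation scoring; the connection counts are invented, and real products use far richer models and many more features.

```python
from statistics import mean, stdev

def flag_anomalies(counts, threshold=2.5):
    """Flag values more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(counts), stdev(counts)
    return [x for x in counts if sigma and abs(x - mu) / sigma > threshold]

# Hypothetical daily outbound-connection counts for one host;
# the final day's spike could indicate data exfiltration.
daily = [52, 48, 50, 51, 49, 53, 47, 50, 400]
suspicious = flag_anomalies(daily)  # flags the 400-connection day
```

Machine-learning systems generalize this pattern: instead of one hand-picked statistic, they learn a multidimensional profile of normal behavior and score new activity against it.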
Artificial intelligence also has the potential to make phishing attacks far more convincing. Cybercriminals could mine individuals’ online information from social networks like Facebook and Twitter to automatically generate enticing emails, sent from fake accounts that mimic the writing style of friends so the messages look authentic.
3) AI will spread misinformation and propaganda.
With all the talk about fake news and the recent election, this risk is getting a lot of play in the media right now. Not only have bots planted fictitious information on social media sites like Facebook and Twitter, but AI has reportedly been used to fabricate audio and video of political figures that look and sound like the real people in an effort to sway public opinion. And the risk of misinformation being used to harm organizations is just as great in the business world as in the political arena.
Where this will all lead is anyone’s guess. One thing’s for sure, though: while the brave new world of AI is only in its infancy, it’s not too early for companies to grapple with all of the ethical, social and economic issues it raises.
Addressing the risks now
While the risks posed by AI can’t be eliminated entirely, they can be managed. The best way to provide adequate safeguards is to regulate the technology itself, just as Elon Musk and others have recommended. Measures like requiring testing protocols for the design of AI algorithms and establishing input validation standards would help. And, of course, every enterprise doing business today should reinforce its cybersecurity protections with the most advanced security technology available.
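To make the idea of an input validation standard concrete, here is a minimal sketch of a validation gate that checks data before it reaches a model. The field names and bounds are invented for illustration; a real standard would define schemas, ranges and type rules appropriate to each system.

```python
def validate_input(features: dict) -> dict:
    """Reject malformed or out-of-range inputs before model inference.
    Field names and bounds here are illustrative, not from any standard."""
    schema = {
        "age": (0, 130),
        "transaction_amount": (0.0, 1_000_000.0),
    }
    for field, (lo, hi) in schema.items():
        if field not in features:
            raise ValueError(f"missing field: {field}")
        value = features[field]
        if isinstance(value, bool) or not isinstance(value, (int, float)):
            raise TypeError(f"{field} must be numeric")
        if not lo <= value <= hi:
            raise ValueError(f"{field}={value} outside [{lo}, {hi}]")
    return features

ok = validate_input({"age": 30, "transaction_amount": 100.0})
```

Gates like this won’t stop every attack, but they shrink the space of inputs an adversary can use to manipulate an AI system, which is exactly the kind of safeguard a standard could mandate.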
Artificial intelligence is an exciting new frontier with lots of positive potential—as long as the human intelligence behind it keeps thinking ahead.