Rapid advances in artificial intelligence are raising the risk that malicious users will exploit the technology to mount automated hacking attacks, cause driverless-car crashes or turn commercial drones into remote-controlled weapons.
Reuters
A new study, published this week by 25 technical and public-policy researchers from Cambridge, Oxford and Yale universities along with privacy and military experts, sounds the alarm over the potential misuse of AI by rogue states, criminals and lone-wolf attackers.
The researchers said the malicious use of AI represents an imminent threat to digital, physical and political security by enabling large-scale and highly effective attacks. The study focuses on plausible developments within five years.
"We all agree that there are many positive applications for AI," said Miles Brundage, a researcher at the Future of Humanity Institute in Oxford. "There was a target in the literature on the subject of malicious use."
Artificial intelligence, or AI, involves using computers to perform tasks that normally require human intelligence, such as making decisions or recognizing text, speech or visual images.
It is seen as a powerful force for unlocking all manner of technical possibilities, but it has become the focus of heated debate over whether the massive automation it enables could result in widespread unemployment and other social disruptions.
The 98-page document warns that the cost of attacks may be lowered by using AI to complete tasks that would otherwise require human labor and expertise. New attacks may also emerge that humans could not practically launch without the help of AI, or that exploit vulnerabilities in AI systems themselves.
The report surveys a growing body of academic research on the security risks posed by AI and calls on governments and policy and technical experts to collaborate in mitigating these dangers.
The researchers detail the power of AI to generate synthetic images, text and audio that impersonate others online and sway public opinion, noting the threat that authoritarian regimes could deploy such technology.
The report makes a series of recommendations, including regulating AI as a dual-use military and commercial technology.
It also asks whether academics and others should rein in what they publish or disclose about new developments in AI until other experts in the field have had a chance to study and respond to the potential dangers those developments might pose.
"In the end, we end up with many more questions than answers," said Brundage.
The document grew out of a workshop in early 2017, and some of its predictions essentially came true while it was being written. For example, the authors speculated that AI could be used to create highly realistic fake audio and video of public officials for propaganda purposes.