From MIT Technology Review:
The possibility that a malevolent artificial intelligence might pose a serious threat to humankind has become a hotly debated issue. High-profile figures, from the physicist Stephen Hawking to the tech entrepreneur Elon Musk, have warned of the danger.
Which is why the field of artificial intelligence safety is emerging as an important discipline. Computer scientists have begun to analyze the unintended consequences of AI systems that are poorly designed, built with faulty ethical frameworks, or that do not share human values.
But there’s an important omission in this field, say independent researchers Federico Pistono and Roman Yampolskiy from the University of Louisville in Kentucky. “Nothing, to our knowledge, has been published on how to design a malevolent machine,” they say.
Today, Pistono and Yampolskiy attempt to put that right, at least in part. The key point they make is that a malevolent AI is most likely to emerge only in certain environments, so they have set out the conditions in which such a system could arise. Their conclusions will make for uncomfortable reading for one or two companies.
The development of open-source artificial intelligence is less advanced than its commercial counterpart, but it has recently begun to gather momentum, driven at least in part by fears about commercial rivals.
The highest profile of these efforts is OpenAI, a nonprofit artificial intelligence organization started in 2015 with the goal of advancing “digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”
It is backed by pledges of up to $1 billion in funding from, among others, the tech entrepreneur Elon Musk, who has repeatedly warned of the dangers of AI. (Musk also part-funded the work of one of the authors of this study, Roman Yampolskiy.)
Computer security experts have long recognized that malicious software poses a significant threat to modern society. Many safety-critical applications—nuclear power stations, air traffic control, life support systems, and so on—are little more than a serious design flaw away from disaster. The situation is exacerbated by the designers of intentionally malicious software—viruses, Trojans, and the like—that hunt down and exploit these flaws.
To combat this, security experts have developed a powerful ecosystem that identifies flaws and fixes them before they can be exploited. They study malicious software and look for ways to neutralize it.
They also have a communications system for spreading this information within their community but not beyond. This allows any flaws to be corrected quickly before knowledge of them spreads.
But a similar system does not yet operate effectively in the world of AI research.
Source: https://www.technologyreview.com/s/601519/how-to-create-a-malevolent-artificial-intelligence/
You should really post original content. Otherwise you are likely to run into trouble with @steemcleaners
I don't care. I'm not using Steemit for the same thing you are.
My original content I will not throw into Larimer's pit :)