If/When Machines Take Over
The term “artificial intelligence” was only just coined about 60 years ago, but today, we have no shortage of experts pondering the future of AI. Chief amongst the topics considered is the technological singularity, a moment when machines reach a level of intelligence that exceeds that of humans.
While still largely confined to science fiction, the singularity no longer seems beyond the realm of possibility. From large tech companies like Google and IBM to dozens of smaller startups, some of the smartest people in the world are dedicated to advancing the fields of AI and robotics. We already have human-looking robots that can hold a conversation, read emotions — or at least try to — and perform one type of work or another. Foremost among the experts confident that the singularity is a near-future inevitability is Ray Kurzweil, Google's director of engineering. The highly regarded futurist predicts we'll reach it sometime before 2045.
Meanwhile, SoftBank CEO Masayoshi Son, a quite famous futurist himself, is convinced the singularity will happen this century, possibly as soon as 2047. Between his company’s strategic acquisitions, which include robotics startup Boston Dynamics, and billions of dollars in tech funding, it might be safe to say that no other person is as keen to speed up the process. Not everyone is looking forward to the singularity, though. Some experts are concerned that super-intelligent machines could end humanity as we know it. These warnings come from the likes of physicist Stephen Hawking and Tesla CEO and founder Elon Musk, who has famously taken flak for his “doomsday” attitude towards AI and the singularity.
Clearly, the subject is quite divisive, so Futurism decided to gather the thoughts of other experts in the hopes of separating sci-fi from actual developments in AI. Here’s how close they think we are to reaching the singularity.
If artificially intelligent machines were to become like humans, able to "think", learn, and become self-aware, that's when humanity might run into some issues, and AI would no longer be so artificial!
Humanity will only have issues if artificial intelligence can exceed human abilities and self-awareness. But I think we as humans can work hard to stop that kind of AI from emerging.