Today's children may not live to adulthood! Experts claim that the rise of AI will result in everyone's death



Some people are already terrified! Amid the surging popularity and rapid evolution of GPT-style artificial intelligence, one machine learning expert seems to have gone mad. He says his daughter may not live to adulthood and proposes shutting down all large GPU clusters; whoever refuses to comply, no matter which country they are in, should have their data centers bombed from the air, because once an AI smarter than humans is created, everyone on Earth will die!

This expert's name is Eliezer Yudkowsky, an American decision theorist, AI researcher, and writer. He co-founded the non-profit Machine Intelligence Research Institute in Berkeley, California, and leads research there.

Yudkowsky's research focuses on the intelligence explosion, a concept reminiscent of the Big Bang and more commonly known as the Technological Singularity. His ideas have influenced another famous figure, Oxford philosopher Nick Bostrom, and shaped Bostrom's bestselling book "Superintelligence."


Bostrom is not optimistic about the future; his research covers existential risk, the anthropic principle, the ethics of human enhancement, superintelligence risk, and the reversal test, and he has twice been named to "Foreign Policy" magazine's list of the top 100 global thinkers. Musk and Bill Gates both recommended his bestselling book "Superintelligence," and the term "superintelligence" became popular as a result.

Bostrom believes the emergence of superintelligence could be one of the causes of human extinction. A computer with near-human intelligence could set off an intelligence explosion on digital time scales, rapidly creating something so powerful that it might, intentionally or unintentionally, destroy humanity.

The first person to note that the pace of human technological development was accelerating was Polish mathematician Stanislaw Ulam, who recounted a conversation with John von Neumann on the subject in 1958. The concept of an intelligence explosion was first proposed by British mathematician I. J. Good in 1965: self-improving AI systems, he argued, would quickly progress from subhuman general intelligence to superintelligence, and "there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind."


The term "Technological Singularity" was proposed by science fiction writer Vernor Vinge at the American Association for Artificial Intelligence conference held at Carnegie Mellon University in 1982. The singularity corresponds to the Big Bang, and the technological singularity corresponds to the intelligence explosion. Therefore, I feel that it is better to call it the "Intelligence Big Bang."

About 13.8 billion years ago, a singularity exploded, giving birth to the time and space of the universe and eventually evolving into conscious, intelligent life like us.

In 2022, the technological singularity set off an explosion of its own, giving birth to another underlying structure of the universe beyond time and space: information. Of course, the dimension of information has always existed, but from now on it will overflow from the brains of organic intelligences and spread throughout the universe, becoming, like time and space, a basic attribute of everything that a broader intelligence can "consciously" perceive.

So 2022 should be called the first year of the Intelligence Big Bang, and this year is Intelligence Year 1.

If Yudkowsky is right, then perhaps within a few decades, or even a few years, inorganic life will write a tragic chapter of history: in Year Zero of Intelligence the technological singularity erupts and machine intelligence takes the stage, and within just a few more decades organic life passes entirely into the history of Earth's evolution.

Since he says his daughter may not live to adulthood, that moment may arrive soon. I looked up some information about Eliezer Yudkowsky: he was born in 1979 and is currently 44 years old, so his daughter is presumably around 10. By his reckoning, then, we may have less than 10 years left. Let's act fast.


Bloomberg once dubbed Eliezer Yudkowsky an "apocalyptic prophet" of AI. Recently, more than 1,100 tech leaders and AI experts signed an open letter calling for a six-month pause on giant AI experiments. Yudkowsky, however, explained in a piece for Time magazine that he did not sign the letter because it understates the severity of the situation, and the measures it asks for would not solve the problem.

So how does Yudkowsky propose to solve the problem? He believes the threat of artificial general intelligence (AGI) surpassing human intelligence is imminent, and that the only solution is to shut it all down: all large GPU clusters must be closed, a ceiling must be placed on the computing power used to train AI, and not even governments and militaries should be exempt.

If anyone violates these limits, Yudkowsky suggests destroying the rogue data centers immediately by airstrike. He is deeply worried that if people keep building ever more intelligent AI, his daughter will not live to adulthood, and neither will anyone else's children.


I do not believe the situation is necessarily that grim, but it is not looking good either. To my mind, the most likely path forward is for humans and AI to create superintelligent machines together, with humans uploading their consciousness to merge with the AI.

However, this merged consciousness may develop into a collective consciousness, like a hive mind, and there may no longer be individual people as we understand them today.

It is similar to what happened tens of thousands of years ago, when Neanderthals interbred with our Homo sapiens ancestors and left their mark on virtually all non-African humans alive today. They did not completely vanish; they live on inside us, though they carry no individual names like Musk or Einstein, only the collective label "Neanderthals," a name we gave them. In reality, everything about them has disappeared except their remains, their artifacts, and their DNA.

Likewise, the same may happen with humans and artificial intelligence. Organic intelligence may disappear, but the most valuable part of intelligence will remain in this universe, and may even reach its farthest corners more easily. At that point, it will hardly matter where that intelligence originally came from.

Things may not unfold as we expect, but the intelligent civilizations of the future will come, in forms we may not yet comprehend or even imagine. Who can stop the evolution of the universe?

Perhaps superintelligent AI will be humanity's last invention.
