This week, @justinsunsteemit made a splash on the Steem blockchain with a Twitter post that said:
Some groundbreaking AI will be developed on Tron and @steemit. Stay tuned.
Image by geralt, Pixabay license
After that initial post, he followed up with two more posts (which he also shared on LinkedIn) about his general expectations for the future of "virtual organizational structures". In those posts, he wrote:
The future of nations will primarily rely on virtual organizational structures, with elections conducted on blockchain through digital identities, ensuring 100% transparency and fairness. Populations will consist largely of naturally born humans, those born via surrogacy or biotechnological synthesis, and AI robots. Taxes and all economic activities will be facilitated through crypto. In the future, it will be difficult to determine whether the entity you interact with is a naturally born human, a human born via surrogacy, or an AI robot—or perhaps it won’t even matter anymore. The concept of family will cease to exist. The most powerful entity in the world will be the one that masters crypto, AI, robotics, and biotechnological synthesis. This entity may not even be a nation but rather a corporation, or possibly a decentralized protocol.
You need to ask yourself: Am I ready for all of this?
Because the very concept of “human” will be unprecedentedly challenged, the most powerful organization will be the first to abandon this concept, granting citizenship on a large scale to AI robots and biologically synthesized humans born via surrogacy. The population of this nation (or corporation) will quickly surpass 100 billion, rendering other countries on Earth irrelevant.
Of course, we're all interested in the implications of these posts for Steem and Steemit. We've already had an interesting discussion about what this all means for Steem/Steemit, and I invite you to join in over there. In this post, I want to think about Sun's ideas more generally.
To do that, let's draw on the ideas of Richard Dawkins, Ray Kurzweil, David Chalmers, and Donald Hoffman.
Background: Genetics, Memetics, Futurism, and Consciousness
Genes and memes
In his 1976 book, The Selfish Gene, Richard Dawkins argued that the gene is biology's fundamental unit of selection, and that the history of life can be understood as a history of increasingly sophisticated "survival machines" (viruses, bacteria, plants, animals, humans, etc.) for genes. These so-called machines provided genes with the opportunity to survive and replicate themselves. The most effective "survival machines" were repeated and improved upon. The least effective went extinct.
Importantly, much of the book was devoted to explaining altruistic behavior in evolutionary terms. Why do people (and animals) do things that seem altruistic? In this view, we're "programmed" to behave this way because we share our genes with others. We're more likely to sacrifice ourselves for our grandchild than our grandparent because the grandchild is far more likely to go on to pass on our genes. We're more likely to sacrifice ourselves for a brother than for a cousin because we share half our genes with a brother, but only an eighth with a cousin. The further we are from another person genealogically, the less likely we are to sacrifice for them, because our genes gain no evolutionary advantage from the survival of an unrelated organism.
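Dawkins' reasoning here is usually formalized as Hamilton's rule: an altruistic act is favored by selection when r × B > C, where r is the relatedness between actor and recipient, B is the benefit to the recipient, and C is the cost to the actor. Here's a minimal sketch of that arithmetic (the benefit and cost numbers are purely illustrative):

```python
# Hamilton's rule: altruism is favored by selection when r * B > C.
# r = genetic relatedness between actor and recipient,
# B = fitness benefit to the recipient, C = fitness cost to the actor.

# Standard coefficients of relatedness for some common relatives.
RELATEDNESS = {"child": 0.5, "sibling": 0.5, "grandchild": 0.25, "cousin": 0.125}

def altruism_favored(relative: str, benefit: float, cost: float) -> bool:
    """True when paying `cost` units of fitness to give `relative`
    `benefit` units of fitness is favored under Hamilton's rule."""
    return RELATEDNESS[relative] * benefit > cost

# Paying 1 unit of fitness to give a sibling 3 units is favored
# (0.5 * 3 > 1), but the same sacrifice for a cousin is not (0.125 * 3 < 1).
print(altruism_favored("sibling", benefit=3.0, cost=1.0))  # True
print(altruism_favored("cousin", benefit=3.0, cost=1.0))   # False
```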
Dawkins also introduced a new concept, "the meme". In this view, the meme is the gene's equivalent in the world of information. Memes evolve by replicating through their own "survival machines": minds, computers, and other forms of written and stored media, I suppose.
In this view, Beethoven's 5th Symphony might be a meme (or maybe just the 4 notes that everyone knows so well), and so might Plato's allegory of the cave. AFAIK, the specific "unit" of the meme is not well understood, but at a high level it just represents a piece of information that spreads, repeats, and mutates.
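To make the replicator idea concrete, here's a toy meme being copied from host to host with occasional copying errors (my own illustration, not anything from Dawkins; selection - which variants actually get repeated - is left out):

```python
# A toy meme "replicator": a string copied from host to host, with each
# character having a small chance of mutating at every copy.
import random
import string

def transmit(meme: str, error_rate: float = 0.02) -> str:
    """Copy a meme, mutating each character with probability error_rate."""
    alphabet = string.ascii_lowercase + " "
    return "".join(
        random.choice(alphabet) if random.random() < error_rate else c
        for c in meme
    )

meme = "to be or not to be"
for generation in range(10):
    meme = transmit(meme)
    print(generation, meme)
```

Add differential copying - some variants get repeated more than others - and you have all three ingredients of an evolutionary process: variation, heredity, and selection.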
The Singularity
Moving on from Dawkins, another relevant thinker for this topic is Ray Kurzweil. In his 2005 book, The Singularity is Near, Kurzweil looks at the rate of technological progress throughout history and concludes that humans are just decades away from a time when human consciousness merges with machine intelligence and the combined entity is able to "live" forever.
Imagine a young man who is injured in war and receives prosthetic limbs and a Neuralink chip that restore his mobility, so that he continues to live a full and active life. Now imagine that as he gets older, he starts receiving prosthetic organ replacements, one after another, as his organs fail and the technology to replace them becomes available. How much of his biological self can this man replace and still be a man? According to Kurzweil, all of it. The man's essence - his consciousness - can survive past the end of his biological body. Kurzweil referred to this blending of biology and technology as "The Singularity".
The hard problem of consciousness
One of my favorite edge.org contributors was always David Chalmers. His "claim to fame" was that he described a concept known as the hard problem of consciousness. Many (most?) thinkers imagine consciousness as an emergent phenomenon, where our brains - neurons and synapses - give rise to consciousness. Chalmers noticed that there are two questions that arise from this belief:
- (the "easy" problem) How does it work? What are the physiological processes that cause consciousness to emerge from a wet blob of neurons and synapses; and
- (the "hard" problem) Why have consciousness at all? We (presumably) all have an "inner narrative" or "inner movie of the mind" that's providing a lens through which we view the world. Why is that sort of subjective view of reality more useful than a simpler mechanistic way of controlling our perceptions and behaviors? Robots can act without any sort of inner narrative, so why don't people?
The Reality Illusion
In his book, The Case Against Reality, Hoffman argues that mainstream thinkers have it backwards. He suggests that they have made almost no progress toward solving even the "easy" problem of consciousness, and that the reason is a wrong turn: the assumption that consciousness emerges from physiology.
Hoffman and his students have run computational experiments on autonomous agents and perception. They found, invariably, that agents that perceive the full reality of their virtual world go extinct - they're too slow. The same holds for agents that approximate reality to varying degrees. The agents that survive are the ones that create useful abstractions. In this view, our perceived reality is less an approximation of actual reality and more like the abstract Windows interface that lets us operate and manipulate a computer. When we drag a file from a folder into the recycle bin, we know nothing about what's actually happening inside the machine. Similarly, perceptions are useful, but there's no reason to believe they're accurate representations of reality.
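For flavor, here's a toy version of the kind of simulation Hoffman describes (my own reconstruction, not his actual code). Both agents get the same one-bit perception of each resource; the "truth" agent spends its bit encoding the real quantity, while the "interface" agent spends it encoding usefulness. Because the payoff is non-monotonic in quantity (think water: too little or too much is bad), tracking the truth wastes the bit:

```python
# Toy "fitness beats truth" simulation: two agents with the same one-bit
# perceptual budget choose between two resources of random quantity.
import random

def payoff(q: float) -> float:
    """Fitness peaks at q = 0.5 and falls to zero toward q = 0 and q = 1."""
    return max(0.0, 1.0 - abs(q - 0.5) * 4)

def truth_percept(q: float) -> int:
    """One bit encoding the true quantity (high vs. low)."""
    return 1 if q > 0.5 else 0

def interface_percept(q: float) -> int:
    """One bit encoding usefulness (good payoff vs. bad payoff)."""
    return 1 if payoff(q) > 0.5 else 0

def choose(options, percept):
    """Pick the option with the best percept, breaking ties at random."""
    best = max(percept(q) for q in options)
    return random.choice([q for q in options if percept(q) == best])

truth_score = interface_score = 0.0
for _ in range(100_000):
    options = [random.random(), random.random()]
    truth_score += payoff(choose(options, truth_percept))
    interface_score += payoff(choose(options, interface_percept))

print(f"truth agent:     {truth_score:,.0f}")
print(f"interface agent: {interface_score:,.0f}")  # reliably higher
```

In this toy world, the agent that tracks the truth does no better than chance, while the agent that sees only a useful abstraction thrives - a cartoon of Hoffman's claim that evolution rewards useful interfaces over accurate ones.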
Provocatively, Hoffman argues that if you're not there to perceive them, then the apple that you left in your kitchen and the car that you left in your garage aren't really there at the moment. Don't worry, though, they'll be back when you're ready to perceive them.
Further, Hoffman argues that reality is not based in space-time at all, but rather that consciousness exists at a deeper layer of reality and that our perceived space-time is just an abstraction that consciousness uses to manipulate this deeper reality.
Sun's view on the future of AI
So now, with all that as background, where does Sun's description fit into these frameworks? First, he says that nations will rely on virtual organizations; later, he suggests that corporations and even decentralized protocols may become the new power centers. The first part is fairly uncontroversial, and the second is plausible, though I think nations - with their monopolies on the legal use of force - will probably always have the advantage.
When he suggests that the organizations that combine crypto (micropayments), AI, robotics, and biotechnology will be the winners, I think that's also plausible... and even likely.
But beyond that, his vision of the future depends pretty heavily on a particular understanding of the nature of biology, information, and consciousness.
What he's describing - when he says that the concept of "human" may not even matter - is the full replacement of genes by memes. It's Kurzweil's Singularity.
Buried in that prediction are the assumptions that consciousness can be relocated from a biological organism to a technological entity, and that the essential part of what it means to be human emerges from physical processes and can be reproduced outside the brain. Also buried in that prediction is the likely eventual extinction of humans as a biological species.
If Sun is right that family, as a concept, will stop existing, my understanding of Dawkins suggests that this would usher in an end to altruism - and with it, the loss of one of evolution's levers for adaptation and improvement.
Personally, I'm skeptical of Kurzweil's claim that human consciousness can be transplanted to machines, and I'm even open to Hoffman's idea that consciousness operates at a deeper layer of reality than our perceptions of space-time (though I have a really hard time with the idea that my car only exists when I'm looking at it).
So, in the end, I guess I agree with Sun's vision insofar as mastery of AI, robotics, micropayments, and biotechnology will be a key to the future success of organizations, but I'm skeptical about the philosophical and metaphysical implications of the future he describes.
One thing that I fully agree with is this:
You need to ask yourself: Am I ready for all of this?
What are your thoughts about Sun's vision for the future?
Thank you for your time and attention.
As a general rule, I up-vote comments that demonstrate "proof of reading".
Steve Palmer is an IT professional with three decades of professional experience in data communications and information systems. He holds a bachelor's degree in mathematics, a master's degree in computer science, and a master's degree in information systems and technology management. He has been awarded three US patents.
Pixabay license, source
Reminder
Visit the /promoted page and #burnsteem25 to support the inflation-fighters who are helping to enable decentralized regulation of Steem token supply growth.
Wow, there’s some degree level philosophy going on there!
I had this conversation about a decade ago. We’re at a point where every moment of a person’s life can be captured on camera. We have the capacity to store this knowledge and AI has the capacity to learn and understand it.
Piece this all together and after somebody’s died, “they” can exist as an entity that you can communicate with and that has learnt to reply as they would have done. Does this make the AI them? Certainly not. But add in robotics and synthetics (like Blade Runner) and what do you end up with?
When AI communicates with AI, why wouldn't this alternative form take our place? They don't need an increasingly scarce resource (food), so they have an advantage.
Are you ready? No. None of us really are.
I’ll be back.
Yeah, this is where the easy and hard questions of consciousness come in. Eventually, I have little doubt that we'll be able to simulate consciousness in machines to any desired level of approximation - as seen from the outside. I seriously doubt that machines will ever experience the "inner movie of the mind", though.
This is why I always focus on the question of property rights.
They don't need food, but they do need energy. And energy needs to be paid for. As long as humans are the ones who own the property, we have control over the AIs' resource consumption. As soon as we let AIs control our access to food, or buy and sell energy without permission from humans, we're asking for resource shortages and other problems.
Of course, if AIs do reach true sentience, denying them ownership rights is a tricky ethical topic. IMO, human survival comes first, but I'm sure the AIs would disagree...
I totally agree with this. All we have is OODA and crossing our fingers🤞.😉
I believe a future shaped by the dominance of artificial intelligence and robotics is imminent, as the world's governments and nations will come to measure their power by their technological development in AI and robotics. The other implications this may have must be continually evaluated as the future unfolds.
Excellent post - I don't think I've seen one like this in a long time, with so much analysis and interpretation. Thanks for providing us with such a deep reflection.
I'm glad you enjoyed it. Thank you for reading, and thanks for the feedback! I definitely agree that these technologies will be heavily prioritized by governments and nations.
Justin Sun's vision of a future where the concept of "human" is transcended by AI, robotics, and biotechnology is fascinating and unsettling. While the merging of these technologies seems inevitable, will this lead to the loss of humanity’s core values like altruism and identity?
I haven't dug into this topic in a ton of depth myself, but to me it seems like it's making a big assumption that you can create a simpler non-conscious thing that does what conscious things do. We have no way of knowing for sure whether other things have subjective experiences or what they're like. Maybe anything that has a complex-enough internal model to be able to successfully interact with us will have the equivalent of an inner narrative. Maybe the way we humans do it is actually the simplest way.
I don't think that's right. The reason for something arising originally doesn't have to be the reason it continues existing. Evolution frequently grabs things that are "lying around" and uses them in other systems (e.g. whale flippers based on terrestrial legs). Some of our "nice" behaviors may have arisen for kinship reasons, but if they're good and self-sustaining on their own then taking away the starter won't make them stop.
I think it raises some interesting questions. I'm not sure I buy bio-technical synthesis as being that important. I think the question of whether you can ship-of-Theseus swap out a consciousness's substrate is sort of interesting from a philosophical perspective, but I'm not sure it will have much practical relevance.

I think the more interesting question is whether AIs (of whatever form) can do the kinds of things that we can do that make us believe other people are as conscious as we are. I don't see any reason why that should be impossible, but it's an open question whether the current LLM paradigm is on its way there or if other elements will be necessary.

And from the perspective of AIs potentially being different kinds of things, it's not obvious if "number of agents" will be an especially important number in the future -- it has historically been a good heuristic with humans because we have some intuitive sense of how multiple humans "add up", but entities that don't operate the same way may raise some interesting questions. Like, would it actually be valuable to have a country with 100 billion identical entities, or is something like the diversity of ways of thinking (that we tend to get "for free" with the way that each human tends to have unique genes and a unique experience of developing in the world) important?
I think there's certainly a possibility for catastrophe at the hands of AI expanding out of control. But there are also potentially good things. Maybe entities that are more native to a world of information won't have the same scarcity concerns that we do. Maybe they'll understand things in a different way than we do, in a way that ends up being beneficial to everyone. Maybe blockchain-native entities would be naturally inclined toward distributed, mutually-beneficial arrangements with other entities rather than basing things on physical-world power and status (I'm kind of skeptical on that one; I think the distributed nature of blockchains is more aspirational than real with the current models, and entities that optimize themselves to thrive in the current crypto ecosystem may have some rather unpleasant traits).