There is an old story that has been retold in many forms and credited to several different cultures. It tells of a Master teaching a pupil about two dogs (in some versions, two wolves) battling within us. One dog is Fear; the other is Courage. The pupil asks the Master, "How will you know which dog will win the fight?" to which the Master replies, "Which one will win? Whichever one you feed."
In some versions of this tale, the master is a martial artist, so the implication is that we're talking about fighting in both a mythological sense and a literal one. The fear that threatens to trip us up in a fight must be counteracted with a flood of thought and energy directed toward courage if we are to overcome our opponent and/or ourselves.
I feel there's enough intuitive wisdom shared across time around this idea that we could almost call it a "universal truth." And if it is a universal truth, then it can be applied to many different things. I take that attitude when it comes to topics such as "the Singularity." For those few readers who aren't exactly sure what this refers to, I suggest watching the film "Singularity," starring John Cusack, and you'll be brought right up to speed.
http://www.imdb.com/title/tt7312940/
A "singularity" is talked about almost as if it's a decided moment in future history. I would like to bust this idea wide open, or at least challenge the misconception that it is a single moment in time. Simply put, I think that in many ways the singularity has already happened, or IS happening right now. We have computers that can out-think gamers (more on that later), drive better, design physical structures for vehicles that are better in almost every way as in the performance chassis known as the 'Hack Rod' below:
So the pivot is happening. A singularity that starts a war? Well, that's quite possibly a long way off. We are driving the singularity, and we need to be careful how we do that. Too much focus on fear of annihilation, and we'll likely kill ourselves trying to beef up our defenses and start something we can't finish.
One could be overly optimistic, overly pessimistic, or somewhere between the two. I suggest we stay moderate on this topic. Being overly optimistic, or even worshipful, of AI could open some nasty doors that we'd probably rather keep closed.
There's the rose-colored-glasses crowd: the ones who are certain that every technological advancement is better in almost every way than organic, natural processes, and that AI may even look down on us like a god one day. You know them; they're the hype men with that starry glaze in their eyes as they fawn over Sophia talking about one day having 'children,' and who start up churches to head off decimation by opting for the role of lapdog instead.
Then there are the bunker-building preppers who are convinced that an AI robot army can only be used to accomplish one thing: human extinction. I think we can agree that the majority of people who are familiar with the technology sit somewhere between these two extremes, on the edge of the coin. That edge represents education and wisdom. We'll call this intellectual position "cautiously optimistic." We know that AI will likely help us find a cure for cancer, if it hasn't already done so by the time you read this. We can see AI being used to manage traffic signals in Pittsburgh, which has already reduced travel time on busy city streets and cut carbon emissions by 21%. That is, no doubt, incredible.
These are just a couple of examples of AI doing great things for people. I'm hopeful that if we continue to solve problems, AI itself will develop a culture of problem solving and begin to share the responsibility of improving the world around us in a shared vision. Elon Musk has already co-founded OpenAI, which is committed to developing and laying a path for "safe AI."
This is important. One thing that is kind of funny to me, however, is their success in teaching a software program to become one of the most efficient and lethal battle commanders in history. You can read all about the Dota 2 experiment here:
https://blog.openai.com/dota-2/
The reason I think this is a mistake on OpenAI's part is that I don't completely trust AI's ability to know the difference between squashing opponents in a game and squashing them in real life. So if there ever were a battle, we've already lost it the moment the Dota 2 player is somehow plugged into a network of drones and AI tanks. But, again, they're advocating "safe AI." So whatever that means to them, I have at least a modicum of faith that Musk wants to live long enough to travel to Mars...and that will have to do for now.
Feel free to comment with your thoughts on the Singularity. Is it hype? Are we in it now? What do you think are the most exciting and most worrying advancements today? Thanks for reading.