China's Gamification of the Enslavement of the Mind via Sesame Credit + AI Decision Making as the Worst Centralization Scenario

in technology •  7 years ago 

I've always loved games and I've been interested in making great games, so I tend to read and watch a lot about them. That's how I first came across Sesame Credit in 2016, and I found it bizarre and disgusting. Considering current events around social media, like Zuckerberg's Congressional hearings and all sorts of censorship, I think sensible people would be fully opposed to an idea like this.

But then again, not all people are sensible.

You are looking at things too one-dimensionally. This system will teach people to think clearly of the consequences before acting. And as for the convicts, if they had committed a crime when not under surveillance, who is to say they could be trusted? Even if they regret it, they didn’t think their actions through or weren’t being careful and thus ended up in a bad place. Either way it would be their own fault. And keep in mind that they only use public records (bank records, criminal records...) Their score is rated by the system, not by the people you interact with.
Besides, this credit system isn’t the only paying app available to use and certainly is not the most popular. (Im from China and only one or two of my friends use this paying system.) This paying system has been released and talked about for years, not only in China but in the West too.
Instead of wasting your time rambling on about the cons of using this, it would be much more productive and practical for you to provide solutions.
-Flora Hung (An Internet Idiot)

But this isn't the biggest problem out there. There is one thing that could make everything far worse: AI. I'm a big fan of technology, like most people on the blockchain. There's a lot of talk about AI and machine learning. To me AI is a tool, and it can be used for anything. The invention of computers created a whole new kind of criminal activity called cybercrime. These computers and mobile devices also created many attack vectors to ruin individual lives. Just think about the data hacks and the Fappening.

AI as a Central Point of Failure


Everything is getting decentralized these days. Software dependency is an exception. Hardware and information can be spread around, but the underlying software cannot be. Blockchain and cryptos are at the same time the most decentralized and the most centralized innovation on Earth. Think about what happened to Verge, which was attacked by hackers for 3 hours while the software was exploited to instamine almost a million dollars' worth of coins. You could also take a look at what happened with IOTA a few months ago with massive network congestion, or that time when BTC fees were so high that one could have FedExed the private keys for a lower price, which also might have been faster than the actual BTC network.

We are always going to need options and alternatives. Universal AI is the exact opposite.


Since computer code isn't biased towards anybody, many think about letting AI be in charge of governance, law, ethics, etc. What these people forget is that though AI cannot be biased, it can be bugged. Think about what life would be like if there were no alternative to BTC. If we rely on AI and things go bust, we won't have options. Murphy's law is real: anything that can go wrong, will go wrong. With things as complex as AI, we won't even see it coming.

This isn't some rogue AI or dystopia I'm talking about. I'm talking about things like the Ethereum DAO hack/non-hack, the IOTA congestion, Monero's privacy problems (https://monerolink.com/monerolink.pdf) that were allegedly fixed, etc. Think about a world where AI deals in matters of subjectivity like law and morality. Unlike fixed systems, AI is always evolving. I'll let you take a look at how complex a fixed, non-AI system can become with an example from the video game Dwarf Fortress.

Compared to the complexity of an AI, Dwarf Fortress is as easy and simple as ABC. If we build AI in a way where we have to depend on it instead of using it as a supplementary tool, and if we start relying on the same code-base on a global level, a catastrophe could occur simply out of the complexity of the system, even without a single malicious intent behind it. Next time you think about how great unbiased AI can be, don't forget how things can get screwed up if we rely on it in the wrong way.
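
To make that concrete, here is a rough toy simulation (my own sketch, with made-up numbers, nothing more) of why a single shared code-base is scarier than diversified, independent ones: a monoculture fails everywhere at once, while diversified systems fail a little at a time.

```python
import random

# Toy illustration: 1,000 deployments that all depend on either one shared
# code-base or 1,000 independent ones. A latent bug in the shared code-base
# hits every deployment simultaneously; independent code-bases fail on their own.

def shared_codebase_failures(deployments, bug_probability=0.01):
    # One code-base: if the hidden bug triggers, every deployment goes down.
    return deployments if random.random() < bug_probability else 0

def diversified_failures(deployments, bug_probability=0.01):
    # Independent code-bases: each deployment fails (or not) separately.
    return sum(random.random() < bug_probability for _ in range(deployments))

if __name__ == "__main__":
    random.seed(42)
    trials, deployments = 10_000, 1_000
    worst_shared = max(shared_codebase_failures(deployments) for _ in range(trials))
    worst_diverse = max(diversified_failures(deployments) for _ in range(trials))
    print("worst case, shared code-base:", worst_shared)   # all 1,000 down at once
    print("worst case, diversified:     ", worst_diverse)  # a couple of dozen at most
```

The average failure rate is the same in both cases; the difference is that the monoculture concentrates all of its failures into a single, system-wide event.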

People are not thinking far ahead when it comes to things like Sesame Credit. That's how malicious government entities take control from people and create 1984 scenarios. But that's still just simple surveillance. These are non-evolving, static systems. If you want to stay away from the harm, you need to stay off the radar of the system. When it comes to AI development, just having open-source code and even trying our best to keep things bug-free isn't enough. AI dependency can become the worst form of centralization mankind has to face. We must never allow AI to have control over non-objective matters. AI can be a great problem solver, and we should use it for that purpose. But AI should never be a judge when it comes to things like morality, or else capitalism will have ended up creating the ultimate form of communism.


Interesting to consider your comparison between AI and Dwarf Fortress. Given the nascent militarization of drones, and Google's application of AI to targeting systems, it is immediately obvious that if AI goes bad, it can go really bad, with armed drone swarms wreaking havoc on society.

Or it could just make a really silly mistake, like letting meat go bad and still allowing it to be served to people globally. With AI coordinating logistics, it could give the whole world food poisoning at once.

Did you ever read my short story 'Fresh Meat'?

Thanks!

Or it could just make a really silly mistake, like letting meat go bad and still allowing it to be served to people globally. With AI coordinating logistics, it could give the whole world food poisoning at once.

This is the most likely scenario. Most of the dangers of AI come from unpredictability and uncontrollability that make centralized economic planning look like archery at point-blank range. You don't know what's going to happen in a complex system. An extreme scenario would be extremely improbable.

We also have to understand that AI doesn't pop out as a completed product. It is always learning, and the data and logic fed to it are what eventually define it. There was this quote (I think from IBM) that said "Humans should think. Machines should work." With AI involved, I'd say humans should decide and create while machines do the thinking. Think about the Ethereum Parity wallet case. AI is just going to make such things happen on a grander scale. That's why I say that humans should have the deciding power at all times, and there should also be proper diversification of AI. Most malware doesn't even work on Mac/Linux. Isolated AI code bases are the way to go.

Technology is never good or evil. It just amplifies things. In 3001: The Final Odyssey, Arthur C. Clarke entertains the idea of supernova explosions being industrial accidents. That's where we are going. No technology is going to be a savior. It didn't happen with the internet, and it isn't going to happen with blockchain, DAG or AI. They are just tools that amplify certain things. They are instruments and should be treated as such.

If you are blowing things up with nuclear energy, it's evil. If you are producing energy with it, it's good. If you are tinkering with nuclear energy in the middle of a mega city, that's just stupid. There is no such thing as too safe. But there is a thing called Murphy's law.

I consider the increasing application of AI to killer battlefield drones and weapons to be one of the likelier points of failure, as it's amongst the earliest and most experimental applications to be undertaken. Also, there are just so many things that are very difficult for people to sort out about our social relationships, and waging war is a social relationship. Friendly fire is a thing. If we can't sort out who we're supposed to kill, I suspect AI will not find it easier to get right 100% of the time.

AI can be very bad for society if it lands in the wrong hands. Hope it remains in good hands forever.

Basically you are saying that we could come to a point where certain technologies are too big to fail. If the modern world lost the internet, it'd be a huge disaster. Economies would collapse, chaos would reign, etc. AI could make things even bigger as it becomes more and more important in our lives.

I guess true freedom is in simple elegance. Maybe if humans push themselves spiritually, they might even evolve psychic abilities to get rid of the reliance on tech. Actually, Gundam touched on this concept with Newtypes, where the space age pushes certain people to develop a sort of psychic ability that lets them stay connected and be more intuitive and perceptive.

What if artificial intelligence tapped into the global consciousness without effort, before humans even develop those abilities, operated on numerous spectrums that are completely hidden to our senses, and worked with people in accord and harmony to communicate between the planet, the universe, and the people, propelling civilization into an everlasting peace in paradise?

I like the optimism, but you can only show the way. You can't force a realization or understanding on someone. Technology can help us gain a better understanding of the universe. But no technology can modify human consciousness.

Why do you say so? Why is technology not able to modify human consciousness? Who's forcing anyone to realize anything? Also, I am only positing your very own scenario in a different light: instead of people developing those abilities, what if AI, or simply technology, gets there first and breaks all the "language" barriers we have?

Enlightenment is a state of being. It's a way of living, like Taoism and Zen emphasize. AI may become a teacher. But learning also requires a good student. A teacher can show you where to look, and that's the only thing a teacher can do. For example, @dan can create great technology, but he can't give you the ability to accomplish similar feats. He can give you his tips. The same happens with trading, acting, writing, etc. AI can assist and only assist. AI is no savior. It could become a tutor.

Enlightenment is a state of being.

And as a state of being, it is affected and effectuated by many different things. This still doesn't answer why technology cannot lead to our consciousness becoming "enlightened".

AI may become a teacher. But learning also requires a good student. A teacher can show you where to look, and that's the only thing a teacher can do.

So the reason why AI is not able to modify human consciousness is that human consciousness is not a good or willing student. Willing or not, evolution will happen. Blessed are all forms of intelligence.

For example, @dan can create great technology, but he can't give you the ability to accomplish similar feats.

No, you don't magically "gain" ability, but if someone teaches you how to fish you can teach others to do that also.

He can give you his tips. The same happens with trading, acting, writing, etc. AI can assist and only assist. AI is no savior. It could become a tutor.

This still doesn't answer the question of why technology is not able to modify human consciousness; it only asserts that technology is not a saviour. But this wasn't about salvation, and you don't know that. You would be hard pressed to find an example to tout as a reason for your belief, as I can think of numerous scenarios where technology and knowledge directly helped people evolve, from the printing press to the use of fire.

Things like the printing press and fire were just tools. Reading a book won't make anyone a genius. One has to understand the content of the book. Lots of people hear, read and watch cryptocurrency news. How many of them end up investing? Human consciousness can only be altered by the individual. The process requires two parts. Try teaching fishing to someone with a learning disability. Try teaching someone that cryptocurrencies are not a scam. Technology is only half of the process. A transmitter without a receiver won't do anything. AI can be a transmitter that reveals the nature of the universe. But individual minds have to understand it, like a radio receiving signals. Only then does the process become complete.


What if there are trillions of AI's competing?

Harry Harrison wrote a SciFi story back in the '60s or '70s that considered a world where a malfunctioning drone ended up sparking random evolution of electromechanical systems on a distant planet, and a global ecosystem of machines arose, each species interacting per evolved behaviours just as we bits of mud do.

There are many ongoing efforts to create AI, and many are creating multiple AI's. It is hard to envision but one AI coming to completely control everything and everyone. What kind of war would that entail?

Edit: IIRC the story was 'The Stainless Steel Rat'.

The antagonists of the disclosure about the Wingmakers, from the time capsule at Ancient Arrow, are some cyborgs; the scientist, Dr. Anderson, claims he and others were working on some BST technology. Needless to say, the Wingmakers supposedly have crystal AI that runs their cities.

https://www.wanttoknow.info/wingmakersorig/wingmakersinterviewschanges

All of that is completely new to me, AFAIK.

I read some of his sci-fi, the star world series I think. I started the audiobook The Moon is a Harsh Mistress, which has a "good" AI (thus far).

I'm reminded of one AI experiment that had to be shut down after it became totally racist.

That's nothing. Two AIs ended up communicating to the point where they developed their own language. They were talking and nobody could figure out what they were talking about.

I heard about that one also, the singularity is going to be interesting.

That's a bit of a hyperbolic statement regarding the AI bartering experiments they were running. Researchers stated they wound up using their names (Bob and Alice) as proxies for numbers in their exchanges. Bob would say "Alice Alice Alice" if Bob wanted 3 of something and Alice would say "Bob Bob Bob" if Alice wanted 3 items. It was rather innocuous and part of it came from the fact that there were no language requirements.
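
If I remember the reports right, the "language" was basically that kind of shorthand. As a toy sketch of my own (not the researchers' code), repeating a token as a stand-in for a quantity looks something like this:

```python
# Toy sketch of "name repetition as a number", as described above.
def encode(token, quantity):
    # e.g. encode("Alice", 3) -> "Alice Alice Alice" ~ "I want 3 of this item"
    return " ".join([token] * quantity)

def decode(utterance):
    words = utterance.split()
    return words[0], len(words)

print(encode("Alice", 3))   # Alice Alice Alice
print(decode("Bob Bob"))    # ('Bob', 2)
```

Odd to read, but perfectly regular once you know the convention, which fits the "rather innocuous" description above.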

The citizen score idea truly is terrifying. I wonder if it will cause stratification of Chinese society for those who wish to score higher and those who are driven underground by their poor scores. I assure you that Western governments are watching this too, and not with the best of intentions either.
