Mind Control By Algorithm - Living In Polarised Silos Of Isolation



Most of the paranoia surrounding AI concerns humanoid robots coming to life and killing us all. Either that, or a SkyNet military command system working out that we're a waste of space and nuking the whole goddamn planet.

However, there is a much bigger threat posed by AI, one that almost nobody is paying attention to. The unique thing about this particular threat is that it is real, and it is happening right now, as you read this.

The threat is the polarisation of the human race, brought on by social algorithms. This may or may not sound like a huge problem to you, but the facts are there to be examined, and once we analyse them they leave us in no doubt: we should be afraid; we should be very afraid.

Dawn Of The Algorithm


Algorithms have been in use for centuries; from ancient Greece to medieval England, they have been used to solve complex problems.

Put simply, an algorithm is a set of instructions that can be applied to a whole class of similar problems.

For instance, I may have a box of coloured beads that I need you to sort by size. So at the beginning of the task I give you a set of instructions, which you can happily call an algorithm.

The instructions may look like this:

Take the first bead from any particular handful (set) you analyse and place it on the display mat.

If the next bead you pick is bigger than the one you just laid down, place it to its right; if it is smaller, place it to its left, working along the row until it sits between a smaller bead and a larger one.

Stop when there are no more beads left to sort.

Or maybe I just want to sort out the yellow beads from the rest, in which case I would give you an algorithm that looked more like this:

If bead is yellow place in box number one.
If bead is not yellow place in trash.
Stop when there are no more beads left in original box.
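To make this concrete, here is a minimal Python sketch of those two bead algorithms (the bead sizes and colours are purely illustrative):

```python
def sort_by_size(beads):
    """Lay each bead on the mat so that sizes increase from left to right."""
    mat = []
    for bead in beads:
        i = len(mat)
        while i > 0 and mat[i - 1] > bead:  # walk left past any larger beads
            i -= 1
        mat.insert(i, bead)
    return mat


def keep_yellow(beads):
    """Put yellow beads in box number one; everything else goes in the trash."""
    box_one, trash = [], []
    for bead in beads:
        (box_one if bead == "yellow" else trash).append(bead)
    return box_one


print(sort_by_size([5, 2, 9, 4]))                        # [2, 4, 5, 9]
print(keep_yellow(["yellow", "red", "yellow", "blue"]))  # ['yellow', 'yellow']
```

Neither function knows anything special about beads; hand either one any list and it follows the same steps, which is exactly what makes an algorithm reusable.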

OK, so that's basically it: an algorithm is a set of instructions that helps because it codifies a potentially large set of actions.

For instance, if I didn't understand algorithms, then the first statement may have looked more like this:

I want you to take each bead and measure it with a ruler, then basically work out which is the biggest and which is the smallest, and lay them out in order so I can see them progress in size.

The difference between the two statements is that with the second, the resulting calculations (made by you) are much more difficult and time consuming, whereas the algorithm frees you up to make certain assumptions about whatever set you happen to be working on. This is the essence of the computer algorithm: it frees up the computer's memory to make more calculations at a faster pace.

So now we know what they are, perhaps we can explore how algorithms, which in themselves do not possess any real intelligence, are running our lives by amplifying our beliefs and, to some extent, dictating what those beliefs are.

The Smart Algo Era


Have you ever wondered what influences and drives people with whom you completely disagree? For instance, if you're a conservative, do you ever wonder why somebody would favour a liberal way of thinking over yours?

That ability to see into the other side is crucial to human development; even when we disagree with each other, progress tends to be made when concessions are agreed upon and compromises are reached on both sides.

However, in the era of the artificially intelligent algorithm, it is getting harder and harder to do this, which means we are becoming more and more polarised as time goes on.

The difference between the rather crude, basic algorithm I presented above and the AI algorithms of today is that the latter can learn. That is to say, they compare feedback data with the computations they are making and adjust their behaviour accordingly.

Let's return for a second to the algorithm I wrote for you above, whereby you are tasked with separating all the yellow beads from a box. At first you are picking out beads one by one; however, you then notice it is more efficient to pick out handfuls.

After a while you realise that it is even more efficient to pick out all the beads that are furthest from yellow in colour and discard those. That leaves you with a bunch of yellow, orange, amber and red beads. You then notice that the orange beads are very close in colour to the yellow beads, so you decide that if I like yellow beads, there is a high probability that I will like orange beads.

So you make a decision to chuck all the orange beads into the box with the yellow ones. A few hours pass and I come back to check on your work. I see that you have decided to put the orange beads in with the yellow ones, and I nod my approval.

That's enough for you: I have signified that I like orange beads because they are similar to yellow beads. So you now start to look for all the beads that are similar to orange beads.

Do you see the logic?

User likes yellow beads > may like orange beads > confirmed > user likes orange beads > may like red beads ....
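A minimal sketch of that loop in Python (the spectrum and the one-step rule are invented purely for illustration):

```python
# Each approved guess becomes the new baseline, so the recommendation
# only ever moves one way along the spectrum.
SPECTRUM = ["yellow", "orange", "amber", "red"]

def next_suggestion(colour):
    """Suggest the neighbouring colour, one step along the spectrum."""
    i = SPECTRUM.index(colour)
    return SPECTRUM[min(i + 1, len(SPECTRUM) - 1)]

liked = "yellow"
for _ in range(3):
    suggestion = next_suggestion(liked)
    print(f"user likes {liked} -> try {suggestion}")
    liked = suggestion  # approval confirmed; the suggestion becomes the new baseline

# user likes yellow -> try orange
# user likes orange -> try amber
# user likes amber -> try red
```

Notice there is nothing in the loop that ever steps back towards yellow; confirmation only pushes the suggestions onward.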

This is a rather crude analogy for how huge learning algorithms, like the ones used by Google, Amazon and Facebook, work.

These algorithms were not implemented for nefarious reasons, but for the rather simpler purpose of getting people to buy more stuff. Finding out that people who buy a particular brand of audio speaker are also interested in certain types of music, or other audio equipment, is extremely valuable when you're trying to sell your products.

So, nothing sinister there then; all hunky dory, move on, nothing to worry about here, just algorithms trying to find more and more semantic links between seemingly jumbled information . . .

Hmmm, well, the thing is, we are now in the information age. Within capitalism, the prime mover was (and is) capital, otherwise referred to as money. Now, however, the prime mover is attention. Money is still important, of course; it is still the main way in which we transfer value from one place to another. But the corporate behemoths now realise that our attention is what they need first, in order to part us with our cash.

As Tony Montana once said in the seminal film Scarface:

First you get the attention, then you get the power, then you get the women.

I may be paraphrasing there, but you get the point!

How Deep Are We In The Hole?


So our attention has become a currency that the corporations crave; how best to keep our attention?

Well, if I'm Google or Facebook, then I just want to feed you stuff you like to read and watch. So I'm going to set an algorithm to learn what you like, then cross-reference that with what everyone else likes and see if it comes up with a pattern.
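That cross-referencing step is essentially what is known as collaborative filtering. A bare-bones sketch of the idea, with entirely hypothetical users and likes:

```python
# Recommend items liked by users whose tastes overlap with yours.
likes = {
    "alice": {"cat videos", "candidate X speeches"},
    "bob":   {"cat videos", "candidate X speeches", "gadget reviews"},
    "carol": {"gardening tips"},
}

def recommend_for(user):
    """Suggest items liked by anyone who shares at least one like with `user`."""
    mine = likes[user]
    suggestions = set()
    for other, theirs in likes.items():
        if other != user and mine & theirs:  # overlapping taste
            suggestions |= theirs - mine
    return suggestions

print(recommend_for("alice"))  # {'gadget reviews'}
```

Real systems weight the overlap rather than treating it as all or nothing, but the principle is the same: the more your likes resemble someone else's, the more of their diet you are served.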

Mmmkay, that's fine. Ah, but wait: the algorithm will do this without emotion or judgement. It simply has a task to do, a never-ending task of finding content that keeps you glued to the site. It does not know the difference between a story about a cat and one about a presidential candidate; all it sees is that you like them both.

When it comes to politics, it will feed you what you like and discard what you don't. This is fine if all you're trying to do is gather up a bunch of beads; however, when you're thinking about political issues, it is not so healthy to completely shut out opposing views. It is much better to be aware of them, so that hopefully the competing ideologies on both sides can meet somewhere in the middle.

So, back to our bead-sorting mission: you have decided that I like orange beads, and this has led you to make other choices based on my response. At no point do you even give me a chance to see a green or blue bead, as they are too far away from my colour of choice.

When this principle is applied to politics, we are pushed further and further away from the centre, and thus the distance between us grows.

According to the Pew Research Center, the gap between Republicans and Democrats in the United States is growing. Put in simple terms, there are fewer and fewer issues that both sides agree on.

You may think that this is just a natural political by-product: of course Republicans and Democrats don't see eye to eye; that's why they're in different political parties. However, when you really drill down into what it means, it is that more and more people within each party are moving away from the centre.

This is the essence of polarisation: groups of people sitting at opposite ends of the spectrum, unable to come to any kind of agreement by consensus.

This leaves fewer and fewer people in the middle, the people who realise that in order to get things done politically, there needs to be compromise.

Compromise in American politics is very important; at a very simple level, if a Senator wants to get a bill passed, then he or she needs support from both sides. Without an ethos of compromise this is impossible.

It is important to take a moment and reflect on the fact that this polarisation has accelerated since around 2011, roughly the time that learning algorithms began to be widely deployed by the big online platforms.

Note:

I know there is a lot more to American politics than this, however for the purposes of this essay I shall stick to this oversimplified model.

Stuck In An Algorithmic Loop


So, with all that being said, what part are Google, Facebook, et al. playing in the political malaise we find ourselves in?

To answer that question, let us look again at our simple algorithm and task.

Imagine that another person comes to you and says that she would like you to sort all of the green beads out because those are her favourites.

Starting from the point of green, you use the same algorithm that I gave you, in conjunction with the extra knowledge you've gained through my feedback. So you start sorting green beads, and all the beads that are similar to green.

After a few weeks of sorting beads for the two of us (this is an infinite box of beads), I am actually left with more orange beads, as I have decided that yellow is a bit wishy-washy and orange is the kind of colour I can really identify with.

My counterpart on the other hand, is left with more aqua beads, as she feels that aqua is a colour that truly represents who she is.

We both give you feedback; I say to you that whilst I still like the yellow beads, I absolutely love the orange ones.

The second person says to you, that she still likes green, however she totally adores the aqua beads you've chosen for her.

You process this information, and decide to give me more beads that are similar to orange, which moves me to red. You also decide to give my counterpart more beads that are similar to aqua, which moves her over to blue.

Neither of us has said we don't want yellow or green beads anymore; however, we are not complaining at seeing less of those colours, because we really love the new ones you're choosing for us.


Suddenly there's a rush on yellow and green beads and thousands of people a day are coming to you to sort beads for them (you have an amazing capacity for work without fatigue). You use what you have learned from the two original bead collectors and you apply it to everyone else.

You find that if you avoid the colours people are not so keen on, or are merely lukewarm about, customer satisfaction is much higher. This reinforces your policy of giving people beads within their preferred spectrum, which in this case starts at either yellow or green. However, you have learned that people who like yellow beads really love orange ones, and in a lot of cases that leads to red.

You have also learned that people who like green beads really dig aqua beads, which in many cases leads to blue. So you start to make assumptions based on that data: you start the yellow lovers off on orange, and the green lovers on aqua. The positive reinforcement you get from your customers pushes you further and further into a feedback loop.

What do you think would happen if your customers grew from thousands, to millions, to billions?

Exactly: soon there would be hardly anyone left in the world who even got to see yellow or green beads. In fact, before long the only beads you handed out would be red and blue.
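A toy simulation of that drift might look like this (again, the spectrum and the step rule are invented for illustration):

```python
# Two users start near the middle of a colour spectrum; each round the
# recommender offers the next colour outward, and every accepted
# suggestion becomes the new preference.
SPECTRUM = ["blue", "aqua", "green", "yellow", "orange", "red"]

def step_away_from_centre(colour):
    """Recommend the neighbouring colour one step further from the centre."""
    i = SPECTRUM.index(colour)
    centre = (len(SPECTRUM) - 1) / 2
    if i > centre:
        return SPECTRUM[min(i + 1, len(SPECTRUM) - 1)]
    return SPECTRUM[max(i - 1, 0)]

user_a, user_b = "yellow", "green"  # both start one step from the middle
for round_no in range(1, 4):
    user_a = step_away_from_centre(user_a)
    user_b = step_away_from_centre(user_b)
    print(f"round {round_no}: A -> {user_a}, B -> {user_b}")

# round 1: A -> orange, B -> aqua
# round 2: A -> red, B -> blue
# round 3: A -> red, B -> blue  (both pinned at the extremes)
```

Once each user hits the end of the spectrum there is nowhere left to go, and nothing in the loop ever pulls them back towards the middle.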

This is a rather clumsy analogy for what is happening to us right now. Artificially intelligent algorithms that are designed to learn from our behaviour are making decisions as to what coloured beads we should receive.

That is fine as long as the beads represent products and services we might want to buy, or entertainment we may want to watch. However, when the beads represent our political views, it is not so fine.

Post Algorithmic Democracy


So where do we go from here? These algorithms are here to stay; Facebook made 11 billion dollars from advertising last year, in large part because of the algorithm it uses, which advertisers utilise when promoting their products.

We can wail and scream, shout and lament, but that will do no good whatsoever. We have to take back control of what we view on the internet.

Both you and I have to go back to the box of coloured beads and say: great, I love these beads you've chosen for me; however, now I'm going to take a look at what's left in the box.

How does this look in real life?

It means seeking out news sites with a bias that is opposite to your own. It means making searches with anonymous browsers and search engines other than Google. It means teaching your children the meaning of impartiality.

There is a growing awareness of the way learning algorithms are polarising our opinions. Join that awareness, and help spread this knowledge throughout human society.

If we do this, the algorithms will remain useful tools that we use, not masters that control our minds and dictate how we behave, who we talk to, and what we like and don't like.

This is a human-versus-AI fight that we cannot afford to lose; for if we do, we may lose forever the cooperative spirit that made the creation of these algorithms possible in the first place . . .

Further reading:

Political Polarization In The American Public: Pew Research Center

WHAT DO YOU THINK STEEMIANS? HAVE YOU NOTICED THE CONTROL THAT AI LEARNING ALGORITHMS EXERCISE OVER YOUR LIFE? DO YOU REGULARLY LOOK FOR OPINIONS THAT DON'T AGREE WITH YOURS, OR ARE YOU RETREATING TO EVER MORE DISTANT SILOS, SURROUNDING YOURSELF WITH EITHER RED OR BLUE BEADS OF OPINION?

AS EVER, LET ME KNOW BELOW!

Cryptogee


This is a fantastic post and the problem you've highlighted is very real! You stated it perfectly:

'This leaves fewer and fewer people in the middle, the people who realise that in order to get things done politically, there needs to be compromise'

I went to a big data meetup recently and the discussion was around the ethics of companies using our browser cookies (HTTP cookies) to tailor advertisements and media articles to what we want to see/hear, rather than showing things from various points of view.

I completely agree with you that it is worrisome, because the Internet has made it a lot easier to become an individual nowadays. We are no longer confined to our local community, so in essence we are able to become who we want, and really learn about all different aspects from a number of different standpoints.

However, like you said, due to AI and the big tech companies' exploitation of consumer behaviour, this individualism is going to become rather scarce if nothing is done to prevent it!

Once again, your article was an awesome read @cryptogee :)

great post @cryptogee

We've obviously got a speed reader here.

Cg

Maybe his/her/its algorithm says: (0) log onto "New" stream; (1) click on random article; (2) jump to the end; (3) type "great post", followed by @ and the user's name; (4) repeat step zero

Love the fruit loop comparison

Gotta love a fruit loop :-)

Cg

@cryptogee, very well written and informative. I think it's something about robot technology; I don't know much about what you've written, but I'm really impressed that you write very well and have natural creative skills.

Thank you for your kind words :-)

Cg


Great work

Great work @cryptogee. In my opinion, the biggest problem in the chain is our chosen ignorance out of convenience. We are more convinced by what the masses already think (and Google, Facebook et al. automatically represent the majority of humans) than by what our own thinking can produce. Hey, why should I reinvent the wheel? A solution is already there; all I need to do is stick with red or blue. Done. Again, the real issue is not our smart tools but our questionable consumer behaviour. Thanks a lot for your inspiring blog.

I have ALWAYS HATED feeling like a pawn to these algorithms!! I have an extremely sensitive radar for detecting when I'm being controlled, so intuitively I have literally never clicked on an ad online. But as you say, there's more to it than ads; it's also about the options presented when searching, and the only way around it is to hunt deeper. Time consuming, but I agree that polarization is downright dangerous: politically, culturally, economically, and socially.

You've done a great job explaining this @cryptogee, and as always I admire your agile and sophisticated thinking!

Wow amazing


Fascinating article @cryptogee! Definitely a major issue here, but also likely a problem that is inevitable and part of humanity's growing pains. I'm very curious to see how we will deal with this impending issue moving forward.

This article made me think of Cambridge Analytica, the company that was responsible for the success of Trump's social marketing campaign.

Very perceptive and good food for thought.

It is quite obvious when search engines deliberately try to steer users away from websites with "disapproved" POVs: "Debunker" websites are positioned at the top of the results list, the result of algorithm tinkering.