Social Media Algorithms are Biased Against Conservatives

in news •  6 years ago 


Social media giants like Facebook, Twitter, and Google are under great scrutiny at the moment, due to the bias in their algorithms. They assert that they're "cracking down on fake news," but the results of their actions accomplish no such thing. Instead, independent journalists, conservative outlets, and smaller media groups find themselves losing 50-75% of their reach when Facebook or Twitter "updates" their algorithms. Companies like CNN and BuzzFeed may benefit, but the bias against conservatives and independent journalists only grows. They hide behind claims of "impartial" algorithms — but are the algorithms really impartial?

Many within these companies would tell you no. But after doing so, they often find themselves having to delete their posts explaining how their company's bias affects their systems and decisions. Speaking out about these companies' misdeeds can cost you your job, as we learned from James Damore.

The fact is, Twitter, Facebook, and Google all suppress conservative views and opinions under the guise of preventing abuse and spam. The algorithms they implement to suppress this content are designed by biased programmers, trained with data provided by biased employees, and reinforced by a culture convinced it is doing what is best for everyone. The hubris is simply astonishing.

Speaking Up

In a now-deleted comment on Reddit, an employee of one of these companies explained how conservative views are censored. He stated that they have an algorithm designed to suppress abusive and spammy messages, but it doesn't actually do that.

To train the algorithm, the company uses a small group of employees from its Bay Area office to evaluate content and decide whether or not it is abusive. The evaluators are given no clear definition of abuse; the only guidance is to "mark something as abusive if you think it doesn't belong on the platform." Given the distinct scarcity of conservatives and Trump supporters throughout the Bay Area and Silicon Valley, these evaluators are undoubtedly biased, and the training data they produce is biased along with them.
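The comment does not describe the actual system, but the basic shape of such a pipeline is easy to illustrate. The following is a minimal sketch, assuming a simple bag-of-words text classifier; every name and data point here is invented for illustration. The point is that whatever the human evaluators label becomes the ground truth the model reproduces at scale.

```python
# Hypothetical sketch: training an "abuse" classifier from human-provided labels.
# None of these names come from the source; this only illustrates how labeler
# bias flows directly into the model through the training data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each post was shown to an in-house evaluator who marked it 1 ("abusive")
# or 0 ("fine") with no detailed guidelines; whatever the evaluators consider
# abusive becomes the ground truth the model learns.
posts = [
    "example post text one",
    "example post text two",
    "example post text three",
    "example post text four",
]
evaluator_labels = [0, 1, 0, 1]  # produced by the human raters

# A simple text classifier fit to those labels; any systematic skew in the
# labels is reproduced, and amplified at scale, by the model.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, evaluator_labels)

# At serving time the model scores new posts; high-scoring posts get suppressed.
abuse_scores = model.predict_proba(["a new post to score"])[:, 1]
```

Nothing in a pipeline like this corrects for who did the labeling: if the evaluators systematically mark one side's content as abusive, the classifier learns to do the same.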

To the company's credit, the user writes, it did spend time trying to determine whether or not its algorithms were biased. But determining bias is difficult; there are many mutually contradictory definitions of fairness in artificial intelligence. The researchers examining the potentially biased algorithms settled on "cross-group calibration," which compares the algorithm's predicted probability that a user is abusive to the rate at which that user's posts actually were abusive.
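In concrete terms, a calibration check like the one described groups users by political affiliation, bins them by predicted abuse probability, and compares the model's average prediction in each bin to the abuse rate actually observed there. The sketch below is hypothetical; the function name, inputs, and binning scheme are assumptions, not the company's actual analysis.

```python
# Hypothetical sketch of a cross-group calibration check: within each group,
# compare the model's predicted probability of abuse to the rate actually
# observed. A well-calibrated model shows roughly matching values in every
# bin for every group.
import numpy as np

def calibration_by_group(predicted, observed, groups, bins=10):
    """predicted: model abuse probability per user (0..1)
    observed:  1 if the user's posts actually were abusive, else 0
    groups:    group label per user (e.g. "liberal" / "conservative")"""
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    groups = np.asarray(groups)
    edges = np.linspace(0.0, 1.0, bins + 1)
    report = {}
    for g in np.unique(groups):
        mask = groups == g
        rows = []
        for lo, hi in zip(edges[:-1], edges[1:]):
            in_bin = mask & (predicted >= lo) & (predicted < hi)
            if in_bin.any():
                rows.append((lo, hi,
                             predicted[in_bin].mean(),   # what the model claimed
                             observed[in_bin].mean()))   # what actually happened
        report[g] = rows
    return report
```

For a well-calibrated model, the predicted and observed columns roughly match in every bin for every group; the gap described next is exactly what this kind of report would expose.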

The outcome of this study showed significant bias. For liberal users predicted to have a high probability of being abusive, the observed data matched the prediction. For conservative users predicted to have a high probability of being abusive, the observed rate was much lower than the prediction.

In other words, the algorithm used to suppress "abusive" content is biased: it gives conservatives a much higher abuse score than their actual behaviour justifies. The researchers also spot-checked the model by looking at popular content that the algorithm had suppressed on a specific controversial political topic. More than 90% of the suppressed posts were false positives; completely innocuous content posted by conservative users was being flagged as abusive.
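That spot-check amounts to measuring the false-positive rate among the posts the model actually suppressed. Here is a minimal, hypothetical sketch, assuming each suppressed post has been manually re-reviewed; the field names are invented for illustration.

```python
# Hypothetical sketch of the spot-check described above: take posts the model
# suppressed on one topic, have them manually re-reviewed, and measure how
# many of the suppressions were false positives.
def false_positive_rate(suppressed_posts):
    """suppressed_posts: list of dicts like
    {"text": ..., "actually_abusive": True or False}  (from manual re-review)"""
    if not suppressed_posts:
        return 0.0
    false_positives = sum(1 for p in suppressed_posts if not p["actually_abusive"])
    return false_positives / len(suppressed_posts)

# A value above 0.9 would mean more than 90% of the suppressed posts were
# innocuous content wrongly flagged as abusive.
```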

Officially Untrustworthy

The results of these inquiries into bias were suppressed, however. FAQs released inside the company distort the facts to assert that the algorithms aren't biased, and the researchers determined the algorithms to be "working as intended." If they are working as intended, then it is clearly the goal of these companies to suppress anyone who disagrees with their politics.

The comment doesn't only address bias within the algorithms; it also covers the bias inherent at every level of employment within the company. Executives commonly address all of their employees after controversial news about the company breaks, and the topic of suppressing conservative views was no different. But in doing so, they lied to their own employees.

An executive at the company recently made statements claiming the company does not censor conservative views. Many of those statements were not true, and some could only be true if you redefine words in ways that no one does. The best analogy is a company announcing that "we don't discriminate on the basis of race" while in practice discriminating on the basis of skin melanin.

I've talked to many people in the company, including some on the team involved with this, and they all agree that the official statements made were not true. This is especially bad since the company recently came up with new corporate principles and "trust" is one of them.

Company-wide Hostility

Companies like Google and Twitter are openly hostile to conservatives and Trump supporters, but what is less well known is that they are hostile to anyone who so much as disagrees with their policies. Employees often state that they've never even met a Trump supporter, and think the military should stage a coup to end his presidency. Hundreds have signed petitions demanding the banning of "Nazis," which the petition defined as anyone in the "alt-right." These employees do not see conservatives as a group with different ideas about how to improve their country; they see them as pure evil, as people trying to "deny their right to exist."

The user who wrote the now-deleted comment fears speaking up against these biases, because he would lose his job in much the same way James Damore did. But this information needs to be made public. For all the fear of foreign entities influencing our elections, they are nothing compared to the dangers we face at home.

These social media giants are secretly suppressing conservative users, independent journalists, and others. Controlling the narrative in this way limits open discussion of important and sensitive topics and poses a threat to elections. We cannot allow these companies to keep lying about the bias inherent in their algorithms and hiding behind excuses. In the words of the user who wrote the now-deleted post, "democracy suffers when one of [these] sources suppress one side of a debate."

The original comment can be viewed here:

User rperryd's deleted post regarding algorithm bias


This article was originally published at: https://sevvie.ltd/politics/social-media-algorithms-biased-against-conservatives/

If Google's algorithm was allowed to vote in presidential elections, it would vote democrat every single time. Unbiased my ass...

Thanks for these insights.

Sad to say, I'm not surprised that censorship is systematically implemented in social media. But it's interesting to read how it's done.
