If you type words like “spider-man”, “elsa”, “hulk” or even “superheroes” into YouTube, it’s safe to assume that the majority of content the video site’s algorithm serves up is decent. Give the algorithm too much room to breathe, however, and you might fall down a disturbing, sometimes traumatizing hole of YouTube’s darker content, created by trolls and pranksters but geared toward millions of children.
A critical essay from writer James Bridle and a scathing report in The New York Times, both investigating how much questionable and traumatizing child-oriented programming sits on YouTube, brought those issues to light this week. In his lengthy, revelatory Medium post, Bridle uses a series of examples to show how YouTube’s system is failing millions of children who watch content their parents assume to be appropriate. The essay is worth reading in full, but here’s a snippet of the main issue he has with YouTube’s system:
To expose children to this content is abuse. We’re not talking about the debatable but undoubtedly real effects of film or videogame violence on teenagers, or the effects of pornography or extreme images on young minds, which were alluded to in my opening description of my own teenage internet use. Those are important debates, but they’re not what is being discussed here. What we’re talking about is very young children, effectively from birth, being deliberately targeted with content which will traumatise and disturb them, via networks which are extremely vulnerable to exactly this form of abuse. It’s not about trolls, but about a kind of violence inherent in the combination of digital systems and capitalist incentives. It’s down to that level of the metal.
This, I think, is my point: The system is complicit in the abuse.
Bridle focuses on kids’ programming, but the issue with YouTube has always been its algorithm. The technology built to create filtering systems competent enough to keep this type of harmful content at bay doesn’t work. It hasn’t worked for a long time. YouTube is aware of this.
The most contentious debate surrounding YouTube, among both media critics and members of its community, centers on the algorithm. Doctored news from unreliable sources has repeatedly become the most watched content on YouTube following mass shootings or other serious news events, as creators learn to game the algorithm. When YouTube tried to fix this, the Wall Street Journal pointed out that bad actors had learned to manipulate the “up next” algorithm to draw attention back to videos that promoted false narratives. YouTube later told the newspaper it was looking into ways to fix that, too.
On Nov. 1, Louisiana Sen. John Kennedy asked Richard Salgado, Google's director of law enforcement and information security, if the company considered itself to be a news outlet or a “neutral tech platform.” Salgado said Google viewed itself as a tech company, to which Kennedy quipped, “that’s what I thought you’d say.”
Kennedy grilled Salgado during a hearing examining Russian interference in the 2016 election, but the conversation, like Kennedy’s question, goes far beyond that. Had Salgado said that Google considers itself a media company, it would be responsible for the content that appears on its network. If Google is just a tech platform, however, it’s not. So long as Google isn’t a definable media company, it doesn’t have to claim ownership of anything that appears on any of its sites.
The problem is that Google is a media company, and the dangers of operating under that umbrella can be seen clear as day on one of its biggest properties, YouTube.
YouTube is facing the same problems as Twitter and Facebook: content management and a lack of oversight. These companies don’t want to be responsible for the dark forces using their platforms to spread harmful messages or disturbing content. They’re also too big to vet every single tweet, Facebook post or video.
In an interview with Recode’s Kara Swisher, YouTube CEO Susan Wojcicki said Google was an information company. Google’s mission, Wojcicki said, is to “organize the world’s information,” and video is “a really important type of information.” But if it’s the company’s mission to organize information, should it not be responsible for the misinformation that does nothing to serve that audience? Misinformation that is more harmful than anything else?
Wojcicki wants to do better, but as she told a group of journalists and advertisers in New York City earlier this year, it’s hard to do that because of YouTube’s size. Here are some of the numbers Wojcicki relayed to potential advertisers, as reported by The Hollywood Reporter.
There are 75 percent more channels with over one million subscribers than there were last year, and watch time of TV channels on YouTube has grown by 50 percent.
Wojcicki apologized for letting people down, promising “we can, and we will, do better,” but that didn’t mean an extensive increase to the moderation team; it meant adjusting the algorithm. For a while, that meant demonetizing millions of videos, revoking certain creators’ ability to include external links in end slates and taking other measures to try to clean up the website for advertisers. But none of this did anything to fix the problem that’s been plaguing the site.
The algorithm has been ripped apart, analyzed and beaten up by those who want to ensure their videos are seen. In the worst cases, these are videos that promote hateful content and spread misinformation right under YouTube’s nose. In other cases, which may not appear to be as big a problem but are indicative of the site’s issues at large, children trying to watch a gentle video of a unicorn may stumble upon a 4chan joke that features the same unicorn getting its head chopped off, or spliced-in porn that flies under the algorithm’s radar.
YouTube’s executives have put rules in place to try to prevent this type of behavior. Certain entities, like Marvel’s superheroes, Disney’s Mickey Mouse family and Pixar characters, obviously aren’t supposed to be manipulated for these purposes. Although YouTube doesn’t outright ban these types of videos, an upload that touches on the criteria below is considered unfriendly for advertisers:
Videos depicting family entertainment characters or content, whether animated or live action, engaged in violent, sexual, vile, or otherwise inappropriate behavior, even if done for comedic or satirical purposes, are not suitable for advertising.
The growing concern is that YouTube can’t just rely on its algorithm to catch the worst offenders; some are going to slip by. This isn’t just one or two people who have managed to scam the system, either, but thousands. It’s not enough to hope the ever-changing algorithm will eventually catch up to the kind of governance the platform needs. As the 4chan campaign that spliced pornography into children’s shows and gamed the algorithm to ensure the videos were monetized and seen makes clear, there are countless ways people can rig YouTube’s algorithm to their advantage.
Whenever I write about these kinds of stories, I think back to Select All’s Max Read, who has written most eloquently about the subject. Speaking of similar problems on Twitter and Facebook, Read invoked the idea that “if you lead a revolution, at some point you’re going to have to govern.” He also made another interesting point about the new world we live in, one overrun by DIY content, and it’s something that sums up the YouTube scenario best.
“Now that the world is changed, it’s up to them to ensure that the best values of the past endure,” Read wrote. “They could start by at least acknowledging the problem.”
For YouTube, that acknowledgment means admitting that the algorithm it has in place is, and has been for a while, deeply flawed.