Elsa from Frozen with a machine gun. Paw Patrol characters visiting a strip joint. Bleeding children pranked by their parents to win subscribers. Beneath the official Disney and Nickelodeon videos uploaded to entertain and distract children, these are the curios lying in wait, gaming Google's algorithms to play automatically as soon as the last video finishes. Google is finally cracking down on such YouTube horrors, but why did it take so long for it to notice?
You don't need to spend much time on YouTube to see that the videos produced for kids are weird. Millions of views go to Kinder Surprise egg unwrappings, nursery rhymes overlaid on footage from video games, and anything starring other children. Many of them hijack key search terms, such as "Peppa Pig" and "Paw Patrol", in order to play automatically after official content.
Some videos are simply knock-offs of popular cartoons to grab lucrative traffic, but others — as noted in a now-viral Medium post by writer James Bridle and a report in The New York Times — are designed not only for commercial gain but apparently to terrify and traumatise. Those reports describe weirdly violent mashups in which Peppa Pig eats her father and ice-princess Elsa totes a gun, which have snuck past the automated filters on YouTube Kids, intended to be a curated, safe app for children. While some are undoubtedly the work of messageboard trolls, most are genuine.
Every day, one billion hours of video are watched on YouTube. And yet, when it came to children's videos, it turns out nobody was actually watching. Algorithms were tasked with filtering for appropriate content, but they clearly weren't up to the job. Two and a half weeks after Bridle wrote his post, Google finally addressed the problem, with Johanna Wright, vice president of product management at YouTube, claiming the firm had "noticed a growing trend around content on YouTube that attempts to pass as family-friendly, but is clearly not". But that trend has been growing for years, not weeks or months.
Google frequently misses problems on its platforms, whether it's YouTube, Google News, or search. "This is part of a much larger story that's coming out with fake news," says Frank Pasquale, professor of law at the University of Maryland. "I'm just so glad to see it all exposed, because it was clear to me years ago that you were going to have lots of manipulation and exploitation of this platform by people that are going to game this system."
It's clear that wasn't so obvious to Google, which he says is frequently a step behind those trying to game its systems — so much for Silicon Valley being the cutting edge of innovation. "Google has been fighting the last war for years," Pasquale says. "They certainly have a good web spam team and teams to target search engine optimisers who were scheming to get up the rankings for commercial advantage. But they didn't really realise how they could be gamed in these other ways."
It's not only Google, of course. Silicon Valley startups and tech giants have become reactive, responding to scandal after scandal with the bare minimum needed to make headlines go away. Google somehow didn't notice government ads were running alongside extremist material. And both Facebook and Twitter seemed bewildered that their automated monitoring tools had been gamed by Russian trolls and Nazis, respectively. The solutions offered to date amount to the smallest token gestures.
Solutions need not be complicated. Sonia Livingstone, professor of media and communications at the London School of Economics, notes that one simple answer could be for YouTube to "make an option so that people could turn off the mechanism that automatically plays its next choice for you, since content creators and bots are exploiting this". Right now, that's not possible.
Instead, in its response to the violent oddities lurking on YouTube's child-focused videos, Google has promised to remove ads from dodgy content targeting kids and block comments on videos featuring minors. Google will also be "doubling the number of Trusted Flaggers" that it tasks with cleaning up its content — but the first tool it's turning to is, as always, technology. "To help surface potentially violative content, we are applying machine learning technology and automated tools to quickly find and escalate for human review," Wright wrote in a blog post.
Automation can solve some of the problems social networks exacerbate. Google works with groups such as the Internet Watch Foundation to create hashes of child-abuse images in order to automatically ban them from the web. When it comes to copyright, a key component of YouTube’s revenue model, systems to detect certain content are hugely sophisticated. Content ID, which launched in 2007, now contains more than 50 million reference files – making it the most comprehensive copyright database in the world.
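To illustrate the general idea — and only the idea, not Google's actual pipeline — here is a minimal sketch of matching an upload against a blocklist of known-bad image hashes. It uses an ordinary SHA-256 digest for simplicity, whereas production systems rely on perceptual hashes that survive resizing and re-encoding; the single blocklist entry is just a placeholder digest.

```python
import hashlib

# Hypothetical blocklist of hex digests supplied by a partner such as the
# Internet Watch Foundation. Real systems use perceptual hashes (e.g. PhotoDNA)
# that tolerate cropping and re-encoding; SHA-256 here is purely illustrative.
BLOCKED_HASHES = {
    # Placeholder entry: this is the SHA-256 digest of the bytes b"test".
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_blocked(image_bytes: bytes) -> bool:
    """Return True if the image's digest appears on the blocklist."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in BLOCKED_HASHES

if __name__ == "__main__":
    sample = b"test"  # stand-in for a decoded upload
    print("blocked" if is_blocked(sample) else "allowed")  # prints "blocked"
```

The design is simple because the problem is narrow: the system only has to recognise content it has already seen, which is exactly why it works so well for known abuse imagery and copyrighted files, and so poorly for anything new.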
But current algorithmic approaches still aren't good enough to truly understand content, says Dr Ansgar Koene, senior research fellow at the University of Nottingham. "They work by correlating patterns within the content — such as the use of particular word combinations or image elements — that have previously been flagged by human content moderators as being violations of the platform content policies," he explains. "The algorithms are therefore incapable of detecting novel types of violations." For example, while algorithms are good at detecting copyrighted material, they trip up on "fair use" exceptions, he says, such as satire or education.
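A toy illustration of the limitation Koene describes, assuming a filter that only scores a video's metadata against word combinations moderators have flagged before — the patterns and weights below are invented for the example:

```python
# Toy pattern-matching filter: scores a video's metadata against word
# combinations human moderators have previously flagged. The terms and
# weights are made up for illustration.
FLAGGED_PATTERNS = {
    ("gun", "shooting"): 0.9,
    ("blood", "prank"): 0.8,
}

def violation_score(title: str, description: str) -> float:
    """Return the highest weight of any previously flagged pattern present."""
    text = f"{title} {description}".lower()
    score = 0.0
    for words, weight in FLAGGED_PATTERNS.items():
        if all(word in text for word in words):
            score = max(score, weight)
    return score

# A novel violation with innocuous metadata sails straight through:
print(violation_score("Elsa sings a happy song", "Family fun for kids!"))  # 0.0
```

Because the filter only recognises combinations it has been shown before, a video whose title and thumbnail look wholesome scores zero, however disturbing the footage itself may be.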
In this case, the algorithm can't seem to tell the difference between Elsa singing a happy song and Elsa brandishing a gun — so human moderators may be necessary. "They need a lot more human oversight of their systems," Pasquale says. "That's something they're really going to have to think seriously about." Google has said it is doubling YouTube's Trusted Flaggers scheme, but it hasn't given specific numbers.
But approving the deluge of content uploaded to YouTube every minute is nigh-on impossible. "We have to accept that under the current model of rapid, instant publishing, content moderation will never be completely perfect," Koene says. "If we really want to block all content that violates the platform rules, then we would have to move to a model where platform users submit content they want to publish to an editor for approval, as we do when publishing in journals. This would transform the current Web 2.0 platforms into traditional media channels."
But it may not need to come to that. Perhaps, to start, human moderators could review only those videos that pass a certain threshold of views, or those targeted at children. Plus, as Livingstone notes, whenever the pressure is on, such companies find a way to improve, if not perfect, moderation without breaking their business models. "Whenever governments or media make a fuss, the companies invest in more human moderation, so more must be possible," she says.
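A rough sketch of what such a triage rule could look like — the view threshold, the field names and the targets_children flag are all chosen arbitrarily for illustration, not drawn from any real YouTube system:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Video:
    title: str
    views: int
    targets_children: bool  # e.g. surfaced in a kids' app or tagged for children

# Hypothetical triage rule: queue a video for human review once it crosses a
# view threshold, or immediately if it is aimed at children.
VIEW_THRESHOLD = 100_000

def review_queue(videos: List[Video]) -> List[Video]:
    flagged = [v for v in videos
               if v.targets_children or v.views >= VIEW_THRESHOLD]
    # Most-watched first, so limited moderator time covers the largest audiences.
    return sorted(flagged, key=lambda v: v.views, reverse=True)

queue = review_queue([
    Video("Elsa sings a happy song", views=2_400_000, targets_children=True),
    Video("Obscure vlog #412", views=130, targets_children=False),
])
print([v.title for v in queue])  # only the high-reach, child-targeted video
```

The point of a rule like this isn't perfection; it's that a small pool of human reviewers can be pointed at the videos with the biggest potential audience of children first.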
Even if human oversight is onerous, that's no reason not to do it. "We wouldn't allow a car company or manufacturer to put out a really dangerous product," notes Pasquale. Imagine the uproar if a carmaker refused to run crash tests on the grounds of cost; it would be forced to by safety regulations. "Part of the profits of these companies is derived from negligence and recklessness," Pasquale argues. "They've been skating on this idea that the internet is this law-free zone for a decade or longer, and that's at the foundation of why they have so much money and power."
Avoiding the cost of monitoring its platforms isn't the only benefit to Google's bottom line – the questionable content itself also makes money. As Livingstone notes, it's Google's own business model that makes it "profitable to make weird and nasty mashups with kids' characters".
Given that the entire point of web platforms is to monetise the content we upload, it's remarkable that Google hasn't been more careful when it comes to children. "The more that platforms provide for children, make decisions in relation to child safeguarding and, especially, begin to manage content on their services, the more important it is to ensure transparent and accountable processes," says Livingstone, calling for better guidance, consultation with the public, better complaint mechanisms, and "appropriate human moderation resources".
In the meantime, Livingstone recommends keeping an eye on children's YouTube binges. "It depends on the age and resilience or vulnerability of the child, of course, but the best advice is occasionally to share an interest with your child on YouTube," she says. "Don't always look over their shoulder, or check up on them secretly," but watch with them to see how they go about using the app and how they react to what they view. And make sure to turn on restricted mode for some basic protections.
Because while it's Google's responsibility to do better, at this rate, your toddler may well be a teenager by the time Silicon Valley admits it's time to hire human moderators to make up for algorithmic failures.
This article was originally published by WIRED UK