Dozens of YouTube Channels Are Showing AI-Generated Cartoon Gore and Fetish Content

A WIRED investigation found that dozens of YouTube channels are using generative AI to depict cartoon cats and minions being beaten, starved, and sexualized—sparking fears of a new Elsagate wave.
Photo-illustration: Jacqui VanLiew; Getty Images

Somewhere in an animated New York, a minion slips and tumbles down a sewer. As a wave of radioactive green slime envelops him, his body begins to transform—limbs mutating, rows of bloody fangs emerging—until his globular, wormlike form slithers menacingly across the screen.

“Beware the minion in the night, a shadow soul no end in sight,” an AI-sounding narrator sings, as the monstrous creature, now lurking in a swimming pool, sneaks up behind a screaming child before crunching them, mercilessly, between its teeth.

Upon clicking through to the video’s owner, though, it’s a different story. “Welcome to Go Cat—a fun and exciting YouTube channel for kids!” the channel’s description announces to 24,500 subscribers and more than 7 million viewers. “Every episode is filled with imagination, colorful animation, and a surprising story of transformation waiting to unfold. Whether it’s a funny accident or a spooky glitch, each video brings a fresh new story of transformation for kids to enjoy!”

Go Cat’s purportedly child-friendly content is visceral, surreal—almost verging on body horror. Its themes feel eerily reminiscent of what, in 2017, became known as Elsagate, where hundreds of thousands of videos emerged on YouTube depicting children’s characters like Elsa from Frozen, Spider-Man, and Peppa Pig involved in perilous, sexual, and abusive situations. By manipulating the platform’s algorithms, these videos were able to appear on YouTube’s dedicated Kids’ app—preying on children’s curiosities to farm thousands of clicks for cash. In its attempts to eradicate the problem, YouTube removed ads on over 2 million videos, deleted more than 150,000, and terminated 270 accounts. Though subsequent investigations by WIRED revealed that similar channels—some containing sexual and scatological depictions of Minecraft avatars—continued to appear on YouTube’s Topic page, Elsagate’s reach had been noticeably quelled.

Then came AI. The ability to enter (and circumvent) generative AI prompts, paired with an influx of tutorials on how to monetize children’s content, means that creating these bizarre and macabre videos has become not just easy but lucrative. Go Cat is just one of many that appeared when WIRED searched for terms as innocuous as “minions,” “Thomas the Tank Engine,” and “cute cats.” Many involve Elsagate staples like pregnant, lingerie-clad versions of Elsa and Anna, but minions are another big hitter, as are animated cats and kittens.

In response to WIRED’s request for comment, YouTube says it “terminated two flagged channels for violating our Terms of Service” and is suspending the monetization of three other channels.

“A number of videos have also been removed for violating our Child Safety policy,” a YouTube spokesperson says. “As always, all content uploaded to YouTube is subject to our Community Guidelines and quality principles for kids—regardless of how it’s generated.”

When asked what policies are in place to prevent banned users from simply opening up a new channel, YouTube stated that doing so would be against its Terms of Service and that these policies were rigorously enforced “using a combination of both people and technology.”

WIRED can confirm that some of the flagged channels were indeed removed last week, including two cat-centric channels featuring themes of abuse. But other linked channels with reposts of the same videos remain on the platform. Go Cat, too, is still active, and its channel description remains unchanged.

WIRED could not find an email associated with Go Cat but reached out to other channels for comment. We did not receive a response.

The explosion of AI-animated cat videos is a defining feature of Elsagate’s second wave, surpassing every other type in both volume and extremity of content. With titles like “Kitten abused by its own mother,” these videos often take the form of fables, where kittens are starved, forced to do unpleasant chores, and audibly beaten by their parents with baseball bats or frying pans. They are then taken to the hospital and revived—before the parent arrives, apologetic for their actions, as melancholic music or a meowing cover of Billie Eilish’s “What Was I Made For” plays in the background. With near-identical channel names like “Cute cat AI” and “Cute cat of Ni,” experts say, they are a clear attempt to mislead young audiences—and an obvious move to lazily and sloppily monetize cheap content in ways unfathomable before the arrival of generative AI.

“We are deeply concerned about the proliferation of AI-generated content that appears to target kids and contains deeply inappropriate material,” Robbie Torney, senior director of AI programs at Common Sense Media, tells WIRED. The nonprofit, which rates and reviews media to provide accurate recommendations for families, was shown several such channels discovered during this investigation. The organization identified common themes across videos of “characters in extreme distress or peril,” “mutilation, medical procedures, and cruel experiments,” and “depictions of child abuse and torture.”

Although YouTube’s later changes, including new rules implemented in 2019 to comply with the US Children's Online Privacy Protection Act, mean these channels now typically appear on YouTube’s main app rather than YouTube Kids, their intentions are only thinly veiled. Sounds of babies’ laughter and babbling are blended in with music and set to backdrops of bright, Cocomelon-esque landscapes. (In fact, the popular kids’ cartoon even appears in the background of some of these videos.) Although Go Cat directly advertises its content to children, others claim to be “not for kids” in the description or avoid mentioning their audience entirely. The metadata for several channels revealed some videos have been tagged with keywords such as #funnycat, #familyfun, and #disneyanimatedmovies. Others, featuring polar bears and reindeer infected with parasites, are tagged with terms like #animalrescue, suggesting an attempt to appear alongside more educational content.

While in 2017, Elsagate content usually featured traditional animation or even actors dressed in costume (both of which are still a part of this new wave), the arrival of generative AI means that disturbing, brain-rot-style videos can now be produced much more rapidly and by anyone, regardless of skill.

“This trend is particularly concerning because of the scale and speed at which AI can generate this content,” Torney says. “Unlike traditional content creation, AI-generated videos can be produced in large volumes with minimal oversight. Without human review in the creation pipeline, inappropriate and potentially harmful material can easily reach kids.” The comparative speed of AI also means that when one channel is flagged and removed by YouTube, another with identical reposts springs up days later.

WIRED has seen images sent by content creator BitterSnake, who was part of a wave of YouTubers shedding light on these cat-themed channels back in January of this year. Originally posted on a community tab of two now-suspended channels, they appear to show an office environment in what looks to be Asia, with young workers sitting at computer desks, making hearts with their fingers in typical workplace camaraderie. A second image shows a worker at his desk, headphones on, phone beside him, a tissue lying crumpled in the background. The scene would be utterly typical of a young student or intern immersed in his first job—if not for a computer screen featuring an adult cat, lying deceased in a pool of shimmering red blood, as its young kitten looks on, traumatized.

Tracy Pizzo Frey, senior AI adviser for Common Sense Media, recently testified at a California State Assembly hearing in support of a bill that aims to safeguard children from the risks of AI. The bill would require AI systems to be classified on a scale from “Prohibited Risk” to “Low Risk” and would bar children from using controversial AI companions such as Replika, alongside other measures. The scale of this problem is growing—and is likely to balloon further as AI-generated kids' content continues to dwarf its traditionally animated counterparts.

WIRED has shared with YouTube more than 70 similar content-farm channels found during the course of this investigation. Most of these involve AI-generated images of cats alongside themes of gore, sex, and child abuse—and their subscriber counts range from thousands to millions. Whether these views come primarily from humans, though, or are simply the dead internet theory made real, is debatable—although hundreds of automated comments across these videos suggest it could be the latter.

On reviewing the channels, YouTube explained that it required all creators to label AI-generated material as such, including content aimed at kids and families, and that it had introduced a set of guidelines around what it called quality content.

“We want younger viewers to not just have a safer experience but also an enriching one,” a YouTube spokesperson says. “To support this, we partnered with experts to create a set of quality principles for kids and family content meant to help guide creators in creating quality content for kids and reduce the amount of content that is low quality, regardless of how it was created.” YouTube claims that since introducing these principles—which guide which content is monetized, shown in recommendations, and appears on YouTube Kids—viewership of “high quality” content has increased by 45 percent on the YouTube Kids app.

Still, regardless of their audience, and as YouTube’s moderators scramble to remove them, Elsagate’s successors remain on YouTube’s main platform—continuing to find new ways to bend the rules at every turn. Nor is the problem unique to YouTube: similar videos have appeared on TikTok in recent months, where the Runway AI generator was overlaid onto real footage of suicides and mass shootings to create “minion gore” videos, 404 Media reported. TikTok told 404 Media that “hateful content as well as gory, gruesome, disturbing, or extremely violent content” is prohibited and said it is taking action to remove harmful AI-generated content that violates its policies.

“We recognize that short-form video platforms are working to address content moderation challenges, but the nature of AI-generated videos presents unique difficulties that may require new solutions,” Torney tells WIRED.

“The rapid evolution of AI technology demands that all stakeholders—platforms, content creators, parents, and organizations like ours—work together to ensure kids’ exposure to online video content is safe and positive.”