A Digital Devil’s Playground: YouTube Kids Fails to Provide a Safe Space for Children
A platform described as a “contained environment for kids” should, above all, protect children from danger and inappropriate exposure. YouTube Kids, however, despite that claim, has allowed sexually and violently suggestive content to spread across its app.
Unscrupulous content creators exploit faults in YouTube’s algorithm to slip their videos past review and in front of young children in hopes of making money. They build videos around popular cartoon characters such as the “PAW Patrol” pups and Spider-Man, luring children in with the promise of their favorite characters while displaying crude scenes. This wave of inappropriate exposure became known as the “Elsagate” scandal of 2017, because a large share of the concerning videos involved Elsa, the popular character from the movie Frozen. Among the videos shared on YouTube were depictions of Spider-Man urinating on Elsa and of PAW Patrol babies attempting suicide.
The creators producing these videos use various computer-generated imagery and animation techniques to get past the safety algorithms. They also place keywords like “education” in titles to get their videos onto the screens of young viewers.
Parents around the world have allowed their children to use YouTube Kids as a place of academic and enjoyable exploration. What they didn’t realize at the time was that the app they assumed was safe ended up being the same place where their three-year-old child could easily be exposed to murder, sexual depictions, violence, and substance abuse.
Along with using popular cartoon characters as bait for children, money-hungry creators also produce videos through content farms, which churn out low-effort, often AI-generated video game animations reminiscent of Minecraft. These farms are built to game YouTube’s search systems so their videos show up on many users’ pages. While adding little of value to YouTube, the videos often remain up because content-farm creators manage to squeeze them through the safety algorithm.
These inappropriate videos are able to spread through YouTube because they are regulated by automated systems, which are easily fooled into deeming a video appropriate based on a few words or familiar characters. This lack of proper protection further shows the need for YouTube’s human moderators to be the ones reviewing these videos and keeping track of inappropriate patterns.
YouTube Kids’ main purpose is to be a safe space for children: a place safer than YouTube, where children are guaranteed protection and can freely explore the app. There should be no opportunity for a parent to walk in on their child crying while watching cartoon characters violently murder each other.
YouTube’s Negligence
YouTube is a vacuum of information: short-form, long-form, gameplay, commentary, tutorials, documentaries, and beyond. The platform hosts a plethora of creators, with the appeal being that anybody can upload whatever content they desire so long as it fits within YouTube’s specifications.
As time has progressed, however, many creators and viewers alike have encountered roadblocks, and even deeply concerning content, exposing a bitter, ugly underbelly to the platform. The negligence the platform has shown toward its viewers and creators over the years has accumulated in various ways, with a staggering effect on the internet as a whole.
Shadowbanning and YouTube Monetization
For many creators on the platform, YouTube is more than just a hobby: it is a career, and a sustainable source of livable income. In recent years, however, creators have felt that the income and platform they have worked so hard to maintain come with unpredictable, fickle strings attached, making it often impossible to understand how YouTube will choose to moderate their channels.
During YouTube’s early days, the guidelines for content were shaped mainly by the events occurring around the time of its massive uptick in popularity. In 2010, YouTube rolled out changes such as the ability to flag harmful or even terrorist content. From there, rapid-fire changes followed with the introduction of other rudimentary guidelines, including restrictions on graphic content and inappropriate language.
The first event to dramatically reshape YouTube came in 2016 with what creators dubbed the “AdPocalypse.” Advertisers who commonly ran ads on the platform refused to continue doing so unless YouTube became more family-friendly and restricted inappropriate content.
To some, this was a change that would ultimately shape YouTube for the better: making the platform more welcoming to people of all ages would help creators all around. However, many began to take issue with how YouTube rolled out these changes, finding that they were flagged for inappropriate content or copyright violations without having much of a voice in the matter.
Many smaller creators felt they would be the targets of these unreliable content-moderation restrictions and would have no voice within YouTube to speak against them, as they didn’t have any sort of representation within the YouTube team.
Rob Gavagan, a small creator who made true crime content, for example, spoke out about the limits on his revenue caused by unclear and unfair restrictions on his content. He made a video in which he discussed specifically how these changes would affect horror and thriller creators who had no clear guide on where to take their content.
Since then, YouTube has adjusted its moderation to give creators more leniency. It restructured its advertiser guidelines and built tools into the creator dashboard to help creators sort through their content and more easily dispute claims that their videos are suitable only for viewers 18 and older.
However, many creators are still waiting for YouTube to become far more transparent with its users, and want more outlets to share their opinions about platform changes. YouTube has evolved a lot since the age of the AdPocalypse and has become a full source of revenue for many across the globe. With that in mind, many creators feel that shadowbanning, censorship, and unfair demonetization endanger not only their platforms but also their income and livelihood.
Children and YouTube Exploitation
In 2017, parents began to notice their kids stumbling upon strange videos, many of which appeared on the YouTube Kids app, which is specifically dedicated to children browsing the site. These videos featured seemingly innocent elements, such as cute thumbnails and animations, paired with oddly gruesome imagery.
The controversy was branded “Elsagate” because the most notable examples of ominous, borderline explicit kids’ content seeping through the cracks came from an Elsa and Spider-Man YouTube channel, whose videos showed the characters speaking or acting in subtly explicit ways.
YouTube responded to this issue in 2017 with a series of published guidelines adding more restrictions against this kind of content. It blocked the monetization of content featuring branded characters and said in a press release that it was “taking an even more aggressive stance by turning off all comments on videos of minors” and would “[double] the number of Trusted Flaggers we partner with in this area.”
The issue subsided for some time, but during quarantine there was a rise in kids-targeted clickbait created under the guise of family-friendly video. Minecraft, Among Us, and other popular kids’ games were used to lure children in with flashy thumbnails before throwing in blatantly inappropriate footage of explicit content or graphic imagery, which would gain traction on the YouTube Kids platform. Once again, YouTube put out a statement saying that, in response to the discovery of this pocket of content, it planned to crack down and look specifically into making sure the algorithm could not be used for exploitation.
Around the same time as this discovery, users began to take notice of channels made to seem like “for-kids-by-kids” platforms, on which parents would film their children doing activities such as unboxing toys or acting out skits. However, an investigation by WIRED uncovered a separate pocket of this content: channels heavily dependent on kids as the content were examined under a closer lens, and many showed blatant signs of abuse and exploitation.
Take, for example, ToyFreaks, a channel that had amassed over 8 million subscribers and averaged around 300,000 views per video. It was run by Gregory Chism and built around his two daughters, whom the videos showed Chism pranking in ways that put the children in exceedingly precarious and explicit situations. The channel was taken down in late 2018 after YouTube took notice of the complaints being filed and struck it from the platform.
This has remained a recurring issue on the platform. Kids’ channels such as DaddyOFive and, more recently, 8 Passengers have been exposed for malicious activity and even abuse of the children in their videos, fueling a new wave of child exploitation that has been partially propagated by YouTube’s algorithm.
That’s not to say that YouTube hasn’t been acting against these channels. It has actively worked with law enforcement and whistleblowers to take down as many exploitative channels as possible. But this raises the question of whether more could be done to prevent these channels from rising to popularity so quickly in the first place. After all, with YouTube’s autoplay, those who click on one of these disturbing videos will automatically be fed more of them.
Regardless, the effects of children’s interactions with the platform are still being explored as the younger generation that grew up with YouTube begins to enter adulthood.
According to a study done by the Pew Research Center in 2018, 81% of parents say their child watches content on YouTube. Since then, that number has drastically increased. With that many children watching the platform, the responsibility of keeping it safe for kids and providing children with a safe space becomes increasingly important.
With the brain 90% developed by age 5 and the prefrontal cortex rewiring until age 25, infancy through adolescence marks a crucial period for physical, emotional, psychosocial and behavioral development. We stress the importance of proper nutrition – what’s fed into the mouths of our children – to fuel physical growth, but it’s equally vital to address the information that’ll cement lifelong habits and belief systems – i.e., what we feed their brains.
Guardians can take precautions through close monitoring or website blocking, but surface-level security has become insufficient. It’s when explicit content disguises itself as kid-friendly – bypassing censorship – that parents lose awareness, much less control, of the ideas fed to their children.
Consider that boys, on average, are first exposed to pornography at age 12, and that those exposed earliest demonstrate greater levels of body dissatisfaction, acceptance of sexual harassment and sexual aggression; likewise, a study conducted among 887 adolescents linked adult criminal and violent behavior with exposure to television violence 15 years prior. From addiction to links with future abuse, it’s abundantly clear that early contact with uncensored content can irreversibly poison the moldable minds of our youth. So when creators conceal mature themes behind innocent cartoon or video game characters, we must question what children are being taught when we turn on the iPad, and ultimately what tools, or weapons, we’re equipping them with to carry into adulthood.
Connie M. Tang, a psychology professor at Stockton University with a Ph.D. in developmental psychology, specializes in young children and their exposure to crime. We spoke to her about the long-term effects on children of viewing explicit entertainment.
Q: To begin with, could you give us an overview of your background?
A: Yes, so I’m from mainland China. I came to the United States when I was 21 to obtain my master’s degree in social work, obtained my license in clinical social work and then worked in child protection in California for about three years… I obtained my PhD in developmental psychology, which is the field about how children mature into adults. So, since then, I have tried to combine my two interests in my research and teaching. My number one [interest] is about children as victims of crime.
In graduate school, I worked in a psychology and law lab, and it actually looked at how the public perceives juveniles [tried as] adults – meaning young people who have committed pretty serious crimes. In my clinical work, I found that many times when you have a child who is abused, there is a pretty big positive correlation with the child becoming an offender. So victimization and offending are closely correlated.
Q: What do you think, psychologically, are the long-term implications of early exposure to mature content on YouTube or similar platforms?
A: Because of the process of desensitization, you know, the sensational content at the same level is no longer as attractive as it was for the initial exposure. So it doesn’t stop… just at that level, and it spirals into addiction or reliance. Over a long period of time, that material no longer makes as deep an impression as before.
Q: How can that kind of exposure affect people’s relationships as they move forward in life?
A: Over time, I think there could be a blurring [between] reality and media… When you think about school shootings, many times when you see those well-known ones, you can see that the way that they kill people was almost like playing a video game. So, in violent video games, you only score points, but you don’t see the suffering or deaths of people who, you know, went through that in real life. It’s dangerous to think that media is a true representation of reality.
Q: You talked about the correlation between victimization and future offenses. Could you dive a little bit deeper into that?
A: [A psychologist], Albert Bandura… has a theory [about] learning through role modeling. He designed a research project, exposing children to violence… and in that case, it was just hitting and cussing at an inflatable, life size doll… He found that when children were exposed to aggressive adult role modeling, they were also more likely to hit the dolls when they were put under a stressful situation. For example, these children were led to a room where there was a display of very attractive toys. And then the children were told that those toys were only reserved for special children, but that [they were] not really special, and therefore couldn’t play with them. It’s very simple manipulation to introduce frustration. So, the children who were exposed to the violent adult role model became aggressive toward each other in addition to the doll. But if there was no negative adult role modeling, there [was] no increased aggression. So when you experience maltreatment, let’s say in the form of physical abuse at home, then you imitate what your parents and your caregivers did to you because that’s sort of normal. That’s a part of what you know from growing up. In the context of media exposure, there’s a similar link between physical abuse and maltreatment and violent offending later down the road. If kids, when their brains and sense of morality are still developing, are constantly watching inappropriate things, they’re not going to process how those things aren’t normal.
Q: We touched on the behavioral/mental effects that flat-out exposure to pornography has on kids, but how do you think the consequences differ when it comes to content that’s disguised as family-friendly?
A: I have reasons to say either could be more damaging. So, of course, for outright pornography, you can say that it’s not relatable to real life, even though it seems more damaging on a surface level. But I think that disguising it as comedy or something can be more insidious because it teaches kids that these things are okay and not to be taken seriously to begin with. We talked about the link between victimization and future offenses, and I think we can use similar logic. Kids trust their parents and see them as role models, so if they’re abusive, the children are less likely to register that as wrong. If parents hand their kids an iPad with YouTube Kids and the videos send bad messages, why would the children, who haven’t been taught any better, see anything wrong with what their parents are letting them watch?
The average attention span of a goldfish is nine seconds. Yet, in 2015, through a study surveying 2,000 Canadians and studying the brain activity of 112 others, Microsoft found that our average attention span had dropped from 12 seconds to eight since 2000. The perpetual, unadulterated bombardment of content streamed to us on platforms such as YouTube has undoubtedly played a role in this decline. Think of split-screen Shorts playing a Subway Surfers recording while telling a story, or the average MrBeast video packed to the brim with punchy sound effects and flashy visuals. These videos are made to seize your attention for as long as possible, and in doing so, they’ve nullified your ability to control that attention.
So, when our time and focus are the most valuable commodities in the world, shouldn’t this be alarming?
Well, YouTube knows what it’s doing. Its business model relies on it. These algorithms were created for the sole purpose of serving users content they are likely to enjoy in order to maximize engagement and, in turn, profits. By analyzing users’ actions, behaviors, and interests, the algorithm tailors content recommendations to each individual in hopes of retaining viewers’ attention for as long as possible.
However, under the guise of “prioritizing educational and entertaining content,” YouTube Kids’ algorithm can inadvertently push inappropriate videos into children’s feeds, promoting problematic content due to a lack of proper moderation. Additionally, the algorithm’s focus on viewer retention and clicks can lead it to suggest high-performing videos regardless of what those videos actually contain. As a result, videos with more likes or views can make their way into children’s recommended lists despite the presence of disturbingly sexual or violent themes.
Furthermore, younger children, lacking the judgment skills most adults have, are more likely to click on whatever appears the most stimulating, with bright, colorful thumbnails or exciting titles, regardless of whether the content is appropriate. The algorithm then amplifies the number of these videos in children’s recommendations, leading children to more inappropriate content that they may never have actively searched for.
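To make that feedback loop concrete, here is a minimal, hypothetical sketch of an engagement-driven recommender paired with a naive keyword filter. This is not YouTube’s actual system, whose ranking and moderation pipelines are proprietary; every name, field, and number below is invented for illustration. The point is simply that when eligibility hinges on self-reported keywords and ranking hinges on attention captured, a disguised video tagged “educational” can outrank a genuine lesson.

# Hypothetical illustration only: a toy engagement-driven recommender with a
# naive keyword filter. All names and numbers are invented; YouTube's real
# ranking and moderation systems are proprietary and far more complex.

from dataclasses import dataclass

@dataclass
class Video:
    title: str
    keywords: list[str]
    clicks: int                # how often children tap the thumbnail
    avg_watch_seconds: float   # how long they keep watching

KID_SAFE_KEYWORDS = {"educational", "nursery rhymes", "learning"}

def passes_naive_filter(video: Video) -> bool:
    # A keyword allowlist is trivially gamed: an uploader only needs to
    # tag a disturbing video as "educational" to slip through.
    return any(k in KID_SAFE_KEYWORDS for k in video.keywords)

def engagement_score(video: Video) -> float:
    # Ranking rewards attention captured; nothing here inspects whether
    # the content itself is appropriate for children.
    return video.clicks * video.avg_watch_seconds

def recommend(videos: list[Video], top_n: int = 3) -> list[Video]:
    eligible = [v for v in videos if passes_naive_filter(v)]
    return sorted(eligible, key=engagement_score, reverse=True)[:top_n]

if __name__ == "__main__":
    feed = [
        Video("Counting to 10 with Shapes", ["educational"], 5_000, 90.0),
        Video("Elsa and Spider-Man PRANK", ["educational"], 40_000, 180.0),
    ]
    for video in recommend(feed):
        print(video.title)  # the disguised clickbait outranks the genuine lesson

Under these assumed rules, whichever upload best seizes a child’s attention wins placement, which is exactly the gap a proper content filter would need to close.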
Without a proper filter on YouTube Kids’ algorithm, there is nothing to distinguish genuinely educational videos from inappropriate content that simply bypasses content moderation through bright colors or animation. Ultimately, YouTube Kids must refine its algorithm to more adequately filter out and remove harmful content if it is to truly ensure reliable protection for its young users.