“Parental guidance suggested.”
“Mature audiences only.”
“Parental advisory warning.”
For years, the film, music, and video game industries have played a key role in protecting your children from potentially damaging content. They screen media and label it so that young people are not exposed to inappropriate material. The system has never been foolproof, since kids can find ways to get their hands on things they’re not supposed to; but, as a parent, you knew what signs and symbols to look for when letting your kid consume media.
The internet has fundamentally changed how children interact with digital content. Some sites do use “protection” measures, but these aren’t enough to stop your kids from viewing content they shouldn’t. Pornographic websites and the darker corners of 4chan and Reddit rely on little more than the honor system: a message explaining the collection of cookies and a prompt to enter your birthday. Some even argue that it amounts to the same thing as kids buying cigarettes simply because they told the clerk they were 18 years old.
Though seemingly tamer than the websites mentioned above, YouTube is a near-universal platform, and there is growing concern that it might no longer be safe for your children. How does it threaten their safety?
In late November, a disturbing phrase began trending on YouTube. If you typed “how to have” in the search bar, the autofill results listed things such as “how to have s*x in school” and, worst of all, “how to have s*x with kids.” These results were the work of malicious trolls who exploited YouTube’s and Google’s search algorithms to make certain searches appear far more popular than they really are.
Only a few hours later, YouTube had fixed the issue, removing the unsettling search from its autofill suggestions.
This incident points to a far broader trend in digital entertainment. Inappropriate content is available to, and often even targets, children.
Even videos featuring popular characters from Nickelodeon and Disney can have adverse effects on children. These large companies often don’t make all the content featuring their beloved mascots themselves. Instead, obscure production companies may use these intellectual properties to crank out videos, and they are not necessarily concerned with the actual content. These companies produce videos laden with keywords to increase hits and deliver advertisements.
Then there are those videos that feature popular characters for inappropriate purposes. These range from satirical videos meant for adult consumption to downright disturbing clips of your child’s favorite heroes engaging in lewd or violent acts. Videos like these may even earn money, using advertisements to feed the producer’s coffers and, in turn, generate revenue for Google as well.
The YouTube Algorithm
Why is all of this happening? The answer lies in the algorithms that major sites such as Google and YouTube use to suggest relevant or trending content. Ever wonder why you find yourself inexplicably falling down the YouTube rabbit hole as 4 AM approaches? Thank the algorithms.
In short, the YouTube algorithm is the way the site determines not only which videos you see but also where you see them: on your homepage, in the trending stream, in your subscriptions, in notifications, and in the suggested videos stream. YouTube explains that the purpose of targeting content is to “help viewers find the videos they want to watch and maximize long-term viewer engagement and satisfaction.”
Though the exact details of the system are closely guarded, the basic architecture relies on a few key points. YouTube follows the audience by tracking:
- what they watch
- what they don’t watch
- how long they watch
- likes and dislikes
- “not interested” feedback
This algorithm exists to help content creators tailor their videos to a specific audience. It is also easy to manipulate. Through SEO tactics, trending topics, and buzzwords, producers can increase the likelihood that their channels appear in suggested results. Much like the trending searches, people can band together to generate hits on any video and increase its visibility.
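To make the signals above concrete, here is a deliberately simplified sketch of how an engagement-based ranker might score and order videos. This is a toy illustration only: YouTube’s real system is proprietary and far more complex, and every field name and weight below is an assumption invented for this example, not YouTube’s actual model.

```python
# Toy engagement-based ranking sketch. All signal names and weights are
# hypothetical illustrations of the signals listed above (watch time,
# likes/dislikes, "not interested" feedback) -- not YouTube's real system.

def engagement_score(video):
    """Combine simple engagement signals into a single score (made-up weights)."""
    watch_ratio = video["watch_time"] / video["duration"]   # how long they watch
    like_signal = video["likes"] - video["dislikes"]        # likes and dislikes
    penalty = 5 * video["not_interested"]                   # "not interested" feedback
    return 10 * watch_ratio * video["views"] + like_signal - penalty

videos = [
    {"title": "Cartoon A", "views": 1000, "watch_time": 120, "duration": 300,
     "likes": 50, "dislikes": 5, "not_interested": 2},
    {"title": "Cartoon B", "views": 800, "watch_time": 270, "duration": 300,
     "likes": 30, "dislikes": 2, "not_interested": 0},
]

# Rank by score, as a recommender might when filling the suggested stream.
ranked = sorted(videos, key=engagement_score, reverse=True)
print([v["title"] for v in ranked])
```

Notice that the video watched nearly to the end outranks the one with more raw views, which also hints at why the system can be gamed: anything that inflates these signals, from keyword stuffing to coordinated viewing, pushes a video up the list.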
Shortly after the suggested-search debacle, YouTube removed thousands of videos, terminated over 50 channels, and pulled ads from close to four million videos across the platform. It also released updates to its protection policies for families and children, promising tougher application of its Community Guidelines, closer partnership with regional authorities as necessary, further removal of ads from inappropriate content targeting families, blocking of inappropriate comments on videos featuring minors, and guidelines for creators making family-friendly content.
Despite all of these increased safeguards, many say it is not enough. With over 400 hours of content uploaded each minute and one billion hours of video viewed on the site each day, experts don’t see an easy way to regulate content. They argue that even with human moderators and automated systems that flag questionable content, the platform simply hosts too much material to make rigorous moderation possible.
Do you think there may come a time when YouTube is completely safe for families? Is regulation just the first step in controlling what kind of content is available on the internet? How can parents better understand and control what their children have access to? Share your thoughts on keeping families safe in the comments below.