Much like the 19th century American Western Frontier, the Internet is a rapidly expanding landscape with tremendous potential for development.
But not everyone on the digital frontier is a noble sheriff or chivalrous cowboy. There are some people out there who want to take advantage of this loosely regulated landscape for personal gain. While our technological developments have obviously ushered in an age of amazing interconnectedness, serious steps need to be taken in order to ensure internet safety and actively prevent malicious behavior online.
The term “internet safety” can mean a variety of things. It can refer to keeping your Bitcoin wallet tightly shut, making sure your Social Security number doesn’t end up on Twitter, or even ensuring that your Wi-Fi network isn’t compromised. We’re going to address the aspect of internet safety known as “online toxicity.” While itself a broad term, online toxicity generally refers to hostile interactions like cyber-bullying and predatory behavior that take place online.
A chilling study from The Center for Cyber Safety and Education reported that 40% of kids in grades 4-8 admitted to chatting with a stranger online. Over 20% actually went on to have phone conversations with those strangers as well. Many even met those strangers in person, at a mall, in a parking lot, or sometimes even in their own homes.
Clearly, when it comes to internet safety, we’re faced with a massive challenge.
Challenges to Internet Safety
One of the major challenges to keeping children safe online is the anonymity offered to would-be predators and cyberbullies. For example, even though an Instagram or Twitter profile might have the picture and bio of a 12-year-old, there’s really no way of knowing whose fingers are doing the typing. Anyone can make a profile and furnish that profile with any details they want. Sometimes, as is the case on platforms such as Reddit, there’s no need to provide any personal information beyond a screen name.
And it only gets worse from there. Services like Virtual Private Networks (VPNs) encrypt all data leaving a computer and route it through a remote server, scrambling each packet so it is unreadable to anyone in between and masking the user’s real location. This makes it extremely difficult for online platforms to ban predators and even for law enforcement to track them down.
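To see why encrypted traffic is opaque to anyone watching the wire, here is a deliberately simplified sketch in Python. It uses a toy XOR stream cipher built from a hash function; real VPNs use vetted ciphers like AES or ChaCha20, and the key, packet, and function names below are all illustrative assumptions.

```python
import hashlib
from itertools import count

def keystream(key: bytes):
    # Derive an endless stream of pseudo-random bytes by hashing the
    # key with a counter. A toy construction for illustration only --
    # not a real cipher like the AES or ChaCha20 used by actual VPNs.
    for i in count():
        yield from hashlib.sha256(key + i.to_bytes(8, "big")).digest()

def xor_encrypt(key: bytes, data: bytes) -> bytes:
    # XOR each byte of the data with the keystream. Applying the same
    # operation twice with the same key decrypts the message.
    ks = keystream(key)
    return bytes(b ^ next(ks) for b in data)

packet = b"hi, what school do you go to?"      # hypothetical chat packet
key = b"secret-shared-with-the-vpn-server"     # hypothetical shared key

ciphertext = xor_encrypt(key, packet)
print(ciphertext.hex())               # gibberish to anyone in between
print(xor_encrypt(key, ciphertext))   # only the key holder recovers it
```

Without the key, the ciphertext carries no readable content, which is exactly why platforms and investigators cannot simply inspect a predator’s VPN traffic.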
What are Platforms Doing to Help?
Internet safety is a serious issue for all online platforms and it becomes especially relevant the more popularity a platform enjoys. The best-known platforms have attempted to address internet safety in different ways:
YouTube has been widely criticized in the past for a nebulous harassment policy and for not doing enough to prevent bullying and abuse. Recently, however, YouTube has responded by banning content that contains slurs based on race, gender identity, or sexual orientation. They have stated that this ban covers everyone from private individuals to public officials. Furthermore, YouTube said it will remove offending content from channels and terminate a channel altogether for repeated offenses.
TikTok has gained 500 million active users in the three years since launching in 2016. It has recently been in the news over an alleged predator problem, with reports claiming that “TikTok struggles to protect teenage users from toxic videos.”
TikTok has begun to take steps to protect its users. It recently introduced family safety mode, for example, which links a parent’s account to their child’s account. Cormac Keenan, TikTok’s head of trust and safety in Europe, said the following: “When people use TikTok, we know they expect an experience that is fun, authentic, and safe. As part of our commitment to safety, the wellbeing of our users is incredibly important to us.”
Family safety mode, together with another recently launched safety feature called “Screentime Management,” shows that TikTok is beginning to seriously address this challenge.
According to Facebook, which is also responsible for platforms such as Instagram and WhatsApp, its approach to ensuring internet safety is mainly focused on investigations and awareness. It investigates predatory and criminal behavior, turns to law enforcement when appropriate, provides support to online abuse victims, and attempts to prevent additional cases of abuse by promoting community education and awareness.
Facebook does have parental controls in place which can help regulate kids’ online activities.
Microsoft took this a step further with the launch of its tool codenamed Project Artemis. This tool is an embedded feature that analyzes chats on online platforms and in video games in real time, attempting to identify chat patterns that resemble predatory grooming. When it identifies a threat, Artemis flags the conversation and forwards it to a content reviewer who determines whether or not to contact law enforcement.
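Artemis’s internals are not public, but the pipeline described above, which scores conversations for risky patterns and escalates high-scoring ones to a human reviewer, can be sketched in a few lines of Python. Everything here is a hypothetical illustration: the patterns, weights, threshold, and function names are assumptions, and a production system would learn its signals from labeled data rather than hand-written rules.

```python
import re

# Hypothetical risk patterns with weights; real systems learn these
# from labeled data instead of relying on hand-written rules.
RISK_PATTERNS = [
    (re.compile(r"\bhow old are you\b", re.I), 1),
    (re.compile(r"\bdon'?t tell (your )?(mom|dad|parents)\b", re.I), 3),
    (re.compile(r"\bsend (me )?(a )?(pic|photo)\b", re.I), 2),
]
FLAG_THRESHOLD = 3  # assumed cutoff for escalating to a human

def score_conversation(messages):
    """Sum the weights of every risk pattern found across a conversation."""
    return sum(w for msg in messages
                 for pat, w in RISK_PATTERNS if pat.search(msg))

def review_queue(conversations):
    """Yield the ids of conversations whose score crosses the threshold,
    mimicking the hand-off to a human content reviewer."""
    for conv_id, messages in conversations.items():
        if score_conversation(messages) >= FLAG_THRESHOLD:
            yield conv_id

chats = {
    "chat-1": ["gg, nice match!", "see you tomorrow"],
    "chat-2": ["how old are you", "don't tell your parents about this"],
}
print(list(review_queue(chats)))  # ['chat-2']
```

The key design point survives the simplification: software only triages, and a person makes the final call on whether to involve law enforcement.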
The steps taken by these platforms to improve internet safety are incredibly important ones. But as Courtney Gregoire, Microsoft’s chief digital safety officer, said about Artemis, it’s a “significant step forward” but “by no means a panacea.”
A Fully Equipped Solution
The internet, as the next digital frontier, is certainly exciting and full of opportunity. On the flip side, it also plays host to predators and bullies.
We demand safety in our neighborhoods and shopping malls; we should demand no less online. Our children should be able to run free and play on the internet the same way we let them go to a friend’s house for a playdate.
L1ght was created with this goal in mind. L1ght offers cutting-edge real-time safeguards against bullying, predators and online toxicity in general. Harnessing the power of deep learning and AI, L1ght can be embedded in any number of social platforms or gaming services to provide internet safety for all.