Now that social distancing has become the norm, platforms like Zoom and Google Hangouts/Meet have become increasingly popular – and necessary. Countless organizations, from day schools and universities to startups and nonprofits, have moved their meetings over to online teleconferencing providers.
Zoom and platforms like it are of enormous benefit to organizations that need to communicate virtually, but they come with their own set of challenges and flaws – including, in some cases, enabling online toxicity to reach a much wider audience than ever before. We’ll take a look at some examples of those challenges and see what can be done to mitigate their impact.
Millions of users have flocked to platforms like Zoom due to social distancing and the Work From Home (WFH) environment. Zoom’s stock has spiked by over 100% since January due to its rapid user growth. Zoom’s CEO announced that the platform went from roughly 10 million daily meeting participants in December to over 200 million. But not all those users have the best intentions.
A new type of online trolling, called “Zoom-bombing,” has surfaced. Zoom-bombing typically involves an outside party infiltrating a Zoom call with the intent to disrupt or harass participants or, in the worst cases, share explicit content.
Chris Hadnagy, of the Innocent Lives Foundation, shared some mild examples of Zoom-bombing. These included incidents such as students taking over a classroom session and kicking the teacher out, or kids posting inappropriate messages and links in the chat feature of the conference call.
But Zoom-bombing is most worrying when it comes from users who are not supposed to be on the call in the first place. A recent case of Zoom-bombing highlighted the dangers it can pose to young children.
A conference call hosted by Grovecrest Elementary in Utah ended with a hacker hijacking an unsecured Zoom session. The school’s principal began a session with nearly 50 students on Zoom. A few minutes after the call began, an unidentified user entered the meeting and proceeded to expose the students to explicit material.
“He [the principal] heard someone behind the scenes use profanity, and then some pornography was put on the screen,” said Kimberly Bird, spokesperson with the Alpine School District. “He said, ‘Oh my gosh, oh my gosh,’ and shut the meeting down.” The graphic images were only on the screen for a total of three seconds, but it was “three seconds too much,” Bird said.
A similar incident occurred on Long Island, but that one involved hate speech and bullying as well.
These examples are far from isolated incidents. Comparable Zoom-bombings have been reported with worrying frequency since social distancing became global policy.
Staying Safe While Staying Home
Keeping your Zoom calls secure, especially when children are participating, is an absolute necessity. Here are some tips to help your Zoom call stay safe:
- Don’t use your personal meeting ID for meetings that you are hosting. Instead, use an ID that is exclusive to each meeting.
- Enable the “waiting room” feature so that you, as the host, can see who is attempting to join each meeting before granting them access.
- Disable the ability for participants to join before the host arrives. Also disable screen sharing for non-hosts to prevent intruders from broadcasting whatever they want.
- Once the meeting begins and everyone is in attendance, lock the meeting to outsiders and assign at least two meeting co-hosts.
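For organizations that schedule meetings programmatically, several of the tips above can be enforced at creation time through Zoom’s REST API (v2). The sketch below only builds the meeting-creation payload; the endpoint and setting names (`use_pmi`, `waiting_room`, `join_before_host`) reflect Zoom’s documented “create meeting” schema, but should be verified against the current API reference, and actually sending the request (with an OAuth token) is omitted.

```python
import json

def secure_meeting_payload(topic: str) -> dict:
    """Build a Zoom meeting-creation payload that applies the safety tips above.

    Field names follow Zoom's REST API v2 "create meeting" schema
    (POST /users/me/meetings) -- verify against the current API reference.
    """
    return {
        "topic": topic,
        "type": 2,  # a scheduled meeting with its own ID, not the personal room
        "settings": {
            "use_pmi": False,           # don't reuse the personal meeting ID
            "waiting_room": True,       # screen attendees before admitting them
            "join_before_host": False,  # nobody enters until the host arrives
        },
    }

if __name__ == "__main__":
    payload = secure_meeting_payload("Grade 5 morning check-in")
    # This JSON body would be POSTed to https://api.zoom.us/v2/users/me/meetings
    # with an OAuth bearer token; the HTTP request itself is left out here.
    print(json.dumps(payload, indent=2))
```

Note that locking the meeting and restricting screen sharing mid-call are in-meeting host controls rather than creation-time settings, so they still need to be applied by the host once everyone has joined.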
L1ght & The Fight Against Online Toxicity
Zoom-bombing and other kinds of teleconferencing trolling are just the latest mutation in a plague of online toxicity that has been steadily growing.
Recently, L1ght conducted a thorough deep-dive into recent examples of online hate speech and cyberbullying. The results were staggering. L1ght found substantial increases in hate speech, abusive hashtags, and traffic to known hate sites since the coronavirus outbreak began.
People are spending more time than ever on online platforms, and this puts everyone, especially children, in harm’s way.
L1ght is an AI-based company that detects toxic online content to protect children. L1ght uses sophisticated algorithms to help social networks, search engines, gaming platforms, and hosting providers identify and eradicate online toxicity such as cyberbullying, harmful content, hate speech, and predatory behavior.
To learn more about L1ght, reach out.