Until now, many tech companies have been largely shielded from liability for toxic content on their platforms by Section 230 of the Communications Decency Act in the U.S. and similar protections around the globe. This type of legislation places much of the responsibility for content found on platforms, and for potential online toxicity, on the users who post it, not on the platforms that host it. With recent developments, all this could change dramatically.
Tech regulation rarely stays confined to a single jurisdiction. Much as Europe’s privacy law (the GDPR) inspired similar rules in the U.S. (the CCPA, for example), President Trump’s recent executive order aligns with European legislation set to place more responsibility on tech companies – including social networks, gaming platforms, and web hosting services – for the content found on their platforms. Platforms may be forced to significantly ramp up efforts to prevent toxicity such as hate speech, child sexual abuse material (CSAM), self-harm content, and predatory behavior on their sites.
In this post, we’ll take a look at how companies will need to respond to these changes, how they will need to deal with online toxicity, and the steps they can take to keep their users safe.
With Great Power Comes Responsibility
Just as car manufacturers ship vehicles with seat belts to protect drivers and passengers, so too – according to many online safety advocates – should platforms hosting content provide some form of protection and safety mechanisms for their users.
As L1ght CEO Zohar Levkovitz puts it, “Now that social networks are so powerful, with that power comes responsibility. Social networks are the only product we are consuming where no one is liable for our safety.”
While freedom of speech is critical, any solution should preserve that freedom while also protecting users, especially younger ones. The executive order has stirred this debate even further. However, both sides of the debate would agree with Levkovitz when he says, “Let’s use this to start a conversation about how we can solve this problem in the industry.”
Content Responsibility: Global Highlights
As noted above, the recent executive order seeks to hold online platforms to a higher degree of responsibility for content that they host. The way it does this is by limiting the protections these platforms enjoy under Section 230 of the Communications Decency Act.
The Communications Decency Act was passed in 1996. Section 230 was originally added to the legislation to shield website operators – news outlets with comments sections, online forums, and other sites where people could contribute their thoughts – from liability for speech posted by their users.
With Section 230 under attack, platforms that host content can expect a far bigger job ahead of them. They will need to actively monitor and moderate content, because the blame for toxicity can now be laid at their doorstep.
In France, a stronger piece of legislation was recently passed. The National Assembly passed a law that gives platforms a one-hour deadline to remove illegal content after being instructed to do so by the authorities.
Toxic content – for example, hate speech, racist comments, or religious bigotry – must be removed within 24 hours of being reported by users. If companies don’t comply, they can face fines of up to 4% of their global revenue.
Germany has approved a bill that requires online platforms to be more proactive and report online toxicity to the authorities. This is in addition to the requirement already in place that social media companies delete harmful content within 24 hours.
The law extends to multiple forms of toxicity, including extremist propaganda, graphic portrayals of violence, threats, or distribution of child sexual abuse images.
Platforms that do not comply with the bill could face fines of up to €50 million.
The British government intends to give regulators the power to fine social media companies for harmful material on their platforms. This will be done through the U.K.’s telecommunications watchdog, Ofcom.
Julian Knight, the head of the British Parliament’s Digital, Culture, Media and Sport Committee – which covers the regulation of all media companies – referred to “a muscular approach” regarding regulation in this area.
As we’ve seen, global change is imminent. Governments worldwide are shifting the responsibility for online toxicity from the individuals who post it to the platforms that host it.
At L1ght, we have built technology that can identify and prevent online toxicity, and our tech is currently being used by law enforcement agencies, multi-billion dollar publicly traded companies, and private entities. L1ght believes it’s a crucial partner and natural ally of online platforms in the fight to keep platforms toxicity-free while preserving freedom of speech and complying with emerging legislation.
Contact L1ght to find out more about how L1ght’s technology can be integrated into any platform to stop online toxicity.