A CSAM Primer for Trust & Safety Professionals

A New Approach to CSAM Detection

Method                  Known CSAM    Unknown CSAM
Hash                    10-30%        0%
L1ght AI                85%           85%
L1ght AI + Moderation   97%           97%

AVAILABLE NOW FOR: SOCIAL PLATFORMS, ADULT SITES, BROWSERS, SEARCH ENGINES

Contact us for integration details and to schedule a demo.

The spread of Child Sexual Abuse Material (CSAM) is a leading concern for Trust & Safety teams working to eliminate toxic content from their platforms and uphold company policies. The most common practice for identifying CSAM has been using hashes to compare against a central database of known CSAM.

L1ght brings a new and thorough approach that lets you act on CSAM immediately without waiting for someone else to report it.

L1ght’s CSAM detection technology is currently used by US law enforcement.

The Challenges With Hashes

A hashing algorithm produces a compact signature (hash) of an image, video, or other file, which can be used to determine whether two files are identical or similar.

Similar to fingerprints, hashes are unique to the image they represent. This creates challenges for identifying CSAM, because any slight change to the content, for example cropping, resizing, rotating, or re-encoding the file, produces a hash that no longer matches the database.
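As a minimal sketch of why exact hash matching is brittle, the snippet below uses an ordinary cryptographic hash (production CSAM databases typically rely on perceptual hashes such as PhotoDNA, which tolerate small edits better but can still be defeated by larger changes): flipping a single bit of a file yields a completely different digest, so it will not match any entry in a known-hash database.

```python
import hashlib

# Stand-in bytes for an original image and a near-identical copy
# (e.g. the same photo after re-encoding or a one-pixel edit).
original = b"...example image bytes..."
modified = bytearray(original)
modified[0] ^= 0x01  # flip a single bit

print(hashlib.sha256(original).hexdigest())          # digest of the known file
print(hashlib.sha256(bytes(modified)).hexdigest())   # entirely different digest
# The second digest shares nothing with the first, so a lookup against a
# database of known hashes fails even though the files are visually identical.
```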

L1ght’s Machine Learning

L1ght’s anti-toxicity AI helps protect sites, apps, and platforms from CSAM by continuously analyzing each image and video with state-of-the-art Machine Learning (ML) micro-classifiers.
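As a rough, hypothetical sketch of the micro-classifier pattern described above (the names, signals, and thresholds here are illustrative assumptions, not L1ght's actual models), several small models each score one narrow signal and the scores are combined into a single flag:

```python
from typing import Callable, Dict

# A micro-classifier takes raw media bytes and returns a score in [0, 1].
MicroClassifier = Callable[[bytes], float]

def run_micro_classifiers(media: bytes,
                          classifiers: Dict[str, MicroClassifier],
                          threshold: float = 0.9) -> dict:
    """Score one image or video against every micro-classifier and aggregate."""
    scores = {name: clf(media) for name, clf in classifiers.items()}
    # Illustrative aggregation: flag if any single signal is confident, or the
    # average across signals is high; a production system would learn this step.
    flagged = (max(scores.values()) >= threshold
               or sum(scores.values()) / len(scores) >= threshold)
    return {"scores": scores, "flagged": flagged}
```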

Introducing L1ghtning:
The Trust & Safety API

Create AI Guardrails to Curb Online Toxicity on Your Platform.

Implement brand protection, reduce user churn, and enhance trust & safety with 90+ models that analyze text, photos, and videos.

L1ghtning For Web Hosting Providers

Scan your hosted sites for problematic and illegal content, images, and videos, including adult content, hate, and CSAM.

Our Work Saves Children.

L1ght’s analysts, engineers and data scientists continuously work with technology companies and law enforcement to identify and bring child predators to justice.

L1ght Helps Facebook Remove 120,000 Child Predators from WhatsApp

Front-page story in the Financial Times

L1ght Helps Bing Remove CSAM from Search Results

It started with an anonymous tip to TechCrunch

Latest News

November 19, 2020

L1ght raises $15 million for AI that protects children from online toxicity

It’s nearly impossible to monitor massive platforms manually, which is why automation and AI are playing increasing roles in the gatekeeping process.
November 19, 2020

Doing Good Is No Longer Just For Nonprofits — It’s Also Tech Companies’ Responsibility

Businesses tend to do best when built around specific problems, child abuse included. L1ght, which uses AI to prevent people from hurting kids with toxic content online, has uncovered toxic material hosted on GIPHY, CloudFlare, Bing, and more.
November 18, 2020

Content Moderators Alone Can’t Clean Up Our Toxic Internet

Leaders of popular online platforms have too often failed to adequately support moderators facing such emotionally overwhelming content.
November 18, 2020

As Children Spend More Time Online, Predators Follow

Reports of online child exploitation have risen since the start of the coronavirus pandemic.

The L1ghtning API:

Taxonomic Classification

Web Services

Learn how we combine our product philosophy with our approach to data science as we create and expose an ever-growing, breathing taxonomy of Online Toxicity.

Integrating anti-toxicity AI should be as easy as integrating a payments API.

Use our structured API or customize our 90+ models to identify, flag & respond to:

Bullying, Harassment, Predatory Behavior, Hate, Self-Harm, Grooming, CSAM.

NCMEC & Law Enforcement Reporting Included.
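To make the "as easy as integrating a payments API" claim concrete, here is a hedged sketch of what an image-scan call to an API like L1ghtning might look like. The endpoint URL, request fields, and response shape below are assumptions for illustration only; the actual contract is defined in the API documentation.

```python
import requests

API_URL = "https://api.example.com/v1/scan"  # placeholder endpoint, not the real one
API_KEY = "YOUR_API_KEY"

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "content_type": "image",
        "url": "https://example.com/uploads/photo.jpg",
        "categories": ["csam", "adult", "hate", "bullying"],  # assumed category names
    },
    timeout=30,
)
response.raise_for_status()
result = response.json()

# Assumed response shape: a list of per-category labels, each with a "flagged" field.
if any(label.get("flagged") for label in result.get("labels", [])):
    pass  # route to human moderation and file an NCMEC report where required
```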

L1ghtning Strikes

Image & Video Analysis

L1ghtning is used to analyze millions of images per month for both known & un-hashed CSAM.

Website Analysis

Millions of websites are scanned every month with L1ghtning for CSAM, adult, and other illegal content.

Messaging Analysis

L1ghtning integration scans in-app messaging for abusive language and behavior.