A CSAM Primer for Trust & Safety Professionals

L1ght Keeps CSAM Out of Search Results on Brave’s Search Engine

“L1ght’s Anti-Toxicity AI helps us maintain search result integrity that keeps Brave users safe by identifying and removing online toxicity without impacting privacy.”

Josep M. Pujol, Chief of Search, Brave

A New Approach to CSAM Detection


[Chart: detection coverage of known vs. unknown CSAM, comparing hash-based matching (10 - 30%), L1ght AI, and L1ght AI + Moderation]

Contact us for integration details and to schedule a demo.

The spread of Child Sexual Abuse Material (CSAM) is a leading concern for Trust & Safety teams working to eliminate toxic content from their platforms and uphold company policies. The most common practice for identifying CSAM has been comparing hashes against a central database of known CSAM.

L1ght brings a new and thorough approach that lets you act on CSAM immediately without waiting for someone else to report it.

L1ght’s CSAM detection technology is currently used by US law enforcement.

The Challenges With Hashes

A hashing algorithm produces a compact digital fingerprint of an image, video, or other file that can be used to determine whether two files are identical or similar.

Similar to fingerprints, hashes are unique to the content they represent. This creates challenges for identifying CSAM: any slight change to the content, such as cropping, resizing, or re-encoding, will prevent the hash from matching.
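The fragility described above is easy to demonstrate with an exact (cryptographic) hash. This is a minimal sketch using SHA-256 as a stand-in for the hashing step; the byte strings are placeholders, not real image data. (Perceptual hashes used in practice tolerate some transformations, but can still be defeated by larger edits.)

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Exact hash: any change to the input yields a completely different digest."""
    return hashlib.sha256(data).hexdigest()

# Stand-in for an image file's bytes.
original = b"\x89PNG...image bytes..."
# A single-byte change, e.g. an artifact of re-encoding or cropping.
modified = original[:-1] + b"\x00"

h1 = sha256_hex(original)
h2 = sha256_hex(modified)

# The digests no longer match, so a lookup against a database of known
# hashes would miss the modified copy entirely.
assert h1 != h2
```

This is why hash matching only catches already-reported, unmodified copies of known CSAM, and misses new or altered material.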

L1ght’s Machine

L1ght’s anti-toxicity AI helps protect sites, apps, and platforms from CSAM by continuously analyzing each image and video with state-of-the-art Machine Learning (ML) micro-classifiers.
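One common way to turn several independent classifier scores into a single verdict is to flag content whenever any one classifier is sufficiently confident. This is a hypothetical sketch of that pattern; the classifier names, scores, and threshold are illustrative assumptions, not L1ght's actual models or logic.

```python
from typing import Dict

def combine_scores(scores: Dict[str, float], threshold: float = 0.8) -> bool:
    """Flag content if any single micro-classifier score crosses the threshold.

    `scores` maps a classifier name to its confidence in [0, 1].
    """
    return max(scores.values()) >= threshold

# Illustrative per-frame scores from three hypothetical micro-classifiers.
frame_scores = {"nudity": 0.12, "minor_present": 0.91, "scene_context": 0.40}
flagged = combine_scores(frame_scores)  # True: one classifier is confident
```

Unlike hash matching, this kind of content analysis can surface previously unseen ("unknown") material, at the cost of needing a review step for borderline scores.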

Introducing L1ghtning:
The Trust & Safety API

Create AI Guardrails to Curb Online Toxicity on Your Platform.

Implement brand protection, reduce user churn, and enhance trust & safety with 90+ models that analyze text, photos, and videos.

L1ghtning For Web Hosting Providers

Scan your hosted sites for problematic and illegal content, images and videos, including: Adult, Hate & CSAM.

Our Work Saves Children.

L1ght’s analysts, engineers and data scientists continuously work with technology companies and law enforcement to identify and bring child predators to justice.

L1ght Helps Facebook Remove 120,000 Child Predators from WhatsApp

Front-page story in the Financial Times

L1ght Helps Bing Remove CSAM from Search Results

It started with an anonymous tip to TechCrunch


The L1ghtning API:

Taxonomic Classification

Web Services

Learn how we combine our product philosophy with our approach to data science as we create and expose an ever-growing, breathing taxonomy of Online Toxicity.

Integrating anti-toxicity AI should be as easy as integrating a payments API.

Use our structured API or customize our 90+ models to identify, flag & respond to:

Bullying, Harassment, Predatory Behavior, Hate, Self-Harm, Grooming, CSAM.

NCMEC & Law Enforcement Reporting Included.
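To make the "as easy as a payments API" idea concrete, here is a hypothetical sketch of what a structured request and response-routing step for a moderation API like L1ghtning might look like. The endpoint fields, model names, score threshold, and action labels are all illustrative assumptions, not the documented L1ghtning schema.

```python
import json

def build_request(content_url: str, content_type: str) -> str:
    """Serialize a hypothetical moderation request as JSON."""
    payload = {
        "content_url": content_url,
        "content_type": content_type,           # e.g. "image", "video", "text"
        "models": ["csam", "hate", "bullying"], # illustrative subset of the 90+ models
    }
    return json.dumps(payload)

def route(response: dict) -> str:
    """Map a hypothetical classification response to a moderation action."""
    labels = {r["label"] for r in response["results"] if r["score"] >= 0.8}
    if "csam" in labels:
        return "remove_and_report"  # e.g. trigger an NCMEC report
    if labels:
        return "queue_for_review"
    return "allow"

req = build_request("https://example.com/upload.jpg", "image")
action = route({"results": [{"label": "csam", "score": 0.95}]})
```

The point of the sketch is the shape of the integration: one structured call out, one routing decision back, with the reporting obligation handled as just another branch.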

L1ghtning Strikes

Image & Video Analysis

L1ghtning is used to analyze millions of images per month for both known & un-hashed CSAM.

Website Analysis

Millions of websites are scanned every month for CSAM, adult, and illegal content using L1ghtning.

Messaging Analysis

L1ghtning integration scans in-app messaging for abusive language and behavior.