Introducing L1ghtning:
The Trust & Safety API

Create AI Guardrails to Curb Online Toxicity on Your Platform.

Implement brand protection, reduce user churn, and enhance trust & safety with 90+ models that analyze text, photos, and videos.

L1ghtning For Web Hosting Providers

Scan your hosted sites for problematic and illegal content in pages, images, and videos, including adult content, hate, and CSAM.

L1ght Surpasses Google, AWS & IBM on Sexual Content Image Recognition

Data Sets: Pexels, INRIA Person Dataset, Human Action Classification, Adultlabs

Our Work Saves Children.

L1ght’s analysts, engineers and data scientists continuously work with technology companies and law enforcement to identify and bring child predators to justice.

L1ght Helps Facebook Remove 120,000 Child Predators from WhatsApp

Front-page story in the Financial Times

L1ght Helps Bing Remove CSAM from Search Results

It started with an anonymous tip to TechCrunch

Latest News

November 19, 2020

L1ght raises $15 million for AI that protects children from online toxicity

It’s nearly impossible to monitor massive platforms manually, which is why automation and AI are playing increasing roles in the gatekeeping process.
November 19, 2020

Doing Good Is No Longer Just For Nonprofits — It’s Also Tech Companies’ Responsibility

Businesses tend to do best when built around specific problems, child abuse included. L1ght, which uses AI to stop people from harming kids with toxic content online, has found toxic content hosted on GIPHY, CloudFlare, Bing, and more.
November 18, 2020

Content Moderators Alone Can’t Clean Up Our Toxic Internet

Leaders of popular online platforms have too often failed to adequately support moderators facing such emotionally overwhelming content.
November 18, 2020

As Children Spend More Time Online, Predators Follow

Reports of online child exploitation have risen since the start of the coronavirus pandemic.

The L1ghtning API:

Taxonomic Classification

Web Services

Learn how we combine our product philosophy with our approach to data science as we create and expose an ever-growing, living taxonomy of online toxicity.

Integrating anti-toxicity AI should be as easy as integrating a payments API.

Use our structured API or customize our 90+ models to identify, flag & respond to:

Bullying, Harassment, Predatory Behavior, Hate, Self-Harm, Grooming, CSAM.

NCMEC & Law Enforcement Reporting Included.
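As a sketch of what "as easy as a payments API" could mean in practice, the snippet below builds a structured classification request and flags categories above a score threshold. The endpoint URL, payload shape, field names, and response format are all hypothetical illustrations, not L1ght's documented API.

```python
import json

# Placeholder endpoint -- NOT L1ght's real API URL.
API_URL = "https://api.example.com/v1/classify"

def build_request(text, categories):
    """Assemble a hypothetical classification request asking the service
    to score the given text against selected toxicity categories."""
    return {
        "content": {"type": "text", "body": text},
        "categories": categories,
    }

def flag_response(response, threshold=0.8):
    """Return the labels whose model score meets or exceeds the threshold,
    i.e. the categories a platform might act on."""
    return [c["label"] for c in response["scores"] if c["score"] >= threshold]

payload = build_request(
    "example message", ["bullying", "harassment", "hate", "self_harm"]
)
print(json.dumps(payload, indent=2))

# A mocked response standing in for the real service:
sample = {"scores": [{"label": "bullying", "score": 0.93},
                     {"label": "hate", "score": 0.12}]}
print(flag_response(sample))  # ['bullying']
```

In a real integration, the payload would be POSTed to the provider's endpoint and the threshold tuned per category to balance false positives against missed abuse.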

L1ghtning Strikes

Image & Video Analysis

L1ghtning analyzes millions of images per month for both known and previously un-hashed CSAM.

Website Analysis

L1ghtning scans millions of websites every month for CSAM, adult, and illegal content.

Messaging Analysis

L1ghtning integration scans in-app messaging for abusive language and behavior.