The Trust & Safety API
Create AI Guardrails to Curb Online Toxicity on Your Platform.
Protect your brand, reduce user churn, and enhance trust & safety with 90+ models that analyze Text, Photos, and Videos.
L1ghtning For Web Hosting Providers
Scan your hosted sites for problematic and illegal content, images, and videos, including adult material, hate speech, and CSAM.
L1ght Surpasses Google, AWS & IBM on Sexual Content Image Recognition
Data Sets: Pexels, INRIA Person Dataset, Human Action Classification, Adultlabs
Our Work Saves Children.
L1ght’s analysts, engineers and data scientists continuously work with technology companies and law enforcement to identify and bring child predators to justice.
L1ght Helps Facebook Remove 120,000 Child Predators from WhatsApp
Front-page story in the Financial Times
L1ght Helps Bing Remove CSAM from Search Results
It started with an anonymous tip to TechCrunch
The L1ghtning API:
Learn how we combine our product philosophy with our approach to data science as we create and expose an ever-growing, breathing taxonomy of Online Toxicity.
Integrating anti-toxicity AI should be as easy as integrating a payments API.
Use our structured API or customize our 90+ models to identify, flag & respond to:
Bullying, Harassment, Predatory Behavior, Hate, Self-Harm, Grooming, CSAM.
NCMEC & Law Enforcement Reporting Included.
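To make the "as easy as a payments API" idea concrete, here is a minimal sketch of what calling such a content-analysis API might look like. Note that the endpoint, field names, labels, and response shape below are all illustrative assumptions, not L1ght's actual API schema:

```python
# Hypothetical sketch only: the endpoint, payload fields, and labels are
# illustrative assumptions, not L1ghtning's real API contract.
API_URL = "https://api.example.com/v1/analyze"  # placeholder endpoint

def build_request(text: str, models: list[str]) -> dict:
    """Build a JSON-serializable payload asking selected models to analyze a message."""
    return {"content": {"type": "text", "body": text}, "models": models}

def flagged_labels(response: dict, threshold: float = 0.8) -> list[str]:
    """Return the toxicity labels whose confidence score exceeds the threshold."""
    return [
        r["label"]
        for r in response.get("results", [])
        if r.get("score", 0.0) >= threshold
    ]

# Demonstrated with a mocked response, since no real network call is made here:
payload = build_request("example message", ["bullying", "hate"])
mock_response = {
    "results": [
        {"label": "bullying", "score": 0.93},
        {"label": "hate", "score": 0.12},
    ]
}
print(flagged_labels(mock_response))  # → ['bullying']
```

The shape of the workflow is the point: one structured request per piece of content, one structured response listing the toxicity categories it triggered.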
Image & Video Analysis
L1ghtning is used to analyze millions of images per month for both known & un-hashed CSAM.
Millions of websites are scanned every month for CSAM, adult, and illegal content using L1ghtning.
L1ghtning integration scans in-app messaging for abusive language and behavior.