TaskUs Announces New Strategic Partnership & Investment in L1ght

Dial Your Content Moderation Spend to Match Trust & Safety Policies

[Interactive dial graphic: Toxicity vs. Cost]

A Harmful Content Primer for Trust & Safety Professionals

L1ght Keeps Harmful Content Out of Search Results on Brave’s Search Engine

“L1ght’s Anti-Toxicity AI helps us maintain search result integrity that keeps Brave users safe by identifying and removing online toxicity without impacting privacy.”

Josep M. Pujol, Chief of Search, Brave

World-Class Toxicity Detection

Method                | Known Harmful Content | Unknown Harmful Content
----------------------|-----------------------|------------------------
Hash matching         | 10–30%                | 0%
L1ght AI              | 85%                   | 85%
L1ght AI + Moderation | 97%                   | 97%

AVAILABLE NOW FOR: SOCIAL PLATFORMS, BROWSERS, SEARCH ENGINES

Contact us for integration details and to schedule a demo.

The Challenges of Harmful Content

Protecting online communities from harmful content and behavior is a growing challenge for Trust & Safety teams.

L1ght’s content moderation technologies proactively identify harmful content in real time.

L1ght was built for moderating user-generated content (UGC).

L1ght’s Content Moderation AI

L1ght’s content moderation AI helps Trust & Safety teams continuously monitor and analyze text, images and videos by running state-of-the-art Machine Learning micro-classifiers.
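To make the micro-classifier idea concrete: many small, single-purpose models each score one harm category, and their verdicts are combined into a moderation decision. The sketch below is an illustrative toy, with keyword matchers standing in for real ML models; the classifier names, categories, and thresholds are assumptions, not L1ght's implementation.

```python
# Conceptual sketch of a micro-classifier pipeline. The keyword matchers
# below stand in for trained ML models; names, categories, and thresholds
# are illustrative assumptions, not L1ght's actual system.
from typing import Callable, NamedTuple

class Verdict(NamedTuple):
    label: str
    score: float  # 0.0 (benign) .. 1.0 (harmful)

def make_keyword_classifier(label: str, keywords: set[str]) -> Callable[[str], Verdict]:
    """Build a tiny single-purpose classifier for one harm category."""
    def classify(text: str) -> Verdict:
        hits = sum(word in text.lower() for word in keywords)
        return Verdict(label, min(1.0, hits / 2))
    return classify

# One narrow model per category, mirroring the micro-classifier idea.
classifiers = [
    make_keyword_classifier("bullying", {"loser", "worthless"}),
    make_keyword_classifier("self-harm", {"hurt myself"}),
]

def moderate(text: str, threshold: float = 0.5) -> list[Verdict]:
    """Run every micro-classifier; return the categories that fire."""
    return [v for c in classifiers if (v := c(text)).score >= threshold]

print(moderate("you are a worthless loser"))  # -> [Verdict('bullying', 1.0)]
```

In production, each micro-classifier would be a trained model rather than a keyword list, which is what lets detection extend beyond hash matching to previously unseen content.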

Introducing L1ghtning: A Trust & Safety Moderation Platform

Create AI Guardrails Based on Trust & Safety Policies.

Implement brand protection, reduce user churn, and enhance trust & safety with 99+ models that analyze text, photos, and videos.

L1ghtning For Web Hosting Providers

Scan your hosted sites for problematic and illegal content, images and videos.
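By way of illustration, a hosting provider could drive such scans from a simple batch job. The endpoint URL, authentication scheme, and request/response shapes below are assumptions made for the sketch, not the documented L1ghtning API.

```python
# Hypothetical batch scan of hosted sites. SCAN_URL, the auth header, and
# the payload fields are illustrative assumptions, not L1ght's documented API.
import requests

SCAN_URL = "https://api.l1ght.example/v1/scan"  # hypothetical endpoint

def scan_sites(site_urls: list[str], api_key: str) -> dict[str, dict]:
    """Submit each hosted site for a content scan and collect the reports."""
    reports = {}
    for url in site_urls:
        resp = requests.post(
            SCAN_URL,
            headers={"Authorization": f"Bearer {api_key}"},
            json={"url": url},  # assumed request shape
            timeout=30,
        )
        resp.raise_for_status()
        reports[url] = resp.json()  # assumed report shape
    return reports
```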

The L1ghtning API: Taxonomic Classification & Web Services

Learn how we combine our product philosophy with our approach to data science as we create and expose an ever-growing, living taxonomy of online toxicity.

Integrating anti-toxicity AI should be as easy as integrating a payments API.

Use our structured API or customize our 90+ models to identify, flag, and respond to:

Bullying, Harassment, Predatory Behavior, Hate, Self-Harm, and Grooming. (A minimal API sketch follows below.)

NCMEC & Law Enforcement Reporting Included.
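A minimal integration sketch, assuming a hypothetical REST endpoint and response shape; the real L1ghtning API contract may differ, so treat the URL, fields, and labels below as placeholders.

```python
# Hypothetical text-classification call. The endpoint, request body, and
# "labels" response field are assumptions; consult L1ght for the real contract.
import requests

API_URL = "https://api.l1ght.example/v1/classify"  # hypothetical endpoint

def flag_message(text: str, api_key: str, threshold: float = 0.8) -> list[str]:
    """Return the taxonomy labels (e.g. bullying, grooming) above threshold."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"content_type": "text", "content": text},
        timeout=10,
    )
    resp.raise_for_status()
    scores = resp.json().get("labels", {})  # assumed shape: {"hate": 0.91, ...}
    return [label for label, score in scores.items() if score >= threshold]
```

Flagged labels could then be routed to human moderators or, where legally required, into NCMEC and law enforcement reporting workflows.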

L1ghtning Strikes

Image & Video Analysis

L1ghtning is used to analyze millions of images per month for both known (hashed) and previously unseen harmful content.

Website Analysis

L1ghtning scans millions of websites every month for adult and illegal content.

Messaging Analysis

L1ghtning integrates with in-app messaging to scan conversations for abusive language and behavior.