L1ght Keeps CSAM Out of Search Results on Brave’s Search Engine

“L1ght’s Anti-Toxicity AI helps us maintain search result integrity that keeps Brave users safe by identifying and removing online toxicity without impacting privacy.”
Josep M. Pujol, Chief of Search, Brave
Keeping search results free of toxic material is challenging:
Ensuring search results are free of toxic material requires search providers to maintain an up-to-date set of toxic search terms; manually curated lists based on user reports cannot keep pace. The rate at which this material is created and tagged requires an automated, ML-based approach that can identify and correlate new toxic terms in near real time.
Over the past several months, Brave has leveraged L1ght’s Anti-Toxicity AI to detect toxic URLs, hashtags, images, and text across a full spectrum of toxicity issues, including CSAM, NSFW material, and hate speech.
With L1ght, Brave is able to block toxic material with greater precision and coverage, rather than instituting broad content policies, an approach that would run contrary to Brave’s view of an open and uncensored Web.
Here’s how L1ght helps ensure search result integrity:
First, L1ght prepares batches of terms based on specifications determined by Brave, such as filter levels and country-specific policies for the US, UK, Germany, France, and Canada.
L1ght then performs autocomplete extractions as well as queries against Brave’s search API. Finally, L1ght assesses these results with its Anti-Toxicity classifiers to determine whether toxic material, such as URLs and images, should be flagged for Brave to omit from its search results.
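The batch workflow above can be sketched in a few lines. This is a minimal illustration only: every function name, expansion rule, and threshold below is a hypothetical stand-in, not L1ght’s or Brave’s actual API.

```python
# Hypothetical sketch of a term-expansion and classification pipeline.
# All names and values are illustrative assumptions.

def autocomplete_expansions(term):
    # Stand-in for querying a search engine's autocomplete endpoint.
    return [f"{term} images", f"{term} video"]

def search_results(query):
    # Stand-in for a call to a search API returning candidate URLs.
    return [f"https://example.com/{query.replace(' ', '-')}"]

def toxicity_score(url):
    # Stand-in for an anti-toxicity classifier; returns a score in [0, 1].
    return 0.02

def classify_batch(terms, threshold=0.5):
    """Expand each seed term, fetch results, and flag URLs scoring
    at or above the threshold for omission from search results."""
    flagged = []
    for term in terms:
        for query in [term] + autocomplete_expansions(term):
            for url in search_results(query):
                if toxicity_score(url) >= threshold:
                    flagged.append(url)
    return flagged

print(classify_batch(["example term"]))
```

In practice the expansion, retrieval, and scoring stages would each be real service calls, but the overall shape — seed terms in, flagged URLs out — matches the process described above.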
To illustrate scale and efficacy: L1ght has conducted over 9,000,000 such classifications, with roughly 1% flagged as toxic and designated for removal.
Brave is a leading web browser and search engine that puts privacy first.
CHALLENGE
Ensuring search results are free of toxic material while upholding Brave’s view of an open and uncensored Web
SOLUTION
L1ght’s Anti-Toxicity AI to detect CSAM, NSFW and hate by country-specific policies
HEADQUARTERS
San Francisco, California