About Us

How It All Started

Back in 2018, we noticed our growing kids were spending more and more time gaming and consuming social content online. Putting on our “responsible parents” goggles, we soon identified a major gap between the harmful online behavior our kids were exposed to and the technologies that could protect them from it.

As we formed our founding team, purposefully with a heavy dollop of cybersecurity backgrounds, we realized that the company’s technological aspirations had the potential to create safer online environments not only for children, but for adults as well. That’s when the company formalized its simple, naive, yet instructive mission:

Make the Internet safer and more inclusive for all — irrespective of age, race, sexual orientation, or affiliation.

As we began to build out our technologies and pilot them with customers and law enforcement authorities, it became clear that rudimentary “privacy” settings and pure human moderation were no match for the exponential growth and scale of UGC across social platforms and products. Harmful content, along with the ability of threat actors, bullies, and predators to flourish in such environments, was making them increasingly unsafe, particularly for younger audiences.

Modern Trust & Safety solutions must be more than simple moderation tools. Rather, they must be sophisticated AI behavioral analysis and monitoring systems with cybersecurity sensibilities baked in.

As we began rolling out products that help identify harmful content and behavior across text, images, and video for companies that put an emphasis on safe environments for their users and customers, L1ght’s expertise became clear: we were becoming world-class experts in identifying, analyzing, and translating harmful human behavior into code.

Today, with a team of 25, a couple of rounds of financing, and a lot (and we mean A LOT) of R&D, L1ght is making it easy and sensible to add moderation AI to any site, service, and app.

Put simply, wherever people engage with each other, we want to help ensure there are guardrails that keep it all safe and free of toxic behavior.

Reclaiming the Internet by making it a safe place for all is not a modest mission, but we believe our team is making a real dent.

Our Specs

The Company Was Founded


Patents Pending


Employees Working Tirelessly


Funding Raised

Based Out Of



Child Predators Removed From WhatsApp


Members of These Associations

Meet Our Team


Avner Sakal

Ron Porat

Founder & CTO
Roi Carthy

Yaakov Schwartzman

Head of Innovation
Yuval Cohen

Head of R&D
Dorit Zilberbrand, PhD

Head of Data Science

Mira Maman


Caroline Fernandes

Head of Sales

Udi Porat

Director of Customer Service & Support
Doron Habshush

Head of Research
Rinat Kiperman

Analysts Team Lead

Shai Levi

Senior Developer
Hezi Stern

Head of Product
Eitan Brown

Data Scientist
Alon Gur

Innovation Developer
Guy Heller

Office Manager
Shira Reuveny


Oren Yanay

QA & Automation Developer
Ilana Kalnitsky

Data Analyst

Keren Guezentsvey

Data Analyst
Yoav Landau

Data Analyst
Maya Cabel

Data Analyst

Regina Tservil

Research & Data Analyst

Osnat Rein

Data Analyst
Imbar Cohen

Data Analyst
Noga Mindlin

Data Analyst
Yonatan Hacohen