Mercor, a $10 billion startup that generates proprietary training data for the biggest names in AI, has confirmed a major security breach. Meta has indefinitely paused all work with the company, and OpenAI is actively investigating the incident's impact on its own data.

What Happened

The breach traces back to the TeamPCP supply-chain attack on LiteLLM, the popular open-source library for routing AI API calls. Compromised versions of LiteLLM gave attackers access to Mercor's systems, potentially exposing closely guarded details about how top AI labs train their models. An attacker claiming the Lapsus$ name has posted alleged Mercor data online, including a database of more than 200GB, nearly 1TB of source code, and 3TB of video recordings of conversations between Mercor's AI systems and its contractor workforce.
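A standard defense against this class of attack is refusing to use a downloaded dependency unless it matches a known-good checksum. The sketch below shows the general idea in Python; the file path and expected hash are hypothetical placeholders for illustration, not values from the LiteLLM incident.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path: str, expected_hash: str) -> bool:
    """Raise if the file's digest does not match the pinned hash."""
    actual = sha256_of(path)
    if actual != expected_hash:
        raise RuntimeError(f"hash mismatch for {path}: got {actual}")
    return True

# Hypothetical usage -- the hash would come from a trusted lockfile:
# verify_artifact("litellm-1.0.0-py3-none-any.whl", "<pinned sha256>")
```

Package managers offer the same protection natively: pip's hash-checking mode (`pip install --require-hashes -r requirements.txt`) rejects any artifact whose digest differs from the one pinned in the requirements file, so a tampered release fails to install rather than silently running.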

Why It Matters

Mercor sits at the center of the AI industry's most sensitive supply chain. The company hires massive contractor networks to build bespoke datasets for OpenAI, Anthropic, and Meta - data these labs treat as core trade secrets. Exposure could reveal training methodologies and give competitors, including Chinese AI labs, critical insight into frontier model development.

Contractor Fallout

Mercor contractors staffed on Meta projects - including "Chordus," an initiative teaching AI models to verify responses using multiple internet sources - have been told they cannot log hours until further notice. The company is scrambling to reassign affected workers to other projects.

Mercor confirmed the attack in an internal email on March 31, calling it part of an incident that "affected thousands of organizations worldwide." The breach underscores how a single compromised open-source dependency can cascade through the AI industry's interconnected supply chain.