AI Slop Is Breaking Open Source: Tech Giants Pledge $12.5M to Fix It
AI coding tools have quietly become one of open source's biggest problems, and the companies that built them are now paying to clean up the damage.
cURL's Bug Bounty Breaks Down
In January 2026, Daniel Stenberg, creator of the ubiquitous cURL library used by billions of devices worldwide, shut down the project's bug bounty program on HackerOne. The reason wasn't funding; it was volume. By mid-2025, fewer than 5% of submitted reports were legitimate. The rest were what Stenberg bluntly called "AI slop": long, confident, and entirely fabricated vulnerability reports generated by LLMs.
At FOSDEM 2026 in Brussels, Stenberg went further: "The open source ecosystem is being DDoSed by AI."
cURL isn't alone. Terminal emulator Ghostty banned all AI-generated contributions outright. Across major repositories, PR volume has jumped roughly 40% while merge rates have fallen: maintainers now spend more time rejecting generated noise than reviewing real work.
The Bigger Picture
The problem runs deeper than spam. AI coding agents consume open source libraries at industrial scale but skip the engagement loop that has always sustained maintainers: documentation visits, genuine bug reports, community presence. Tailwind CSS saw npm downloads climb while documentation traffic dropped 40% and revenue fell 80%.
A $12.5M Response
On March 17, 2026, the Linux Foundation announced $12.5 million in grants from Anthropic, AWS, GitHub, Google, Google DeepMind, Microsoft, and OpenAI. The funding will flow through Alpha-Omega and the Open Source Security Foundation (OpenSSF), with the aim of helping maintainers triage the surge in AI-generated security reports.
Greg Kroah-Hartman, Linux kernel maintainer, noted that money alone won't solve it: what's needed is active tooling support for the humans still keeping critical infrastructure alive.
The funding is a start. Whether it's enough to keep the ecosystem running is a different question.