70% of Software Teams Say AI Is Hurting Code Quality
A new study from software testing company SmartBear surveyed 273 software quality decision-makers in January 2026 and found AI coding adoption nearly universal — but satisfaction with the results is not.
The Numbers
93% of teams surveyed have adopted AI coding tools, and 40% of teams now generate more than 41% of their code with AI — a figure respondents expect to reach 60% within 12 months as tools like Cursor, Claude Code, and GitHub Copilot become standard.
The problem: testing hasn't kept up. 70% of respondents say they are concerned application quality is already suffering. 60% have experienced actual quality issues in the past year from development outpacing testing capacity. 68% worry that faster AI-driven development will create testing bottlenecks they can't clear.
Although 87% of teams have some test automation in place, 92% still test manually — suggesting existing automation pipelines weren't designed for the volume and velocity of AI code generation.
The Confidence Gap
Perhaps the most striking finding: 65% of respondents believe their leadership doesn't fully recognize the testing risks AI introduces. The same share reports under-investment in application-level testing.
Developers are shipping faster but accumulating quality debt they may not be able to see yet. The industry appears to be responding: 97% of surveyed organizations plan to increase testing investment in 2026, with 86% raising budgets by 11% or more.
The study reinforces a pattern showing across the industry: AI accelerates output, but the responsibility for verifying that output has not been automated at the same rate.