A recent study by the Model Evaluation and Threat Research (METR) organization has uncovered surprising results regarding the impact of AI coding tools on the productivity of experienced developers working on mature open-source projects.
Contrary to expectations, the research found that developers took 19% longer to complete tasks when using AI, even though they believed AI had accelerated their work by 20%.
The study employed a rigorous methodology: 16 developers from prominent open-source projects tackled 246 real-world tasks, each randomly assigned to either an "AI-allowed" or "AI-disallowed" condition. The projects averaged more than a decade of development history and contained over a million lines of code, providing a complex and representative environment for testing.
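The random assignment at the heart of this design can be sketched in a few lines. The 50/50 split, the seed, and the use of integer task IDs are illustrative assumptions, not details from the study itself:

```python
import random

def assign_conditions(task_ids, seed=0):
    """Randomly assign each task to an 'AI-allowed' or 'AI-disallowed' condition.

    The uniform 50/50 split and fixed seed are illustrative assumptions;
    the study's exact randomization procedure may differ.
    """
    rng = random.Random(seed)
    return {task: rng.choice(["AI-allowed", "AI-disallowed"]) for task in task_ids}

# Hypothetical example: 246 tasks, as in the study.
assignments = assign_conditions(range(246))
```

Randomizing at the task level, rather than the developer level, lets each participant serve as their own control, which is what makes the within-subject speed comparison possible.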
Key issues emerged during the experiment. AI-generated code often failed to meet the "high standards" of these mature projects, leading developers to spend significant time reviewing and correcting outputs — only 39% of AI suggestions were accepted.
Frequently, developers had to rewrite code entirely after multiple unsuccessful AI attempts, negating potential time savings. The study suggests that modern AI tools excel with small, well-defined, "greenfield" projects but struggle with large codebases requiring deep contextual understanding and implicit project knowledge.
A critical finding is the discrepancy between perception and reality: despite working 19% slower with AI, developers subjectively felt a 20% speedup. This raises doubts about the reliability of self-reported data in many AI tool effectiveness reports, highlighting the need for controlled experiments to assess true impact.
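To make the perception gap concrete, consider a toy calculation with an assumed baseline (the 100-minute figure is illustrative, not from the study): a task taking 100 minutes without AI would take about 119 minutes under a 19% measured slowdown, while a perceived 20% speedup corresponds to an estimate of roughly 80 minutes.

```python
baseline_minutes = 100               # assumed time without AI (illustrative)
measured = baseline_minutes * 1.19   # 19% measured slowdown
perceived = baseline_minutes * 0.80  # 20% perceived speedup

# Gap between belief and measurement, as a fraction of actual time taken:
gap = (measured - perceived) / measured
print(f"measured: {measured:.0f} min, perceived: {perceived:.0f} min, gap: {gap:.0%}")
```

Under these assumptions, developers would be misjudging their own pace by roughly a third of the actual time spent, which is why the article argues controlled measurement matters more than self-report.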
However, an alternative explanation emerges. Experienced developers may have demanded code revisions when AI outputs didn’t align with their preferred style or perceived efficiency, even if the code functioned correctly.
This behavior could imply that, in real-world scenarios, these developers expend more time and effort on tasks than strictly necessary: without AI they might be slower still, while with AI a perceived 20% speedup was offset by a measured 19% net slowdown.
This nuanced insight challenges the narrative around AI’s universal productivity benefits and underscores the importance of context in its application. The full study details are available at the source, offering a deeper look into this evolving debate.
Source: https://secondthoughts.ai/p/ai-coding-slowdown

