AI-powered coding tools (Copilot, Cursor, Claude Code, etc.) are no longer an "optional toy" for software teams; they are part of the core toolset.
A comprehensive report published recently, based on data from tens of thousands of developers worldwide, reveals an important truth: AI enables teams to ship more code, but code quality does not improve at the same pace. (AI Native Dev)
Speed: Developers are indeed getting more work done
In measurements conducted by the developer analytics company DX across more than 135,000 developers and hundreds of companies, the usage rate of AI coding tools has risen above 90%. (AI Native Dev) The vast majority of developers state that they use these tools regularly and report time savings of 3–4 hours per week on average; this figure is roughly double that of the previous year. (AI Native Dev)
This is not just perception. The same study shows that developers who use AI tools daily merge approximately 60% more pull requests than those who never use these tools (median 2.3 PRs vs. 1.4 PRs). (AI Native Dev) In short, AI-assisted teams are simply shipping more.
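As a quick sanity check on the medians quoted above (the only inputs are the two figures from the study; nothing else is assumed), the relative increase works out to roughly 60%:

```python
# Median merged PRs per week, as reported in the DX data cited above.
daily_ai_users = 2.3   # developers who use AI tools daily
non_users = 1.4        # developers who never use them

# Relative increase in merged pull requests.
increase_pct = (daily_ai_users / non_users - 1) * 100
print(f"{increase_pct:.0f}% more merged PRs")  # prints "64% more merged PRs"
```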
So, how much of this code is truly written by AI?
We frequently see claims in the media along the lines of "50% of our company's code is now written by AI." Executives at some large technology companies say that 30–50% of newly produced code is AI-contributed. (AI Native Dev)
However, DX's data paints a more cautious picture: in a sample of more than 34,000 developers, only 22% of merged code is considered to be "written by AI." Even among those who use AI every day, this rate only rises to 24%. (AI Native Dev)
In other words, AI is a powerful assistant in the development process, but not an “autopilot” that writes code from scratch on its own. The human touch is still decisive.
The Quality Side: Mixed, even contradictory signals
Here is the truly critical question: how does this acceleration translate into software quality?
The quality metrics examined by DX in the same report, such as Change Failure Rate, confidence in changes, and code maintainability, do not show a clear, unidirectional improvement. While some teams produce fewer errors and more maintainable code as they increase AI usage, other teams show exactly the opposite effect. For most companies, the change falls within a narrow band and is statistically small. (AI Native Dev)
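For teams that want to watch this signal themselves, Change Failure Rate is straightforward to compute from deployment records. The sketch below assumes a simple list of deployment outcomes; the data structure is a hypothetical illustration, not something defined in the report:

```python
def change_failure_rate(deployments: list[bool]) -> float:
    """Fraction of deployments whose change failed in production.

    `deployments` is a hypothetical record: True means the change failed
    (triggered an incident, rollback, or hotfix), False means it did not.
    """
    if not deployments:
        return 0.0
    return sum(deployments) / len(deployments)

# Example: 2 failing changes out of 10 deployments -> 20% CFR.
history = [False, True, False, False, False,
           True, False, False, False, False]
print(f"CFR: {change_failure_rate(history):.0%}")  # prints "CFR: 20%"
```

Tracking this number per quarter, before and after an AI rollout, is far more informative than tracking how many seats of a tool were activated.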
In summary:
- Throughput rises significantly,
- Quality shows a "mixed" picture that varies from company to company.
Among the fundamental reasons for this are factors such as training quality, how teams position AI, the complexity of the codebase, and the maturity of test and release processes. (AI Native Dev) Simply purchasing a license is not enough; where and how AI is integrated into the process is what proves decisive.
Practical implications for companies
Looking at this picture, several fundamental principles stand out for institutions when managing AI investments:
- Manage “impact,” not “adoption.” Do not track how many people use AI; track the change in metrics such as error rate, turnaround time, customer complaints, and rework rate.
- Pull AI into design and test processes. AI should be used not just to write code, but also to generate test scenarios, refactor code, and find risky areas. This improves your odds of maintaining quality as speed increases.
- Strengthen the code review culture. Make it explicit that AI-written code must pass through code review just like human-written code, perhaps with even more discipline.
- Take training and enablement seriously. Structured training programs, usage guides, example repositories, and clear usage policies are the key to capturing speed gains without letting them turn into quality losses.
- Create a security and governance framework. Make clear which AI tools may be used with which data, in which scenarios they are prohibited, and how usage will be logged.
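To make "manage impact, not adoption" concrete, here is a minimal sketch of the kind of before/after comparison a team might run each quarter. All metric names and numbers are illustrative assumptions for this sketch, not figures from the report:

```python
from dataclasses import dataclass

@dataclass
class QuarterMetrics:
    # Illustrative impact metrics; all names are assumptions for this sketch.
    change_failure_rate: float     # fraction of deployments causing incidents
    median_turnaround_days: float  # idea-to-merge time
    rework_rate: float             # fraction of PRs reworked within 30 days

def impact_report(before: QuarterMetrics, after: QuarterMetrics) -> dict[str, float]:
    """Relative change per metric. All three metrics are 'lower is better',
    so negative values indicate improvement."""
    return {
        name: (getattr(after, name) - getattr(before, name)) / getattr(before, name)
        for name in ("change_failure_rate", "median_turnaround_days", "rework_rate")
    }

# Hypothetical quarters before and after rolling out AI tooling.
before = QuarterMetrics(0.15, 4.0, 0.20)
after = QuarterMetrics(0.12, 3.0, 0.22)
for metric, delta in impact_report(before, after).items():
    print(f"{metric}: {delta:+.0%}")
```

A report like this (faster turnaround, lower failure rate, but more rework, for example) says far more about an AI investment than a tool-adoption percentage does.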
Conclusion: AI brings speed, we determine the quality
The data tells us this: AI tangibly increases the productivity of development teams. We write more code in less time, merge PRs faster, and make new starters productive sooner. (AI Native Dev)
However, there is no automatic guarantee of improvement in code quality, sustainability, or security. That part is still determined by team culture, architectural choices, test strategies, and how the company manages AI.
Therefore, perhaps the right way to put it is this: AI accelerates developers, but it does not replace good engineering practice; it amplifies it.
On the corporate side, what will make the real difference is not saying "We use AI"; it is embedding AI into development processes in a measurable, manageable, and secure way.