
AI code kinda sucks (says study)... How to write better code with AI



Introduction

A recent study by a code analysis firm has raised eyebrows about the impact of AI coding assistants such as GitHub Copilot and Claude on developer productivity and code quality. Many developers say these tools make them more productive, yet the study found no significant benefit on key programming metrics. This article digs into the findings and the experiences around AI coding assistants, and offers practical guidance on using AI to write better code.

Key Points from the Study

The study tracked roughly 800 developers over a three-month period after they adopted GitHub Copilot, comparing their output to their performance before using the tool. Although developers felt more efficient with a coding assistant, the metrics showed no significant gain. Productivity was measured by:

  • Pull Request (PR) Cycle Times: The time from opening a pull request to merging it into the repository.
  • Number of Pull Requests Merged: Raw merge counts do not necessarily equate to productivity; merging more PRs can also reflect a higher error rate (for example, extra bug-fix PRs). A rough sketch of how such metrics can be computed follows below.
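
For context, PR cycle time is typically derived from timestamps that any Git host exposes. The snippet below is a minimal sketch of the calculation, assuming a hypothetical list of PR records with created_at and merged_at fields; it illustrates the metric itself, not the study's actual methodology.

    from datetime import datetime
    from statistics import median

    # Hypothetical PR records; in practice these come from a Git host's API.
    prs = [
        {"created_at": "2024-05-01T09:00:00", "merged_at": "2024-05-02T15:30:00"},
        {"created_at": "2024-05-03T10:00:00", "merged_at": "2024-05-03T18:45:00"},
        {"created_at": "2024-05-04T08:00:00", "merged_at": None},  # still open
    ]

    def cycle_time_hours(pr):
        """Hours from opening a PR to merging it; None if it has not merged."""
        if pr["merged_at"] is None:
            return None
        opened = datetime.fromisoformat(pr["created_at"])
        merged = datetime.fromisoformat(pr["merged_at"])
        return (merged - opened).total_seconds() / 3600

    times = [t for pr in prs if (t := cycle_time_hours(pr)) is not None]
    print("PRs merged:", len(times))                    # 2
    print("Median cycle time (hours):", median(times))  # 19.625

Teams usually report the median rather than the mean here, because a handful of long-lived PRs can skew the average.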

Interestingly, the study noted that using GitHub Copilot led to a staggering 41% increase in bugs. This raises questions about the reliability of AI-generated code and whether it introduces more issues than it resolves. Just as surprising, developers reported no improvement in burnout, and in some cases reliance on AI for coding may even add stress.

The Role of Developer Experience

A crucial aspect of these findings is developer experience. Junior engineers may lean heavily on AI tools without fully understanding the underlying code, which can result in flawed PRs being submitted. Experienced developers, by contrast, often use AI to draft high-level code quickly but take responsibility for reviewing and verifying the generated output.

For instance, an indie hacker who needs specific code snippets or solutions to isolated problems may get far more value from AI tools than a developer working inside a large organization's complex codebase. This discrepancy may help explain the lack of meaningful productivity gains in the study.

Insights on Maximizing AI Use in Coding

To optimize the use of AI in coding, developers can employ the following strategies:

  1. Use Short, Specific Prompts: Rather than asking the AI to generate large swaths of code, prompt it for one specific piece of functionality at a time. Small outputs are easier to review and less likely to introduce bugs (see the first sketch after this list).

  2. Understand the Generated Code: Developers should invest time in understanding the code produced by AI. This not only helps in debugging but also enhances overall coding skills.

  3. Iterative Development with AI: Break projects down into manageable segments and use AI to assist with each specific part. This makes it easier to track changes, catch errors, and maintain code quality (see the second sketch after this list).

  4. Embrace AI as a Tool, Not a Replacement: AI can handle repetitive tasks, but it should supplement human expertise rather than replace the critical thinking and problem-solving that development requires.
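
To make strategies 1 and 2 concrete, a focused prompt such as "write a function that turns a blog post title into a URL slug" produces output small enough to read, understand, and verify in one sitting. The snippet below is a hypothetical illustration of that workflow, with a reviewer-written check added before the code is merged; the function name and behavior are assumptions for the example, not output from any particular tool.

    import re

    def slugify(title):
        """Turn a post title into a lowercase, URL-friendly slug."""
        slug = title.lower().strip()
        slug = re.sub(r"[^a-z0-9]+", "-", slug)  # collapse anything else into '-'
        return slug.strip("-")

    # Reviewer-written check: understand and verify the output before merging.
    assert slugify("AI code kinda sucks (says study)!") == "ai-code-kinda-sucks-says-study"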
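
Strategy 3 can look like the sketch below: rather than prompting for an entire feature, the work is split into small, independently testable steps, and the AI assists with one step at a time. The step breakdown and function names here are purely illustrative assumptions.

    # Step 1: ask the AI only for input parsing, then verify it in isolation.
    def parse_prices(raw):
        """Parse a comma-separated string of prices into floats."""
        return [float(p) for p in raw.split(",") if p.strip()]

    assert parse_prices("9.99, 4.50,") == [9.99, 4.5]

    # Step 2: with parsing verified, ask the AI for the aggregation step.
    def total_with_tax(prices, tax_rate=0.08):
        """Sum prices and apply a flat tax rate, rounded to cents."""
        return round(sum(prices) * (1 + tax_rate), 2)

    assert total_with_tax(parse_prices("9.99, 4.50")) == 15.65

Because each step is verified before the next prompt, an AI-introduced bug surfaces immediately instead of hiding inside one large, unreviewed change.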

Conclusion

While AI coding assistants promise substantial productivity improvements, this study suggests that expectations do not always match reality. Developers should think critically about AI-generated code and stay mindful of its limitations. By combining AI tools with sound coding practices, they can improve productivity without sacrificing code quality.


Keywords

AI coding assistants, GitHub Copilot, Claude, productivity, pull request cycle time, code quality, bugs, developer experience, burnout.


FAQ

What did the study find about the productivity of developers using AI coding assistants?
The study found no significant improvements in productivity for developers using AI coding assistants like GitHub Copilot. It also noted a 41% increase in bugs attributed to AI-generated code.

How did the study measure productivity?
Productivity was measured by analyzing pull request cycle times and the number of pull requests merged over a three-month period before and after the adoption of AI coding assistants.

Why might developers experience more bugs when using AI tools?
The study indicated that reliance on AI can lead to a higher introduction of bugs, possibly because developers may trust the AI-generated code without sufficient review or understanding.

Are junior developers more affected by AI coding assistants than senior developers?
Yes, junior developers may rely more on AI tools and lack the foundational understanding, which can lead to submitting flawed code. In contrast, senior developers typically leverage AI for drafting specific code while ensuring thorough review.

How can developers make better use of AI coding assistants?
To make better use of AI, developers should use focused prompts, understand the generated code, adopt an iterative approach to development, and view AI as a supplementary tool rather than a complete replacement for their skills.