
We rolled out AI to hundreds of developers. Here are the most surprising results a year later.

  • Date: Dec. 03, 2025

For the past few years, every leader I've spoken to has been grappling with the same question: What is the real, measurable impact of Generative AI on a development team? Two years ago, we decided to move past the hype and find the answer ourselves.

1. The adoption rate was near-perfect, but it wasn't magic.

After one year, our developer adoption rate for the new AI tool hit an impressive 97.14%. This figure wasn't the result of a top-down mandate or the tool simply being made available. It was the outcome of a deliberate, human-centric rollout strategy we called the "snowball effect."

This was an organizational effort, not just a technical one. It also required constant work with the developers, especially around training. That's why we designed a 4-month program to onboard them and help them master this new way of working. We began by creating a core team of 10-15 "allies": a mix of early adopters and key users who would champion the tool. We secured buy-in from the CIO to centrally manage and distribute licenses, which was critical for momentum. Finally, we aligned with Finance to monitor the benefits and with HR to understand the impact on employee experience and recruitment. This comprehensive approach proved that achieving near-perfect adoption is less about the technology itself and more about building a strong, cross-functional framework for change.


2. Senior developers reaped the biggest rewards.

One of the most common assumptions about AI coding assistants is that they primarily benefit junior developers by filling knowledge gaps. Our data showed the exact opposite. The tool's true power lies in augmenting existing expertise, making senior developers exponentially more effective.

Our study on Drupal teams provided a clear picture:

  • When working with basic templating code, junior developers achieved a respectable 15% time savings. Senior developers, however, achieved a 30% time savings on the exact same task.
  • The difference was even more pronounced for complex back-end and logic-based coding. Junior developers achieved a 25% time savings, while senior developers saw a massive 45% time savings.

This forced a fundamental shift in our strategy. Our hypothesis is that AI assistants don't just provide answers; they accelerate the workflow of those who already know how to ask the right questions. Seniors can more rapidly validate, reject, and refine AI-generated code, turning the tool into a true force multiplier for their existing expertise.
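To make the arithmetic concrete, here is a minimal Python sketch that blends these measured rates across a hypothetical team. Only the four percentages come from our study; the team composition and task mix are invented for the example.

```python
# Illustrative only: the savings rates below come from our Drupal study;
# the team composition and task mix are hypothetical assumptions.

SAVINGS = {  # fraction of time saved with the AI assistant
    ("junior", "templating"): 0.15,
    ("senior", "templating"): 0.30,
    ("junior", "backend"): 0.25,
    ("senior", "backend"): 0.45,
}

def expected_savings(team: dict, task_mix: dict) -> float:
    """Blended time savings for a team profile and a task mix.

    team: {"junior": head_count, "senior": head_count}
    task_mix: {"templating": share, "backend": share}, shares summing to 1
    """
    total = sum(team.values())
    return sum(
        (count / total) * share * SAVINGS[(level, task)]
        for level, count in team.items()
        for task, share in task_mix.items()
    )

# Hypothetical team: 6 juniors, 4 seniors; 40% templating, 60% back-end work.
rate = expected_savings({"junior": 6, "senior": 4},
                        {"templating": 0.4, "backend": 0.6})
print(f"Expected blended time savings: {rate:.1%}")  # -> 28.2%
```

Note how the blended figure shifts toward the senior rates as the senior head count grows: the same mechanism that made the tool a force multiplier in our teams.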


3. Expect a productivity plateau before the payoff.

Organizations expecting an immediate, hockey-stick-shaped productivity curve from day one will be disappointed. Our rollout revealed a more nuanced adoption curve. For about the first month, developer productivity showed an initial increase, followed quickly by a plateau. This wasn't due to time lost exploring the tool's potential, but primarily to a lack of change in working habits: the initial gain came mostly from autocomplete and code suggestions, with a dash of experimentation with chat.

Developers need time to experiment, build trust, and integrate a new way of working into their habits. The true potential is realized when developers shift their mindset about how they use the tools day to day and embrace a new way of working. This significant behavioral shift, and the real, measurable productivity gains that follow, only began to appear after a two-month exploration phase. We found that the full ramp-up required for a developer to become fully proficient and match the performance levels seen in our initial experiment was about four months. This is a critical lesson in managing expectations: leaders must budget for a learning curve, not a magic bullet.
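For illustration, that curve can be sketched as a simple piecewise model. The phases mirror what we observed (a quick autocomplete win, a plateau, a two-month behavioral ramp, proficiency at about month four), but the function below is a rough caricature rather than a fitted formula, and the full_gain parameter is an assumption.

```python
# Rough, illustrative model of the adoption curve described above.
# Breakpoints mirror our observed phases; exact values are not fitted.

def productivity_gain(month: float, full_gain: float = 0.30) -> float:
    """Estimated productivity gain (as a fraction) by months since onboarding."""
    if month < 1:                       # quick win: autocomplete and suggestions
        return 0.3 * full_gain * month
    if month < 2:                       # plateau: working habits unchanged
        return 0.3 * full_gain
    if month < 4:                       # behavioral shift: gains ramp up
        return full_gain * (0.3 + 0.7 * (month - 2) / 2)
    return full_gain                    # fully proficient around month four

for m in [0.5, 1, 2, 3, 4, 6]:
    print(f"month {m}: {productivity_gain(m):.0%}")
```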


4. Human bias is a bigger obstacle than technical integration.

In the process of evaluating tools, we uncovered a powerful human factor that is often overlooked: developer bias. We learned that simply asking developers which tool they prefer is not a reliable way to determine which tool is most effective. One of our most critical findings was this:

In our case, many of our developers love GitHub Copilot, even though it doesn't make them as productive as other tools do.

This highlights the critical danger of building a technology stack based on brand loyalty or developer polls alone. Without objective performance data, organizations risk investing in tools that are merely popular, not powerful. It underscores the absolute necessity of grounding technology choices in hard metrics rather than relying solely on subjective user preference.
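As a minimal sketch of the kind of sanity check we mean, the snippet below compares a preference poll against measured savings. The tool names and every number are invented for illustration; the point is only that the two rankings can disagree.

```python
# Illustrative only: preference votes and measured savings are invented
# numbers, included to show why the two rankings can diverge.

preference_votes = {"ToolA": 62, "ToolB": 25, "ToolC": 13}        # survey
measured_savings = {"ToolA": 0.18, "ToolB": 0.27, "ToolC": 0.22}  # time saved

by_popularity = max(preference_votes, key=preference_votes.get)
by_performance = max(measured_savings, key=measured_savings.get)

print(f"Most popular tool:   {by_popularity}")     # ToolA
print(f"Most effective tool: {by_performance}")    # ToolB
if by_popularity != by_performance:
    print("Preference and performance disagree: trust the measurement.")
```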


5. The ROI was huge, and the program was shockingly lean to manage.

The business impact of this initiative was clear and significant. In the first six months of 2025 alone, the program saved an estimated 1,760 man-days of developer time across all our projects.

Even more impressively, these days were saved during the very period of our primary rollout and skill development. We generated substantial ROI not by waiting for perfection, but by embracing the learning curve itself. The entire transition, from pilot to full-scale deployment, including training, support, and monitoring, was managed with a dedicated resource of only 0.3 Full-Time Equivalent (FTE), far less than we had initially budgeted. This proves that with a well-designed strategy, the management overhead for a large-scale AI rollout can be surprisingly lean, delivering a powerful return without requiring a large, dedicated team.
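As a rough sketch of the arithmetic: the 1,760 man-days and the 0.3 FTE figure come from our program, while the loaded day rate and the number of working days below are placeholder assumptions (and license costs are left out).

```python
# Back-of-envelope value of the program for the first half of 2025.
# The man-days saved and the 0.3 FTE overhead come from our program;
# the day rate and working-day count are placeholder assumptions,
# and license costs are excluded from this sketch.

DAYS_SAVED = 1760            # man-days saved, H1 2025
DAY_RATE = 500               # assumed loaded cost per man-day (EUR)
OVERHEAD_FTE = 0.3           # dedicated management effort
WORKING_DAYS_H1 = 110        # assumed working days in a half-year

savings_value = DAYS_SAVED * DAY_RATE
overhead_cost = OVERHEAD_FTE * WORKING_DAYS_H1 * DAY_RATE

print(f"Value of time saved: {savings_value:>10,.0f} EUR")  # 880,000
print(f"Management overhead: {overhead_cost:>10,.0f} EUR")  #  16,500
print(f"Ratio: {savings_value / overhead_cost:.0f}x")       # ~53x
```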

6. The invisible foundation: the imperative of data and metrics.

All of the lessons we've shared, from effectiveness with seniors to massive ROI, rest on a critical foundation: the reliability of our data.

Without rigorous data collection and clear goal definition, no generative AI project can be steered or even justified.

Thibault Milan

Innovation Director

The adoption of AI must not precede the measurement strategy. To measure real impact and justify deployment costs, whether for AI or any other technology initiative, it is imperative to:

  • Define clear KPIs and objectives: Know exactly what you want to improve (development cycle time, error rate, productivity on specific tasks, etc.) before deployment.
  • Ensure reliable data collection: Implement mechanisms to gather objective, unbiased data on tool usage and performance (a minimal sketch follows this list).
  • Justify the investment: If impact isn’t measurable, ROI remains a mere assumption. Solid data is the only currency for negotiating budgets and validating the project's sustainability.
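As a minimal sketch of the second point, here is a hypothetical usage-event record; the fields and names are illustrative, not a description of any particular tool's telemetry.

```python
# Hypothetical usage-event record for objective data collection.
# Field names are illustrative; adapt them to your own KPI definitions.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AiUsageEvent:
    developer_id: str               # pseudonymized, for privacy
    tool: str                       # which assistant produced the suggestion
    task_type: str                  # e.g. "templating" or "backend"
    suggestion_accepted: bool
    seconds_saved_estimate: float   # against a pre-rollout baseline
    timestamp: datetime

event = AiUsageEvent(
    developer_id="dev-4f2a",
    tool="assistant-x",
    task_type="backend",
    suggestion_accepted=True,
    seconds_saved_estimate=95.0,
    timestamp=datetime.now(timezone.utc),
)
```

Aggregated against the KPIs defined up front, records like this are what turn "it feels faster" into a defensible number.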

In other words, the success of an AI project lies less in the tool itself and more in the organization's ability to answer the question: how will we prove that it works? This is a crucial prerequisite for turning a simple experiment into a strategic win.


Conclusion: the real goal isn't just productivity, it's proficiency.

Successfully developing a generative AI program for developers is an overarching strategic challenge that goes far beyond the technology itself. Our nearly two-year journey has taught us that it requires a deep understanding of user psychology, a realistic approach to expectation management, and a robust organizational framework to support adoption. The goal cannot simply be to distribute licenses and hope for the best.

That is why our strategic goal is clear: we want our developers to be truly proficient in AI.

AI tools are now becoming a fundamental requirement. The only question that remains is: are you simply distributing the technology, or are you building the organizational mastery needed to excel with it?

Ready to discuss how to build AI proficiency within your development teams? Contact us today to schedule a consultation.

Thibault Milan

Innovation Director