Smile news

The integration of Generative AI in businesses is becoming vital: Our feedback

  • Event date: Oct. 14, 2024
  • Reading time: min.

Discover how Smile integrates generative AI to optimize developer workflows and transform efficiency: key insights and experience feedback.

At Smile, innovation is at the heart of our DNA. As a technology company, we are constantly seeking solutions that not only enhance our competitiveness but also improve the efficiency and satisfaction of our teams. With this mindset, we recently explored the integration of large language models (LLMs) for our developers. This article provides our feedback (REX) on this bold initiative, the methods we employed, the results we achieved, and the lessons we learned.

 

Why did we explore the use of LLMs for developers? 

The integration of artificial intelligence (AI) technologies into software development is no longer a trend but a necessity to stay competitive. At Smile, we undertook this experiment with several objectives in mind. First, we wanted to prove that AI can truly transform how developers work by saving them time on repetitive tasks, surfacing code suggestions in real time, and strengthening code documentation, ideally even making unit tests easier to write. Second, by adopting advanced technologies, we aim not only to match but to surpass our competitors in the market. The whole industry is clearly following this trend closely, but few companies are communicating openly about it; everyone is waiting to see who will be the first to reveal their commercial strategy around these new tools. In the meantime, we must do the necessary groundwork to be ready when the market sends a strong signal of acceptance for this new way of working.

Finally, it was essential for us to assure our employees that we are committed to providing them with modern tools that both ease their work and prepare them for the future of software development. Equally important is our ability to control the shadow IT that might arise if we do not provide developers with the right tools, while supporting them and the rest of our teams as their roles are transformed by the LLM boom.

 

How did we conduct this experiment? 

To ensure the success of this initiative, we set up a structured, multidisciplinary organization. A task force involving representatives from management, IT, and legal was assembled to address all aspects of LLM integration, including the impact on jobs and legal compliance. We began by identifying use cases where AI could add value: code completion, code documentation, and chatbot assistants for more complex tasks such as refactoring or, more generally, reasoning with code as context. We then tested several AI solutions to find the one that best met our needs in terms of return on investment (ROI), security, and compliance. This initial phase allowed us to experiment and gather valuable data on how our teams actually use LLMs, both qualitative (how developers felt about the tool's impact on their work, their appreciation, "aha moments," and frustrations) and quantitative (time saved through code completion relative to the developer's usual typing speed, broken down by task type and technology).
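The quantitative side of this measurement can be sketched as a simple calculation. The event fields and the 200-characters-per-minute baseline typing speed below are illustrative assumptions for the sake of the example, not the actual telemetry or figures we used:

```python
# Minimal sketch: estimate time saved by accepted AI code completions.
# Field names and the 200 chars/min baseline are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class CompletionEvent:
    chars_accepted: int   # length of the AI suggestion the developer kept
    task_type: str        # e.g. "feature", "refactoring", "tests"
    technology: str       # e.g. "PHP", "Java", "JavaScript"


def estimated_minutes_saved(events, typing_speed_cpm=200):
    """Estimate minutes saved, assuming each accepted character would
    otherwise have been typed at the developer's usual speed."""
    return sum(e.chars_accepted for e in events) / typing_speed_cpm


events = [
    CompletionEvent(320, "feature", "PHP"),
    CompletionEvent(150, "tests", "Java"),
]
print(round(estimated_minutes_saved(events), 2))  # 2.35
```

Grouping the same sum by `task_type` or `technology` then yields the per-category breakdown mentioned above.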

 

The results achieved 

The results of our experiment have been very promising. 

To cite just a few figures: about 80% of developers reported significant time savings from LLM-based tools, and 60% said they would continue using them even if it meant resorting to shadow IT. We also measured an overall 15% improvement in coding time on the projects concerned, averaged across all the technologies and task types studied; we conducted a more detailed breakdown for internal purposes.

One of the main perceived benefits of LLMs is their ability to automate repetitive tasks, freeing developers to focus on more complex and creative activities. We did observe considerable time savings, but not exactly on the repetitive tasks we expected: rather, a series of small savings across all tasks that cumulatively added up to significant overall gains. Moreover, the use of LLMs encouraged better documentation practices, either because documentation was generated systematically alongside the proposed code or because developers had to write a little documentation to give the tool context and guide it toward the desired code, which positively impacted overall code quality. These results show that LLMs do not replace developers but act as facilitators, enhancing both their efficiency and the quality of their work.

 

What lessons have we learned? 

Our journey with LLMs has been rich in lessons. We learned that while AI is powerful, it cannot do everything alone: it is crucial to keep humans in the loop to make informed decisions and supervise AI suggestions. AI therefore functions more as an assistant to developers, an important point to communicate to the developer community, both to reassure them about the future of their jobs and to defuse media claims that they will all be replaced by AI. Adopting these technologies requires continuous adaptation and close attention to user feedback; bi-weekly follow-up sessions proved important early on to help teams adopt the new tools.

We also confirmed our assumptions about the need for training and support, particularly for junior developers, who need more time and assistance to master these tools than their experienced counterparts. Interestingly, although junior developers have the most room for improvement, it is the more experienced developers who are quicker to understand how to leverage the tool, formulate clearer requests, and therefore achieve better results. Across programming languages and frameworks, we noticed varying suggestion quality, making the tool less relevant for certain technologies; we are currently exploring fine-tuning strategies to correct this disparity. Finally, we found that to maximize the benefits of LLMs, it is essential to continuously monitor the quality of the generated outputs and to be ready to adjust models and processes as the team's needs evolve.

 

Conclusion

Integrating LLMs at Smile has been a decisive step toward innovation and improving our operational efficiency. The results we have obtained demonstrate the immense potential of these technologies to transform software development. We are committed to continuing our exploration and adoption of AI tools that not only increase our productivity but also support our employees in their professional growth. In the future, we plan to extend the use of LLMs to other areas of the company to fully leverage this revolutionary technology. We are already rolling out a secure sandbox to a small community of testers where employees can use generative AI models without worrying about client or personal data being used to train a public model or risking sensitive data leaks. We have named it SmileGPT, and we will share more details with you soon. 

At Smile, we believe that continuous innovation and investment in our teams are essential to offering cutting-edge solutions to our clients and staying at the forefront of our industry.


If you are interested in our approach to AI integration or would like to discuss how these technologies can transform your business, feel free to contact me. I am Thibault Milan, Innovation Director at Smile, and I would be happy to exchange ideas with you on these exciting topics.

Thibault Milan

Head of Innovation
