Cost killing, satisfaction boost, and full-scale automation: generative AI promises to reinvent customer service. But scaling such a solution without compromising brand reputation is a whole different story. In this article, Claudio Torres, Principal Product Manager Data & AI at Thiga Spain, shares the behind-the-scenes of deploying a 100% Gen-AI chatbot in a global retail group, from first prototype to tens of million conversations handled without a single incident.
“Impossible. Too risky. Too expensive.” That was the reaction I got when I first proposed replacing our old chatbot with a generative AI solution.
At the time, I was a Senior Product Manager at one of the world’s biggest retail and retail groups. Multiple e-commerce brands, over a hundred markets, millions of customers every day. The stakes were enormous. My focus was customer service, a sensitive and costly area where improvements could make a real difference. But letting AI interact directly with customers came with serious reputational risks.
One year later, the chatbot we built now handles tens of millions of conversations annually and has been deployed across every brand and market in the group. Costs have dropped, customer satisfaction has risen, and most importantly, not a single incident has occurred.
Here’s how we proved that generative AI can be industrialized at scale in a large organization.
From skepticism to strategic bet
When I joined the group three years ago as Senior Product Manager, my mission was clear: improving the digital experience for multiple e-commerce brands operating in over 100 countries.
Customer service quickly stood out as a key pain point. Every small improvement had a direct impact on both cost and customer retention. But the system we had in place was built on an aging CRM and a rules-based chatbot that was expensive, inefficient and frustrating for users. Automation rate was at market average at the time, and costs kept rising.
As generative AI models started emerging, I proposed testing them. The initial answer was a hard no. Legal risks, GDPR, and potential damage to the brand were all considered too high. Leaders were clear: the risk of a chatbot saying something inappropriate was unacceptable.
Then came the media explosion around ChatGPT. In just a few months, leadership changed course. The company now aimed to become “AI-powered” and every department was asked to explore potential use cases, including customer service. The opportunity I had been waiting for was finally here.
Tired of blindly investing in artificial intelligence? Download our free AI Product Canvas, to ask yourself all the right questions before embarking on an AI project.
Building the chatbot
Instead of going with a hybrid solution, which every major player considered "the way" to automation, I bet everything on a chatbot powered entirely by a large language model. At that time, no scaled implementation existed anywhere. Not at Salesforce, not at Google, not even at Amazon. We had no model to follow.
At first, the team was just two of us: one engineer and me. My job was to define the vision, convince stakeholders, and take responsibility if it failed. We soon brought in a part-time data scientist. For seven months, we worked non-stop, determined to be the first to launch this kind of chatbot in a major company.
This wasn’t about pride. Deploying disruptive technology at scale in a large group is more than a customer service project. It’s a strategic move that reshapes governance, budget priorities and the company roadmap. With this initiative, AI finally shifted from an experimental demo to a critical part of operations.
A bulletproof MVP
From day one, we were working under three strict conditions:
- Reputation: zero tolerance for risk. A single inappropriate response and the bot would be shut down immediately.
- Cost: the new chatbot couldn’t cost more than the old one. Since it wouldn’t have made sense to develop a solution that was more expensive than before, this was non-negotiable.
- Performance: the chatbot had to automate more conversations than the previous system.
To meet those goals, our MVP had to be bulletproof. With new technology, it’s tempting to move fast. We did the opposite. For almost three weeks, we didn’t build anything. Instead, we spent hours studying how ChatGPT behaved. We bombarded it with offensive, tricky and ambiguous prompts in multiple languages to test its limits and see if it could generate harmful responses.
This systematic approach allowed us to handle more than 40 languages with no major incidents through a single pipeline instead of maintaining dozens of different versions, which allowed us to keep costs at bay.
Once the chatbot was ready, we released it internally with one simple instruction: try to break it. Despite everyone’s best efforts, the chatbot held strong. We just needed one final external test before launching it publicly. To this day, there hasn’t been a single incident. It worked.
Rolling it out across the group
Stabilizing the MVP was only the first step. We then had to adapt it to the group’s different brands, each with its own identity and processes.
After a successful launch, complexity grew exponentially. Our next challenge was scaling the solution across multiple brands, markets, and languages, and doing it fast. Finding a simple, maintainable approach while upskilling the operations team was the final step in achieving the transformation we had envisioned. For the first time, the group now had a centralized, scalable way to manage its Customer Care Virtual Assistant.
Managing internal politics
If the technical part was hard, the internal politics were much harder.
As the customer service team, we didn’t have direct access to production infrastructure. We depended on the platform teams for servers, API keys and compute power. These were limited resources, and assigning them to our project meant taking them from another. Getting priority took a lot of negotiation.
We were also quickly faced with a budget constraint: we weren’t allowed to run both the old and the new chatbot at the same time. The old system would be turned off the day the new one launched. That meant no fallback.
The internal environment was tense. Several AI pilots were happening at once. While some teams supported us, competition was still tough. At the time when this article is being written, ours is still the only project that made it to production.
There’s one lesson I took away that every executive should remember. Never underestimate how much your organization will resist disruption.
Looking back
The results speak for themselves. Costs are far lower than before. Operations have been simplified to the point where no specialized roles are needed anymore. Customer automation nearly doubled up, from previous solutions. And over more than two years, there hasn’t been a single reputational issue.
What started as a risky gamble is now the backbone of our customer service. Today, many colleagues wonder how we ever did without it. Looking back, I see three key lessons for anyone looking to scale generative AI:
-
Set clear non-negotiables. For us, it was cost, risk and performance. Without those, the project would have been buried in compromises.
-
Invest as much in governance as in technology. Winning allies, navigating internal politics and securing resources is just as important as technical execution.
-
Design for scale from the start. Keeping the bigger picture in mind helped us go from prototype to global platform in record time. For any executive, the message is clear. Innovation only matters if it’s designed to scale quickly and sustainably.
The real win wasn’t that we “did AI.” It’s that we turned customer service into a strategic asset: more efficient, faster, more satisfying for customers and built for global scale. In a world where every company claims to be AI-powered, the real differentiator isn’t the tech. It’s the ability to industrialize it.
This chatbot isn’t just a better support tool. It’s repositioned customer service as a core part of the business. Reducing costs while increasing loyalty means turning a traditional cost center into a strategic asset. That’s why AI, when used properly, isn’t a gimmick. It’s a foundational part of the company’s infrastructure.
The success of this project had less to do with the model itself than with the alignment between business and tech. Clear governance, strong business priorities like satisfaction, ROI and scalability, and disciplined execution made all the difference. That alignment, more than any technical brilliance, is what determines whether a company can survive and grow in the AI era.
Retail is transforming, rethinking its value streams. Circularity, omnichannel, artificial intelligence… If you’re exploring how digital can enable these new models, our Retail experts can help you identify the right use cases, focus your priorities, and transform your operations. Feel free to reach out!