How to make sure your AI products get used

  • Updated: 02 June 2025
  • 5 minutes

From fraud detection to churn prediction, artificial intelligence has been solving real problems for years. That doesn’t stop many AI products and features from being abandoned. They may do their job perfectly… but users simply don’t understand them. Lívia Ribeiro, Data and AI Product Manager for Thiga, shares in this article what she sees as the key to AI adoption: explainability.

AI conversations today are dominated by ChatGPT, Gemini, Mistral, and other large language models. But AI has been quietly solving real-world problems for over a decade: think fraud detection, demand forecasting, churn prediction. However, many of these machine learning tools fail to gain adoption. 

Not because they don’t work, but because people don’t trust what they can’t explain. I’ve seen it firsthand, in a large organization: a powerful model, built with care, ignored by users who couldn’t make sense of it. Maybe the real challenge isn’t just performance: it’s explainability.

Explainability in AI refers to the ability to understand and interpret how an algorithm makes its decisions. It goes beyond transparency and interpretability. According to Simran Singh, AI Product Designer at Thiga, it is above all a user experience principle: “Explainability is a new UX principle. You have to understand who your users are and how aware they are. What do they need: reassurance? Transparency? The challenge is to make it understandable and approachable, so it doesn’t feel incomprehensible.”

The real challenge, then, is: how do we build models that are both performant and humanly understandable? 

When do we need explainability?

Explainability isn’t always required. If you're recommending a movie or automating a low-risk task like sending a marketing email, users don’t need to understand exactly how the algorithm made its decision. In these cases, performance and personalization matter more than transparency. But in other contexts, involving high stakes, sensitive data, or human decision-making, explainability becomes essential, and sometimes legally required.

1. Regulatory requirements

In certain industries, explainability isn’t optional. For example, in credit scoring, loan approvals, or insurance, legal frameworks like the EU AI Act require that algorithmic decisions be explainable. If an AI system can’t provide clear reasoning, it may be blocked from deployment entirely.

2. Decision-support use cases

When AI is used to assist, rather than automate, human decision-making, like in medical diagnoses, sales forecasting, or customer churn prediction, explainability is necessary to build trust. Business users need to understand why a recommendation is made so they can confidently act on it, challenge it when needed, and justify their decisions to others.

3. Bias detection and fairness

Opaque models make it harder to detect and correct for algorithmic bias. Explainable models can help reveal whether certain inputs are influencing predictions, allowing teams to address fairness concerns proactively.
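To make this concrete, here is a minimal sketch (with purely hypothetical feature names and synthetic data) of how an interpretable model’s learned coefficients can surface which inputs are driving predictions, and make unwanted influence easier to spot:

```python
# Minimal sketch: inspecting an interpretable model to see which inputs drive predictions.
# Feature names and data are hypothetical; replace them with your own dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "tenure_months", "age", "postal_code_risk"]
rng = np.random.default_rng(42)
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# With a linear model, the learned coefficients show how strongly each input
# pushes the prediction, which makes unwanted influence easier to discuss and correct.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name:>18}: {coef:+.2f}")
```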

4. Model monitoring and adaptability 

Explainable models are easier to troubleshoot and update. When behaviors change, due to seasonality, market shifts, or data drift, interpretable models make it easier to identify what’s going wrong and iterate quickly.

The next question is: how do we actually build AI systems that are both explainable and effective?

How to build explainable AI systems

Balancing explainability and performance is no easy task, especially in the context of GenAI, where explainability is still a whole new research topic.

As Dario Amodei, CEO of Anthropic, puts it in his article The Urgency of Interpretability: “Modern generative AI systems are opaque in a way that fundamentally differs from traditional software. When a generative AI system does something, like summarize a financial document, we have no idea, at a specific or precise level, why it makes the choices it does—why it chooses certain words over others, or why it occasionally makes a mistake despite usually being accurate.”

But for more traditional machine learning models, particularly those built in-house, there are actionable best practices that can help teams bridge the gap between technical performance and user trust.

1. Bring Product culture to data science

Unlike traditional software products, AI solutions introduce an additional layer of uncertainty: users must trust predictions that are probabilistic by nature. A single false positive can compromise trust, especially when users don't understand why the model made a mistake.

That’s why data science teams need a Product mindset. When they work in isolation from users and the business, the result is often high-performing models with low adoption. Luc Marczack, Lead Data Scientist at Saint-Gobain, explains: “Data Scientists and Product Managers need to work side by side. The PM models the process, the DS handles modeling and data checks: it’s a synergy.”

This is where the AI Product Manager plays a role: to act as the bridge between data science, business teams, and end users, ensuring that AI solutions are not just technically powerful, but also usable, understandable, and aligned with business needs. However, AI discovery isn’t the same as traditional Product Discovery. Your end users may have little or no technical background, and your data scientists may be disconnected from operational workflows. The key is to test early and simply.

2. Think MVP

Instead of jumping straight into complex models, you can start with a rule-based algorithm or a simple linear regression model. Clear, explainable rules help build initial trust and engage users early on. Plus, when users participate from the beginning, they gain confidence in the approach and feel a sense of ownership over the solution.

In a churn prediction project for a large B2B company where I worked, a complex ML model (XGBoost) was initially developed, but it was not used by the sales team. After interviewing users, we uncovered three main issues:

  1. The model didn’t reflect business reality.
  2. It wasn’t well integrated into the sales team’s workflow.
  3. The model felt like a black box (and in fact, it was, due to the complexity of XGBoost), making it hard for users to trust its output.

We pivoted to a simpler, rules-based approach and involved users from the start. This allowed us to rebuild trust and iterate based on real feedback.
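To give an idea, a rules-based churn flag can be as simple as the sketch below (the fields and thresholds are purely illustrative). The point is that every flag comes with reasons the sales team can read, challenge, and act on:

```python
# Minimal sketch of a rules-based churn flag; fields and thresholds are purely illustrative.
from dataclasses import dataclass

@dataclass
class Account:
    days_since_last_order: int
    support_tickets_90d: int
    contract_ends_in_days: int

def churn_risk(account: Account) -> tuple[str, list[str]]:
    """Return a risk level plus the human-readable reasons behind it."""
    reasons = []
    if account.days_since_last_order > 120:
        reasons.append("No order in the last 120 days")
    if account.support_tickets_90d >= 5:
        reasons.append("Unusually high volume of support tickets")
    if account.contract_ends_in_days < 60:
        reasons.append("Contract renewal coming up")
    level = "high" if len(reasons) >= 2 else "medium" if reasons else "low"
    return level, reasons

print(churn_risk(Account(days_since_last_order=150, support_tickets_90d=6, contract_ends_in_days=30)))
# -> ('high', [three readable reasons the sales team can act on])
```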

Luc Marczack had a similar experience: “In a project I worked with, the team initially deployed a highly complex model without ever testing a simple baseline. They spent considerable time developing and deploying the solution, but performance ended up being disappointing. The team finally decided to revert to a simple average based prediction, meaning simply giving the average in a given context and time as the prediction. While its performance was slightly lower than the complex model, it was easy to understand and already delivered measurable value.”

This MVP way of thinking is also a necessary step to define what good performance looks like. That is why we should always start with a baseline algorithm.

3. Use baseline algorithms

As in the previous example, sometimes a simple algorithm is enough to provide value while keeping results interpretable. The key question is: where do we draw the line between improving performance and sacrificing adoption due to lack of explainability?

Explainability can be intrinsic (built into the model itself) or post-hoc (added afterward). Some models, like linear regression, simple decision trees, or generalized linear models (GLMs), are inherently interpretable, while more complex models like gradient boosting machines (GBMs) or deep learning models require additional techniques (e.g., SHAP values, LIME) to provide insight into their predictions.
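As a rough illustration, here is what a post-hoc explanation can look like with SHAP on a gradient boosting model, assuming the shap package is installed and using synthetic data:

```python
# Minimal sketch of post-hoc explainability with SHAP on a gradient boosting model.
# Assumes the `shap` package is installed; the data here is synthetic.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = (X[:, 0] - X[:, 2] > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer attributes each prediction to per-feature contributions,
# which can be surfaced to users as "why this case was flagged".
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])
print(shap_values.shape)  # one row per sample, one column per feature
```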

Starting with a baseline model provides a clear, explainable foundation. You can always scale to more complex solutions later, if the business value justifies it. Luc Marczack summarizes it well: “A foundational principle in data science is to start simple. You only move to something more complex if you can clearly demonstrate that it delivers greater value.”
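In practice, this can be as simple as comparing an average-based baseline against a more complex candidate on the same metric, as in this hypothetical sketch:

```python
# Minimal sketch of the "start simple" principle: measure an average-based baseline
# before investing in a more complex model. Data here is synthetic and illustrative.
import numpy as np
from sklearn.dummy import DummyRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 5))
y = 2 * X[:, 0] + rng.normal(scale=0.3, size=400)

baseline = DummyRegressor(strategy="mean")      # always predicts the average: fully explainable
candidate = GradientBoostingRegressor()

baseline_mae = -cross_val_score(baseline, X, y, scoring="neg_mean_absolute_error").mean()
candidate_mae = -cross_val_score(candidate, X, y, scoring="neg_mean_absolute_error").mean()

print(f"baseline MAE:  {baseline_mae:.3f}")
print(f"candidate MAE: {candidate_mae:.3f}")
# Only adopt the complex model if the gain clearly justifies the loss in explainability.
```

If the complex model does not clearly beat the baseline, the baseline wins by default: it is cheaper to run, easier to explain, and easier to trust.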

4. Design for explainability

Once you’ve deployed a simple model, test it in real-world conditions:

  • Do users understand and trust the results?
  • Do they actually use the predictions?
  • What questions or doubts do they raise?

Explainability should be built into the user experience, not something added on later. As Simran Singh, AI Product Designer at Thiga, puts it: “It has to be part of the user journey: not just pop-ups that come out of nowhere. For an AI feature I am working with, that takes a long loading time, I plan to use animations that say ‘analyzing your request,’ or ‘processing your data,’ to set expectations. We also use tooltips in onboarding: ‘You can ask about XYZ, and we’ll use these data sources to generate answers.’”

As we have seen, explainability isn’t always necessary. For use cases like email automation or text summarization, users don’t expect to understand exactly how the algorithm works. But as AI systems are used to support decisions or operate in sensitive contexts, explainability becomes critical.

With generative AI, everything seems simple and seamless from a user’s point of view and these models are increasingly embedded in every aspect of our personal and professional lives. But behind the scenes, these algorithms remain incredibly complex and often nearly impossible to explain. It’s a challenging area of ongoing research, and one that risks being overlooked.

In order to ensure AI remains a tool people trust and understand as it advances, it must evolve not just technically, but also in terms of explainability. The models will keep getting more complex. We need to make sure the experience doesn’t.

Do you want to learn how to master AI and make the most of it? Sign up for our AI Product Manager Training.
