De-risking an AI initiative starts with clarity. Before dreaming of models or ROI promises, a PM working on an AI feature must assess the feasibility, value, and adoption potential of every idea. Behind the current excitement around artificial intelligence often lie projects that are costly, lengthy, and uncertain. From data to change management, everything happens during discovery. In this article, Lívia Ribeiro, AI Product Manager at Thiga, shares four key dimensions to turn an AI ambition into a viable and truly useful initiative, for both the business and its users.
We are all familiar with the double impact principle: we should think not only about the business impact but also about the user impact. When it comes to AI products, I like to talk about a new principle: double impact/double risk. Why double risk?
AI feature development carries far more uncertainty than classic feature development, and the costs are usually higher: data, infrastructure, models, people, and change management. According to this study, published in February 2025 by the French government, the average cost of a generative AI project is €900,000, while non-generative AI (classic Machine Learning) averages €1.8 million per project. Time-to-value is also often long: the average development time is 21 months for GenAI and 27 months for non-generative AI.
On top of that, adoption is not guaranteed. As with any tech feature, it requires change management and often faces resistance. That’s even more the case with an AI feature, where there is often fear of, or a lack of understanding about, AI capabilities. Because of the high cost of implementation and the high level of uncertainty, business impact weighs more than user impact when deciding to invest in an AI project. That is why it is even more important to estimate gains and de-risk an AI initiative before starting development.
While building Thiga’s AI PM Academy training, we identified the aspects we consider specific to AI products and features. In this article, I will focus specifically on AI discovery and how to de-risk AI initiatives.
So how can we concretely de-risk an AI opportunity?
The 4 dimensions of AI discovery
1. The AI solution (or not)
What value does the solution bring? Does AI offer real added value compared to a traditional solution?
Some use cases are good candidates for an AI solution; others can be solved by simple automation or mathematical solutions. Start by asking whether experts can perform the task intuitively but cannot explicitly explain how. These are often the tasks where AI adds value. For instance, a tech recruiter can identify strong candidates by experience, but defining universal rules for automatic evaluation is difficult: there would be a high risk of missing a standout candidate who does not fit a typical profile. In those cases, AI can help by learning patterns from past data to capture complex signals with high precision. In contrast, problems like supply chain optimization can be solved with classical mathematical models, and standardized tasks such as generating pay slips are better handled with simple automation.
💡If a simple rule can do the job well, AI will probably have little added value (prospect identification, logistics, price optimization, etc.)
Here are 5 use cases that are often suited for AI:
- Pattern classification and detection: categorize data and identify anomalies.
A classic example is spam detection found in every mail service, as well as anomaly and fraud detection. Laura Kici, AI PM consultant at Thiga, is currently working on image detection for a waste incineration plant, using deep learning to detect large waste and recyclable materials.
- Prediction: anticipate future behaviors or outcomes.
Churn detection, sales forecasting, or dynamic pricing as used by Amazon. I personally worked on a churn detection initiative in a large B2B company, helping the sales team detect clients at risk and trigger retention actions (see the sketch after this list).
- Content generation: text, image, and video generation.
This is the use case that got very popular with the arrival of ChatGPT and similar tools. We have a couple of consultants at Thiga working on chatbots for customer service.
- Customization and recommendation: product and content recommendation.
We are all familiar with Netflix and Spotify recommendations, and also with something we like a bit less: the ads that are surprisingly exactly what we are looking for!
- Automation of cognitive tasks: reading/analyzing documents, extracting information, automatic email processing.
For example, Google Cloud has a solution to automatically extract information from contracts and invoices.
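To make the prediction use case more concrete, here is a minimal churn-scoring sketch in Python with scikit-learn. It only illustrates the pattern-learning idea: the file name, column names, and model choice are assumptions, not the actual setup from the B2B project mentioned above.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Historical customer data with a "churned" label (hypothetical file and columns).
customers = pd.read_csv("customers_history.csv")
features = ["monthly_orders", "support_tickets", "months_since_signup", "contract_value"]
X, y = customers[features], customers["churned"]

# Hold out part of the data to check the model before anyone acts on its scores.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))

# Then score current clients so the sales team can prioritize retention actions:
# at_risk = model.predict_proba(current_clients[features])[:, 1]
```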
We built a cheat sheet of AI capabilities you can refer to when needed:

2. The Data
When it comes to learning from the past, we need data, and lots of it. If you have never heard the saying “garbage in, garbage out”, you will hear it a lot when working with AI. Indeed, not only do we need large amounts of data, but we also need to ensure the data is of high quality. We must list the available data sources, analyze their quality, identify any data gaps, and verify that access to this data complies with regulations.
Jean-Sébastien Pilard, PM consultant at Thiga, worked with a retail e-commerce company that was redesigning its back office, where employees could manage contracts, set purchase prices, and perform other tasks. After running a discovery phase, he suggested an AI system to help the sales team save time on contract management. However, the project couldn’t move forward: the data was often non-existent or unreliable. Jean-Sébastien Pilard says: “Start by verifying and improving data quality. This allows us to lay the foundations before moving on to an AI solution”.
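A first pass on that kind of verification can be largely automated. Below is a minimal pandas sketch of a data-quality audit, assuming a hypothetical contracts export; the file name and checks are illustrative, not an exhaustive data-quality framework.

```python
import pandas as pd

contracts = pd.read_csv("contracts_export.csv")  # hypothetical data source

# One row per column: type, share of missing values, number of distinct values.
audit = pd.DataFrame({
    "dtype": contracts.dtypes.astype(str),
    "missing_pct": (contracts.isna().mean() * 100).round(1),
    "distinct_values": contracts.nunique(),
})
print(audit.sort_values("missing_pct", ascending=False))

# Other cheap checks worth running before committing to an AI solution.
print("duplicate rows:", contracts.duplicated().sum())
```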
💡It’s important to know that identifying a high-value use case can justify investing in improving the quality and control of a dataset.
Tired of blindly investing in artificial intelligence? Download our free AI Product Canvas, to ask yourself all the right questions before embarking on an AI project.
3. The model
What type of model are we considering (classical ML, deep learning, GenAI, etc.)? What are its constraints (explainability, technical resources, benchmark)?
In Product Management theory, we often use the double diamond framework during the discovery phase: first we will diverge to broadly understand the user problems and needs, then converge on the main problem we want to solve. Only then will we explore the solutions again with a divergence phase in which we can brainstorm, benchmark, and prioritize the main features before prototyping and testing the solution.
When working with an AI feature, it is not straightforward to separate the problem and solution phases. We have to ask ourselves what type of model is suitable for the problem in order to assess whether we have the resources and technical capabilities. The choice of model will largely influence the complexity of the solution. The following table summarizes the main differences between a classic ML algorithm and GenAI.

Don’t wait until development to think about the model: the data science team requires precise information to train the algorithm. What is the target we are trying to predict? Which features are good candidates for input? Which KPIs are we going to track? How do we know if the model is performing well? Is it better to have high precision or high recall? All these questions will influence the choice of model and how it will be trained.
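To illustrate the precision/recall question, here is a small self-contained sketch on toy labels: moving the decision threshold trades one metric against the other, and the right balance depends on the business cost of a false alarm versus a missed case.

```python
from sklearn.metrics import precision_score, recall_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]                     # actual outcomes (toy labels)
y_scores = [0.9, 0.2, 0.4, 0.8, 0.3, 0.6, 0.7, 0.1]   # model probabilities

# Raising the threshold flags fewer cases: precision goes up, recall goes down.
for threshold in (0.3, 0.5, 0.7):
    y_pred = [1 if score >= threshold else 0 for score in y_scores]
    print(
        f"threshold={threshold}: "
        f"precision={precision_score(y_true, y_pred):.2f}, "
        f"recall={recall_score(y_true, y_pred):.2f}"
    )
```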
💡The choice of model is not made in isolation: it must be anticipated from the discovery stage and discussed with the data scientists.
4. The adoption
Are business and technical teams ready to collaborate, test, and embrace the AI solution? What levers can be used to promote adoption and integration into practices?
As we’ve seen in the introduction, change management can be a huge effort when it comes to AI. This can be due to a lack of understanding of the technology on the business side, but also to a lack of understanding of the business on the tech side. Make sure every stakeholder is involved from the start.
Trust in AI can also be a big barrier. How is AI seen by users? Will they trust your predictions and recommendations? Do we need to explain the predictions of the algorithm? This is where we ask ourselves about explainability, which will also influence the choice of the model.
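If explainability turns out to matter, it can be prototyped early in discovery. The sketch below uses permutation importance on synthetic data to show which features drive a model’s predictions; the feature names are purely illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for real training data; feature names are illustrative only.
X, y = make_classification(n_samples=500, n_features=4, n_informative=2, random_state=0)
feature_names = ["monthly_orders", "support_tickets", "months_since_signup", "contract_value"]

model = RandomForestClassifier(random_state=0).fit(X, y)

# Rank features by how much shuffling each one degrades the model's score.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean), key=lambda p: -p[1]):
    print(f"{name}: {score:.3f}")
```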
Based on his experience with the e-commerce client, Jean-Sébastien Pilard advises: “I recommend testing an initial hypothesis with a POC. For example, in the case of the contract management system, AI could automatically extract data from PDFs into an interface, quickly reducing data entry errors and saving time for sales representatives. This will allow us to assess the value of an AI solution while building trust with users.”
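As an illustration of what such a POC could look like (not the client’s actual implementation), here is a sketch that pulls the text out of a contract PDF and asks an LLM to return a few fields as JSON; the libraries, model name, prompt, and field list are all assumptions.

```python
from pypdf import PdfReader
from openai import OpenAI  # assumes OPENAI_API_KEY is set in the environment

def extract_contract_fields(pdf_path: str) -> str:
    # Pull raw text from the PDF (works for text-based PDFs; scans would need OCR).
    reader = PdfReader(pdf_path)
    text = "\n".join(page.extract_text() or "" for page in reader.pages)

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": "Extract supplier_name, purchase_price and contract_end_date "
                           "from the contract below. Reply with a JSON object only.",
            },
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

print(extract_contract_fields("sample_contract.pdf"))  # hypothetical file
```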
The questions I keep getting as an AI PM are: “What is the difference between an AI PM and a classic PM?” and “Does the product approach change for AI, or should we de-risk AI features as we do with any other feature?” AI initiatives can (and should) be treated as a product: the product approach is particularly well suited to reducing risks! We do, however, need to take their specifics into consideration: what are the alternative non-AI solutions? Do we have the necessary data? What models could we use? What are the adoption challenges?
Once you’ve answered these questions, you will have a list of initiatives where AI is a strong candidate solution, and you are ready for the next step: ROI estimation and prioritization. That’s a topic for the next article!
If you want to know more, we provide a complete AI PM training in which you learn how to build and enhance products leveraging AI to deliver value to your users.