AI in Discovery: new dynamics, new value drivers

  • Updated: 23 September 2025
  • 5 min read
Article written by Pierre Carpentier

Easier exploration, faster ideation and prioritization… The promises of generative AI for Product Discovery are immense. But without evolving roles and practices, those promises may fall short. In this article, Pierre Carpentier, AI expert in organizational contexts, sheds light on the levers for success and the pitfalls to avoid in order to fully seize the potential of this revolution.

In 2010, the DevOps movement popularized a simple idea: developers and operations teams shouldn’t work in sequence; they should collaborate. Fueled by technological advances, IT departments jumped on CI/CD pipelines, containerization, cloud computing, observability, and more. The promise? Development cycles ten times faster, daily releases, fewer errors, and teams finally delivering real value.

But in organizations that simply stacked tools on top of old ways of working, old problems remained: recurring incidents, chronic delays, teams under pressure. Production remained the responsibility of a few experts, ownership was passed like a hot potato, feedback loops didn’t work, and best practices stayed hidden. Without expanded responsibilities or shared vision, DevOps turned into nothing more than localized automation.

The lesson is clear: technology can wedge the door open, but only a sustainable organizational transformation - autonomous teams, redefined responsibilities, continuous upskilling - can unlock real results.

That’s exactly the challenge we face now with generative AI in Discovery. So, how do we avoid settling for shiny tools that generate noise, and instead fulfill the true promise?

Tired of blindly investing in artificial intelligence? Download our free AI Product Canvas to ask yourself all the right questions before embarking on an AI project.

Generative AI and agents for Discovery

The Product approach is simple: cross-functional teams, close to users, delivering value fast. Two subsystems, one continuous flow:

  • Discovery: understanding needs, formulating hypotheses, imagining solutions, prioritizing, and preparing the backlog.

  • Delivery: writing code, testing, deploying, and maintaining.

For a long time, industrialization efforts focused primarily on Delivery. But now the real shift is happening upstream: generative AI shortens the distance between an idea and its design. Just like DevOps once did, the technology alone won’t bring lasting impact unless organizational models evolve alongside it.

In this article, I’ll explore what AI is really changing in Discovery, why AI agents are particularly well-suited to this stage, and under what conditions (regarding roles, collaboration, and governance) this acceleration can lead to lasting and tangible value.

Large Language Models (LLMs) bring entirely new capabilities (a minimal sketch follows this list):

  • Understanding and synthesizing: capturing the essence of heterogeneous information.

  • Structuring and standardizing: organizing and categorizing raw data.

  • Generating and exploring: producing ideas, rephrasing, or drafting scenarios.

  • Reasoning and decomposing: analyzing complex problems, identifying steps, proposing action plans.

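To make these capabilities less abstract, here is a minimal sketch in Python, assuming a hypothetical `call_llm` helper standing in for whichever model API your team uses (it returns a canned answer so the snippet runs without any key). The signals, prompt, and field names are illustrative assumptions, not a reference implementation:

```python
# A minimal sketch: one model call exercising the four capabilities above on raw
# Discovery signals. `call_llm` is a hypothetical stand-in for a real LLM API.
import json

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call (OpenAI, Anthropic, a local model, ...)."""
    return json.dumps([{"theme": "Reporting", "category": "Usability",
                        "opportunity": "One-click monthly report export",
                        "next_step": "Interview five users about their reporting habits"}])

RAW_SIGNALS = [
    "Support ticket: 'I can't find where to export my monthly report.'",
    "Interview note: churned user says onboarding felt 'endless'.",
    "Sales call: prospect asks whether we integrate with their CRM.",
]

PROMPT = f"""
You are assisting a Product Manager during Discovery.
From the raw signals below:
1. Synthesize the main themes (understand & synthesize).
2. Group them into categories (structure & standardize).
3. Propose one opportunity per theme (generate & explore).
4. Suggest the next concrete step to investigate each one (reason & decompose).
Answer as a JSON list of objects with keys: theme, category, opportunity, next_step.

Signals:
{chr(10).join(RAW_SIGNALS)}
"""

opportunities = json.loads(call_llm(PROMPT))  # a human still reviews before anything enters the backlog
```
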
It’s hard not to see the parallels with a Product Manager’s role. And Discovery work relies precisely on these capabilities:

  • Uncovering & Revealing: turning raw signals into value opportunities.

  • Framing & Diagnosing: defining clear problem statements and solid diagnoses.

  • Imagining & Designing: generating solutions and prototyping the most promising ones.

  • Experimenting & De-risking: testing and prototyping to validate high-risk hypotheses.

  • Arbitrating & Prioritizing: assessing impact and refining the backlog.

LLMs shorten that journey. They transform raw information into clear hypotheses and actionable paths. They act as natural amplifiers of Discovery.

Today, many PMs and Designers use their favorite GenAI tools with a curated set of prompts. But some organizations are going further - building real AI agents, connected to product context and integrated directly into team tools.


Concrete examples include:

  • User analysis: preparing interview scripts and analyzing transcripts.

[Image: n8n workflow for user analysis]

  • Product benchmarking: assessing how competitors implement similar features.

[Image: n8n workflow for product benchmarking]

  • Synthetic personas: generating realistic user profiles from market data.

  • User feedback: extracting qualitative insights from verified online customer feedback (see the sketch after this list).

[Image: n8n workflow for user feedback]

  • API exploration: scanning internal API catalogs and suggesting integrations.

  • Accelerated prototyping: linking Figma and code to rapidly bring apps to life.

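As an illustration of the user-feedback example, here is a hedged Python sketch of what such an agent could do; `call_llm` and `post_to_backlog` are hypothetical stubs to wire to your own stack, and the field names are assumptions rather than a description of the n8n workflow itself:

```python
# Illustrative sketch of a user-feedback agent (not the actual n8n workflow).
import json

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call; returns a canned answer so the sketch runs."""
    return json.dumps([{"insight": "The export flow is hard to find",
                        "sentiment": "negative",
                        "quote": "I can't find where to export my monthly report."}])

def post_to_backlog(item: dict) -> None:
    """Placeholder: a real agent might create a draft ticket in the team's tools."""
    print(f"[draft insight] {item}")

def extract_insights(reviews: list[str]) -> list[dict]:
    prompt = ("Extract qualitative insights from these verified customer reviews. "
              "Return a JSON list of objects with keys: insight, sentiment, quote.\n\n"
              + "\n".join(f"- {r}" for r in reviews))
    return json.loads(call_llm(prompt))

for insight in extract_insights(["I can't find where to export my monthly report."]):
    post_to_backlog(insight)  # a PM still reviews drafts before they become backlog items
```
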
Just like automation tools industrialized Delivery (testing, quality, CI/CD), these agents are streamlining Discovery: speeding up analysis, clarifying needs, and supporting collaborative ideation.

And with their ability to understand and generate code, they’re already blurring the lines: what once took weeks can now yield a working prototype in just a few hours.

Agentic systems: orchestrating intelligence

An LLM on its own is passive. An AI agent, on the other hand, combines two dimensions: understanding and acting. Connected to product data and integrated into team workflows, it fits perfectly into the fast-paced rhythm of Discovery, where we constantly move between exploration and formalization.

We're seeing the emergence of multiple agent types: agents that analyze needs, generate artifacts, prioritize backlogs, or build plans. Connected together and validated by humans, they can transform a raw initiative into a prioritized backlog. Just as CI/CD streamlined the path from code to deployment, agentic systems can streamline the path from idea to solution.

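As a rough illustration of that chaining idea, here is a minimal Python sketch: each agent is reduced to a placeholder function, with a human validation step between stages. The agent names, steps, and data shape are assumptions for the sake of the example, not a prescribed architecture:

```python
# Minimal sketch of an agentic chain for Discovery: analyze -> generate -> prioritize,
# with a human validation gate between steps. Every name here is illustrative.
from dataclasses import dataclass, field

@dataclass
class Initiative:
    raw_idea: str
    needs: list[str] = field(default_factory=list)
    artifacts: list[str] = field(default_factory=list)
    backlog: list[str] = field(default_factory=list)

def needs_agent(item: Initiative) -> Initiative:
    # In practice this would call an LLM with interview notes, tickets, analytics...
    item.needs = [f"Need derived from: {item.raw_idea}"]
    return item

def artifact_agent(item: Initiative) -> Initiative:
    item.artifacts = [f"User story for: {need}" for need in item.needs]
    return item

def prioritization_agent(item: Initiative) -> Initiative:
    item.backlog = sorted(item.artifacts)  # placeholder for real impact/effort scoring
    return item

def human_validates(step_name: str, item: Initiative) -> bool:
    # Stand-in for a real review in the team's tools; here we simply auto-approve.
    print(f"Review requested after '{step_name}'")
    return True

def run_pipeline(raw_idea: str) -> Initiative:
    item = Initiative(raw_idea)
    for step in (needs_agent, artifact_agent, prioritization_agent):
        item = step(item)
        if not human_validates(step.__name__, item):
            break  # humans can stop the chain at any stage
    return item

run_pipeline("Customers struggle to export their monthly reports")
```
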
AI agents: the real challenge is (still) human

Rolling out an army of agents isn’t enough. As with DevOps, real acceleration only becomes sustainable if we rethink everyday collaboration. Who makes decisions? Who validates outcomes? How is responsibility shared for the results?

As long as these rules remain implicit, agents will remain isolated gadgets. But when integrated into operational models, they can dramatically boost a team’s ability to explore, learn, and create.

That leads to the key question: how will roles evolve? Will Scrum Masters, Product Ops, User Researchers, Designers become architects of agent networks (an “Agentic Mesh”) able to configure AI like they would prepare a workshop?

The impacts on Agile frameworks

Agile frameworks will need to incorporate these new patterns of human-agent collaboration, just as they once did with DevOps. Agents don’t just create new roles - they demand new types of interaction:

  • Human-in-the-loop: the team sets the rules and validates outputs before they reach the backlog. The key is determining the right level of autonomy (a toy version is sketched after this list).

  • 24/7 Agent: some agents run continuously and assign tasks to humans. Guardrails are needed to avoid overload.

  • Feedback loop: all agent interactions must be tracked. This data feeds retrospectives to improve prompts, context, and rules.

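The sketch below applies two of these rules, an autonomy threshold for human-in-the-loop validation and a daily cap to keep an always-on agent from overloading the team. The threshold values, field names, and queueing logic are assumptions chosen for illustration:

```python
# Toy triage illustrating an autonomy threshold and an overload guardrail.
from dataclasses import dataclass

@dataclass
class Suggestion:
    title: str
    confidence: float  # 0..1, self-reported by the agent

AUTONOMY_THRESHOLD = 0.9    # above this, the team has agreed to auto-accept
DAILY_HUMAN_REVIEW_CAP = 5  # guardrail so a 24/7 agent cannot flood the team

def triage(suggestions: list[Suggestion]) -> tuple[list[Suggestion], list[Suggestion]]:
    auto_accepted, needs_review = [], []
    for s in suggestions:
        if s.confidence >= AUTONOMY_THRESHOLD:
            auto_accepted.append(s)   # high-confidence items skip the queue
        elif len(needs_review) < DAILY_HUMAN_REVIEW_CAP:
            needs_review.append(s)    # the rest queues for explicit human review
        # anything beyond the cap waits for the next day instead of overloading people
    return auto_accepted, needs_review

accepted, to_review = triage([Suggestion("Merge duplicate onboarding stories", 0.95),
                              Suggestion("Reframe the export problem statement", 0.6)])
```
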
Gradually, the human + agent tandem becomes a unit of work in its own right. Just like the developer + ops duo became a standard, this new partnership is establishing itself.

Product teams will need to learn how to operate in sync with these new companions - otherwise, their organizational models may quickly start to show cracks.

A self-contained squad delivering a User Story every two weeks might soon look outdated if it fails to integrate these new dynamics. Because without an organizational overhaul, agentic systems don’t create value: they mostly generate noise, and sometimes even new cognitive debt.

Beware of false promises

The shift to generative AI mirrors the logic behind DevOps: train teams, assess processes, tackle pain points, clarify expectations - all within agile governance, with the right level of delegation. The expected gains are the same as with DevOps (lead time, efficiency, quality), but this time we must also track human-centered metrics: adoption, trust, cognitive load, evolving roles.

But be cautious with claims of increased efficiency. Cutting a testing campaign from two weeks to two hours didn’t eliminate testers’ work. It shifted it toward script maintenance. Unstable CI/CD pipelines still haunt developers. With AI agents, the risks are similar: increased dependency, faster propagation of errors, new cognitive load. That’s why we need AI observability, just like we needed monitoring for DevOps.

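What AI observability could mean in practice is sketched below, assuming a simple JSONL log of agent interactions and a few illustrative human-centered metrics for retrospectives; the event fields and metric names are assumptions, not an established standard:

```python
# Sketch of basic AI observability: record every agent interaction, then derive
# a few human-centered metrics for retrospectives.
import json
import time

LOG_PATH = "agent_interactions.jsonl"

def log_interaction(agent: str, action: str, accepted_by_human: bool, latency_s: float) -> None:
    """Append one interaction event to a local JSONL log."""
    event = {"ts": time.time(), "agent": agent, "action": action,
             "accepted": accepted_by_human, "latency_s": latency_s}
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(event) + "\n")

def retro_metrics() -> dict:
    """A few illustrative metrics (adoption, acceptance, latency) to review as a team."""
    with open(LOG_PATH) as f:
        events = [json.loads(line) for line in f]
    return {
        "interactions": len(events),
        "human_acceptance_rate": sum(e["accepted"] for e in events) / len(events),
        "avg_latency_s": sum(e["latency_s"] for e in events) / len(events),
    }

log_interaction("feedback-agent", "drafted 3 insights", accepted_by_human=True, latency_s=4.2)
print(retro_metrics())
```
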
If DevOps industrialized Delivery by combining automation with cultural transformation, agentic systems aim to industrialize Discovery with the same equation. But the real winners won’t be those who just activate an AI plugin. They’ll be the ones who redefine roles, responsibilities, and metrics to fully harness the human-agent tandem.

This shift is less visible than a “Push to Prod” button, but far more profound: it reshapes how ideas are born, selected, and turned into reality. As with DevOps, technology is opening the door; now we need to invent the collaboration models between humans and agents.

Where to start?

  • Learn: understand the capabilities and limitations of GenAI.

  • Embed Agents: bring them into Discovery to solve real pain points.

  • Set Up Agile Governance: with clear responsibilities and adapted metrics.

Get trained in Generative AI with Thiga Academy!

Artificial Intelligence is transforming the role of the Product Manager. Our training program helps you (and your teams) practically integrate AI into your products to create more value for your users.

What’s in the program:

– Understand the fundamentals of AI (ML, NLP, generative AI, LLMs…) and key use cases
– Identify, prioritize, and design high-impact AI-powered features
– Master the basics of data management: collection, cleaning, annotation, and governance
– Scope, test, and deploy an AI model - from MVP to industrialization
– Balance technical performance, business value, and user experience
– Work efficiently with Data Engineers, Scientists, and Analysts
– Integrate the principles of responsible, ethical, and regulated AI

Designed for professionals with 2–3 years of experience, comfortable with both Discovery and Delivery. Ready to turn AI into a true value lever?

The Product Management newsletter.

Exclusive content, news, and opinions to make you an expert in all things Product.