In the early stages of building AI products, uncertainty reigns supreme. Leaders, managers, and individual contributors often grapple with defining their path forward. This uncertainty isn’t a flaw—it’s a feature of early-stage AI development. The key to success lies not in having all the answers, but in building an effective experimentation mindset across your team.
The Foundation: Defining Metrics That Matter
Before diving into experiments, you need a north star. Define metrics that you hypothesize will create leverage for your business. These metrics serve as your compass, but remember—they’re hypotheses, not gospel.
Key considerations when defining metrics:
- Focus on business outcomes (e.g., user engagement, conversion rates, revenue impact)
- Be transparent about your certainty level in metric-to-business connections
- Review and adjust metrics as you learn more about your problem space
For example, one team I recently worked with focused on reducing customer support tickets by improving their AI’s accuracy in handling common queries. Another targeted increased user engagement by reducing response latency. These business metrics provided clear direction while allowing flexibility in how to achieve them.
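One way to keep the "hypothesis, not gospel" framing honest is to record each metric together with the business outcome it supposedly drives and your confidence in that link. The sketch below is illustrative, not a prescribed tool; the metric names and confidence labels are hypothetical examples modeled on the teams described above.

```python
from dataclasses import dataclass

@dataclass
class MetricHypothesis:
    metric: str             # what you will actually measure
    business_outcome: str   # the outcome you believe the metric drives
    confidence: str         # how sure you are of the metric-to-business link

# Hypothetical examples, mirroring the two teams mentioned above
metrics = [
    MetricHypothesis(
        metric="support tickets deflected per week",
        business_outcome="lower support cost",
        confidence="high",
    ),
    MetricHypothesis(
        metric="p95 response latency",
        business_outcome="higher user engagement",
        confidence="medium",  # plausible, but the link is still unproven
    ),
]
```

Reviewing the `confidence` column during metric reviews makes it obvious which links you are betting on versus which you have validated.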
Prioritizing Experiments
With your metrics defined, you can now prioritize experiments based on two key factors:
- Potential impact on metrics that matter
- Required effort to execute
While perfect prediction of either factor is impossible, experienced teams can develop reliable intuition. The key is to start somewhere and refine your estimation process over time.
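The impact-versus-effort trade-off above can be made concrete with a simple leverage score. This is a minimal sketch, assuming coarse 1–5 estimates for both factors and an impact-to-effort ratio as the ranking key; the example experiment names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ExperimentIdea:
    name: str
    impact: int  # estimated impact on the core metric, 1 (low) to 5 (high)
    effort: int  # estimated effort to execute, 1 (small) to 5 (large)

def prioritize(ideas: list[ExperimentIdea]) -> list[ExperimentIdea]:
    """Rank ideas by impact-to-effort ratio, highest leverage first."""
    return sorted(ideas, key=lambda i: i.impact / i.effort, reverse=True)

backlog = [
    ExperimentIdea("fine-tune on support transcripts", impact=4, effort=4),
    ExperimentIdea("add retrieval for common queries", impact=4, effort=2),
    ExperimentIdea("rewrite system prompt", impact=2, effort=1),
]
for idea in prioritize(backlog):
    print(f"{idea.impact / idea.effort:.1f}  {idea.name}")
```

The point is not the formula but the habit: writing estimates down, however rough, gives you something to compare against reality after each experiment, which is how the estimation process improves.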
The real measure of success here isn’t how many experiments you run, but how quickly you’re learning about what moves your core metrics. This is where experiment velocity becomes crucial—not as the goal itself, but as a leading indicator of your team’s ability to learn and adapt quickly.
Redefining Success: From Results to Learning
Here’s a crucial mindset shift that many teams struggle with: Success in experimentation isn’t about positive results—it’s about learning. Each experiment, regardless of outcome, builds your team’s intuition about:
- Data characteristics and edge cases
- Model behavior and limitations
- Product-market fit
- Technical constraints and opportunities
As Jason Liu points out in his excellent article on running effective AI standups, “The ticket is not the feature, the ticket is the experiment, the outcome is learning.”
Knowledge Sharing: The Missing Link
Individual learning isn’t enough—teams need systematic knowledge sharing. Implement these three essential practices:
1. Structured Experiment Logging
   - Define clear templates for experiment documentation
   - Include hypotheses, methodology, and learnings
   - Document unexpected observations
   - Track failed experiments with clear reasoning
2. Centralized Knowledge Repository
   - Create a single source of truth for experiment results
   - Make it easily searchable and accessible
   - Include both successes and failures
   - Use simple git-based reproducibility approaches
3. Regular Experiment Reviews
   - Schedule weekly “DevMinute” meetings for sharing results
   - Focus on learnings, not just results
   - Use these sessions to identify process improvements
   - Keep presentations short and focused
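A structured log template can be as small as a single record type that renders to markdown for the shared repository. This is one possible sketch, not a prescribed format; the field names and the example entry are assumptions chosen to match the documentation points above.

```python
from dataclasses import dataclass, field

@dataclass
class ExperimentLog:
    title: str
    hypothesis: str
    methodology: str
    outcome: str                 # "success", "failure", or "inconclusive"
    learnings: list[str] = field(default_factory=list)
    unexpected: list[str] = field(default_factory=list)

    def to_markdown(self) -> str:
        """Render the entry for a searchable, centralized repository."""
        lines = [
            f"## {self.title}",
            f"**Hypothesis:** {self.hypothesis}",
            f"**Methodology:** {self.methodology}",
            f"**Outcome:** {self.outcome}",
            "**Learnings:**",
            *[f"- {item}" for item in self.learnings],
        ]
        if self.unexpected:
            lines.append("**Unexpected observations:**")
            lines.extend(f"- {item}" for item in self.unexpected)
        return "\n".join(lines)

# Hypothetical failed experiment, logged with the same care as a success
entry = ExperimentLog(
    title="Few-shot prompts for refund queries",
    hypothesis="Three worked examples will raise routing accuracy above 90%",
    methodology="A/B test on 500 held-out support tickets",
    outcome="failure",
    learnings=["Accuracy rose only 2 points; extra examples crowd out ticket context"],
)
print(entry.to_markdown())
```

Committing these rendered entries to the same git repository as the experiment code is one lightweight way to get the reproducibility mentioned above: the log and the code that produced it share a history.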
Real-world implementation: One team adopted weekly DevMinute meetings where team members give short presentations on their experiments, successful or not. The practice has proven invaluable for building collective knowledge and preventing repeated mistakes.
Enabling Rapid Learning
The key to accelerating your team’s learning isn’t about running more experiments—it’s about removing barriers to learning. Consider these approaches:
- Question Assumptions
  - “What would have to be true to run this experiment in 1 day vs 1 week?”
  - “If we had unlimited resources, how would we approach this?”
  - “What’s the smallest version of this experiment that would still be valuable?”
- Remove Friction
  - Start with basic data access and observability
  - Focus on quick iteration over perfect processes
  - Make it easy to document and share learnings
- Build Team Capability
  - Help the team recognize learning opportunities
  - Create psychological safety for sharing “failures”
  - Celebrate insights, not just successes
The Art of Incremental Improvement
Resist the urge to revolutionize everything at once. New tools and processes require significant investment in:
- Team training
- Documentation
- Integration with existing workflows
- Behavior change
Instead, focus on methodical improvement:
1. Identify the highest-impact learning bottleneck
2. Implement a focused solution
3. Allow time for adoption and adjustment
4. Measure the impact
5. Repeat
Conclusion
Building an experimentation mindset isn’t about having perfect processes—it’s about creating an environment where learning is valued, knowledge is shared, and improvement is continuous. Start with clear business metrics, redefine success around learning, share knowledge systematically, and improve methodically.
Remember: You’re not building routine features, like wiring a React app to a database—you’re pushing the boundaries of what’s possible as new models are developed. Your team’s capacity for effective experimentation will grow naturally from this foundation.