Summary

The transformative power of AI lies not in productivity increases, but in enabling entirely new scientific breakthroughs – particularly in research and development. Microsoft Vice President David Carmona explains why R&D is the core field for AI innovation and how specialized AI agents support scientists in tackling complex problems. The critical point: While AI can generate millions of candidates, scaling fails without strategy, culture, technology, and regulation. Leaders must shift focus from short-term efficiency gains to long-term breakthroughs – while maintaining transparency and human expertise.

People

  • David Carmona – Vice President, Microsoft

Topics

  • AI-driven drug discovery
  • Change management in organizations
  • AI regulation and explainability
  • R&D scaling

Clarus Lead

AI in research and development works completely differently from AI in classical productivity processes. David Carmona of Microsoft emphasizes that the real value lies not in doing previous work faster, but in creating new solutions that would be impossible without AI. In drug discovery, AI systems can generate millions of molecular candidates – far more than the world's laboratories could ever test – yet only a handful actually reach the market. Managing this massive filtering funnel requires not just technology, but a complete redesign of strategy, culture, and governance.


Clarus Original Research

  • Clarus Research: The podcast documents for the first time the four concrete success factors for AI scaling in R&D directly from a Microsoft executive – strategy, culture, technology, and regulation – a framework that extends beyond life sciences and applies to energy, materials science, chemistry, and manufacturing.

  • Classification: The central paradox: While AI hype focuses on automation and cost savings, the most profound breakthroughs emerge in complex, regulated sectors where trust, traceability, and human expertise are not optional but prerequisites.

  • Consequence: Organizations that pilot AI R&D in isolation within centers of excellence will fail. Success requires company-wide culture change, where every R&D employee must master AI – combined with centralized governance for compliance and quality control.


Detailed Summary

The Wrong Focus: Productivity Instead of Breakthrough

Most current AI discussions revolve around efficiency gains: How can professionals complete their work faster? Carmona warns against this perspective for R&D. The transformative power lies in discovering entirely new things that would be impossible without AI. In pharmaceuticals and materials science, the paradox is evident: AI systems can generate millions of drug candidates or molecular combinations. But the reality is brutal: Perhaps two to three candidates per year make it into clinical trials, and even fewer reach the market – at the same or similar costs as today.

This discrepancy makes clear that the problem is not idea generation, but intelligent filtering, prioritization, and validation through simulation, experimentation, and analysis. This is where AI adds the most value: specialized AI agents can accelerate hypothesis-driven screening, simulate thousands of experiments before real lab work begins, and identify patterns in scientific data that humans would miss.
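The funnel described here – millions of generated candidates reduced to what a lab can actually test – can be sketched as a simple in-silico screening loop. This is a minimal, purely illustrative Python sketch; `simulate_score` is a hypothetical stand-in for an expensive simulation, not an actual Microsoft Discovery API:

```python
import heapq
import random

def simulate_score(candidate: int, rng: random.Random) -> float:
    """Stand-in for an expensive in-silico simulation; returns a
    predicted efficacy score for one candidate (random here)."""
    return rng.random()

def screen(num_candidates: int, lab_capacity: int, seed: int = 0) -> list[int]:
    """Generate many candidate IDs, score each virtually, and keep only
    the top `lab_capacity` for physical experiments."""
    rng = random.Random(seed)
    scored = ((simulate_score(c, rng), c) for c in range(num_candidates))
    # heapq.nlargest keeps memory proportional to lab_capacity,
    # not to the (much larger) number of generated candidates.
    return [c for _, c in heapq.nlargest(lab_capacity, scored)]

shortlist = screen(num_candidates=1_000_000, lab_capacity=3)
print(shortlist)  # three candidate IDs out of a million
```

The point of the sketch is the asymmetry: generation is cheap and massively parallel, while the scarce resource is downstream validation capacity, so the ranking step is where the real leverage sits.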

The End-to-End R&D Ecosystem: Orchestra Rather Than Solo Copilot

Carmona describes AI not as a single tool, but as an orchestrator of an entire scientific workflow. The process:

  1. Hypothesis Generation: AI analyzes scientific literature and internal data repositories to identify connections in complex problem domains.
  2. Simulation & Experimentation: Specialized models conduct virtual tests, prioritizing the most promising candidates.
  3. Analysis & Learning: AI evaluates results and informs the next iteration.
  4. Human at Center: Scientists steer, validate, and make critical decisions, supported by transparent, traceable AI recommendations.

This differs fundamentally from scenarios where individual researchers use an AI copilot. Here the entire R&D process is redefined – workflow, roles, metrics, governance.
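The four steps above can be sketched as one iterative loop with a human decision gate. This is a hypothetical Python sketch under stated assumptions: random numbers stand in for real literature mining and simulation, and the audit trail carried by each hypothesis reflects the traceability requirement discussed later:

```python
import random

def generate_hypotheses(n: int) -> list[dict]:
    """Step 1: propose candidate hypotheses (stand-in for mining
    literature and internal data repositories)."""
    return [{"id": i, "trail": ["generated"]} for i in range(n)]

def simulate(h: dict, rng: random.Random) -> float:
    """Step 2: virtual experiment; returns a predicted outcome score."""
    h["trail"].append("simulated")
    return rng.random()

def human_review(h: dict) -> bool:
    """Step 4: a scientist validates; stubbed here as accepting
    only high-scoring hypotheses."""
    h["trail"].append("human-reviewed")
    return h["score"] > 0.9

def rd_loop(iterations: int, batch: int, seed: int = 1) -> list[dict]:
    """Orchestrate generate -> simulate -> analyze -> human decision,
    feeding each iteration's results into the next."""
    rng = random.Random(seed)
    approved: list[dict] = []
    for _ in range(iterations):
        hyps = generate_hypotheses(batch)
        for h in hyps:
            h["score"] = simulate(h, rng)
        # Step 3: analysis – rank results to inform the next iteration.
        hyps.sort(key=lambda h: h["score"], reverse=True)
        # Only the top few reach a human; every touch is recorded in "trail".
        approved += [h for h in hyps[:3] if human_review(h)]
    return approved
```

The design choice worth noting is that the human gate sits inside the loop, not after it: scientists steer each iteration rather than rubber-stamping a finished output.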

The Regulation Hurdle: Explainability as Foundation

One of the biggest lessons from Microsoft Discovery, Microsoft's platform for AI-driven R&D: black-box AI doesn't work in regulated industries. The FDA, EMA, and similar authorities demand not just results, but complete traceability of every decision.

This means concretely:

  • AI systems must be able to explain why they recommend certain molecules, hypotheses, or components.
  • Every step must be traceable: Who decided, based on which data and which model?
  • Humans must remain actively in the loop with the ability to question and override decisions.

Carmona emphasizes: Without this transparency, it's practically impossible for researchers and regulators to trust AI recommendations. This is not a nice-to-have for compliance – it is the gating factor for adoption in the first place.

Four Pillars for Successful AI R&D Scaling

Carmona proposes a framework that goes beyond individual innovations:

1. Strategy: Balancing Long-term + Short-term

  • Moonshot Projects: Investments in entirely new discoveries (e.g., new drug classes).
  • Cognitive Augmentation: AI tools for individual scientists to leverage existing capacity.
  • Productivity Enhancement: Automation of routine tasks.

All three must run in parallel. Critical: They must be connected. Short-term projects should lead to moonshots; moonshots should translate into measurable incremental wins.

2. Culture: People Before Technology

The most common scaling error: AI is seen as a technical problem. In reality, culture change management is the primary hurdle. This requires:

  • Clear Communication: What is the target culture? (Not just "more AI," but e.g., "data-driven hypothesis generation," "experimental mindset," "error tolerance in learning").
  • Broad Upskilling: Every R&D role – from technician to leadership – must understand AI fundamentals and apply AI tools.
  • Empowerment: Teams need the means and motivation to participate in change, not just receive top-down mandates.

Microsoft's Finance department served as a model: through a combination of leadership buy-in, mandatory training for all, and bottom-up use case development, AI was quickly integrated across functions.

3. Technology: Democratization Rather Than Isolation

Many companies build an AI center of excellence that pilots concepts in isolation. This approach fails. What is required instead:

  • Broad Access: Every R&D employee should easily access specialized AI tools without being a data science expert.
  • Platform Rather Than Point Solution: An integrative infrastructure supporting multiple use cases – not specialized tools for individual teams.
  • Integration with Processes: Technology must be embedded in existing workflows, not act as an additional layer.

4. Governance & Regulation: Control Centrally, Execute Decentrally

Here lies the tension: While execution can be distributed, oversight must be central:

  • Centralized Audit Structure: How are AI decisions reviewed? Are they GxP-compliant (GxP: Good Practice guidelines such as GLP and GMP)?
  • Measurable Metrics: Not just "less time," but "how many validated candidates reach Phase II?", "what error rates in predictions?"
  • Compliance-by-Design: Governance must be built into the platform, not layered on afterwards.
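The outcome-oriented metrics proposed above reduce to simple ratios. A minimal sketch, with illustrative field names (the counts themselves are made up):

```python
def rd_metrics(candidates_validated: int, reached_phase2: int,
               predictions: int, prediction_errors: int) -> dict[str, float]:
    """Outcome metrics for AI R&D governance: conversion of validated
    candidates into Phase II trials, and the error rate of model
    predictions – rather than time saved."""
    return {
        "phase2_conversion": reached_phase2 / candidates_validated,
        "prediction_error_rate": prediction_errors / predictions,
    }

print(rd_metrics(candidates_validated=40, reached_phase2=2,
                 predictions=1_000, prediction_errors=50))
# -> {'phase2_conversion': 0.05, 'prediction_error_rate': 0.05}
```

Tracking ratios like these shifts the governance conversation from activity ("how much faster?") to outcomes ("how many predictions held up?").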

Key Messages

  • AI value in R&D is not incremental efficiency, but entirely new scientific possibilities. While AI can generate millions of candidates, intelligent filtering through simulation and analysis is the real differentiator.

  • Specialized AI agents orchestrate entire workflows: Hypothesis generation → Simulation → Experimentation → Analysis, always with humans at the center.

  • Transparency and traceability are not optional compliance features – they are prerequisites for trust and adoption in regulated industries.

  • Four pillars for successful scaling: Strategy (balance moonshots + incremental), Culture (upskilling + empowerment + communication), Technology (democratization + integration), and Governance (central audit + measurable metrics).

  • The biggest scaling error: Piloting AI R&D in isolation within centers of excellence instead of changing company-wide culture.


Stakeholders & Affected Parties

| Stakeholder | Role / Impact |
| --- | --- |
| R&D Leaders & Scientists | Must develop new skills, redefine processes, and build trust in AI recommendations – while maintaining responsibility for regulatory compliance. |
| C-Suite / CEO | Bears responsibility for company-wide culture change; the success or failure of AI R&D depends heavily on their top-down commitment. |
| Regulators (FDA, EMA, etc.) | Must develop new standards for AI validation, explainability, and audit in drug discovery. |
| Patients / Market | Benefit from faster, more precise drug and material discovery – or wait longer if scaling fails. |
| Data Science / IT Teams | Must build platforms that are easily accessible, not just powerful for experts. |

Opportunities & Risks

| Opportunities | Risks |
| --- | --- |
| Drugs for rare diseases: AI can identify candidates classical methods miss. | Regulatory blockade: Authorities might reject AI validation if standards are missing. |
| Accelerated materials development: Energy, chemistry, and semiconductors benefit from faster simulation. | Culture clash: Employee resistance to process changes can stall scaling. |
| Reduced experimental costs: Fewer physical tests through better pre-simulation. | Bias in AI models: If training data isn't representative, systematic errors can emerge. |
| New revenue lines: Organizations can invent entirely new product categories. | Talent scarcity: R&D employees with AI skills are scarce and expensive. |
| Early competitive advantage: First movers with an established AI R&D culture dominate faster. | Over-automation: Risk of displacing human expertise before AI is truly ready. |

Action Relevance

For R&D Leaders (next 3–6 months):

  1. Clarify Strategy: Define your three AI investment levels (moonshot / cognitive augmentation / productivity). Are they connected?
  2. Culture Audit: What cultural attributes do you need for AI success (data-driven, experimental, error-tolerant)? Are they present today?
  3. Communicate Quick Wins: Pilot 2–3 use cases showing short-term success to build momentum.

For C-Suite / CEO:

  1. Establish Culture Mandate: Make explicit the target culture and allocate resources for change management – not just technology investment.
  2. Set Integration Goals: Define how AI R&D connects to business strategy; avoid isolated pilot mentality.
  3. Ensure Governance Framework: Work with Legal and Compliance to develop early standards for AI transparency and audit before scale.

For Regulators:

  1. Develop Standards: Establish clear requirements for AI explainability and validation in regulated domains.
  2. Engage Industry: Co-develop frameworks with companies and scientists; avoid prescriptive rules that stifle innovation.

Long-term Outlook

The next 2–3 years will separate winners from losers in AI R&D. Organizations that treat AI as a technology problem will plateau; those that treat it as a culture and strategy challenge will compound advantages exponentially. The moonshot mindset – investing in long-term breakthroughs while maintaining short-term discipline – combined with transparent, explainable AI, will define the next generation of scientific innovation.