
By Ramji Vasudevan

The drugs reaching the market in the coming years will increasingly be discovered, optimised, and validated by systems that many CEOs struggle to explain to their boards. Without governance frameworks that regulators can verify, innovation stalls.

IDC forecasts that global AI investment will reach $632 billion by 2028, and Gartner predicts that by 2026, more than 80 per cent of businesses will have deployed generative AI-enabled applications in production environments, up from less than 5 per cent in 2023.

But as the industry has learnt, velocity without security creates liability.

As AI gains autonomy and regulatory frameworks tighten globally, the companies that secure their AI infrastructure will outpace those that defer it.

The leadership gap

McKinsey’s 2025 analysis reveals the other side of the coin. Only 1 per cent of CEOs believe their companies have mastered AI, yet roughly one in four companies now assign CEO-level oversight, with some extending it to the board. This signals accountability, and AI security depends on that executive ownership.

Leadership culture determines whether AI governance accelerates or constrains operations. Companies that treat security as a technical afterthought will struggle when regulatory scrutiny intensifies. Those that build it into decision-making from the start move faster. The same analysis reports that 13 per cent of companies overall have created dedicated AI governance or risk roles, signalling that this expertise is becoming as critical as expertise in drug development itself.

The gap is not technical but one of leadership. Executives must balance innovation velocity with model transparency, data provenance and auditability across R&D, manufacturing and commercial operations. When governance is built into AI systems from the foundation, compliance accelerates deployment rather than delaying it. Security and speed become interdependent.

The regulatory convergence

Regulatory agencies worldwide are tightening oversight around shared principles. Transparency, reproducibility and auditability are no longer aspirational; they are essential. In January 2025, the FDA issued draft guidance requiring post-market monitoring, bias mitigation and transparency for AI-enabled medical devices. This applies to any AI system involved in diagnostics, patient care or clinical decision-making.

The European Medicines Agency’s September 2024 Reflection Paper establishes that sponsors must “ensure that all algorithms, models, datasets and data processing pipelines used are fit for purpose and are in line with legal, ethical, technical, scientific and regulatory standards.” The paper warns that “new risks are introduced that need to be mitigated to ensure the safety of patients and integrity of clinical study results.” The MHRA set out a similar strategy in April 2024, prioritising safety, robustness, fairness and explainability. The ICH E6(R3) update, finalised in early 2025, codified these expectations globally, requiring full data integrity and audit trail documentation for clinical trials.

These frameworks create both pressure and strategic advantage. Pharmaceutical companies that view compliance as a constraint will lag behind competitors that recognise security as the foundation for speed. Regulators are not demanding that AI innovation slow down. But they do require proof that systems function as intended, that results are reproducible and that decisions are traceable. This means demonstrating that an AI model recommending a compound for Phase II trials can explain its reasoning in terms a regulatory reviewer understands. It means proving that manufacturing parameter adjustments made by autonomous systems outside working hours are auditable six months after the fact. Building this capability early means faster approvals, confident scaling and lower remediation costs.

Agentic risk and future resilience

The next wave of operational risk emerges as AI systems gain autonomy. A September 2025 analysis emphasises that as agentic AI becomes embedded in critical workflows, companies must establish strong governance frameworks with clear accountability for decisions made by agents, guardrails to prevent unintended consequences and regular audits to ensure compliance. These systems do not wait for instructions. They identify problems, propose solutions and execute tasks across interconnected operations. This autonomy accelerates drug discovery and clinical trial design, but it also creates vulnerabilities that traditional security models cannot address.

The companies that will lead in the next decade are already building this infrastructure. When an agentic system excludes a patient cohort or flags a safety signal, regulators will demand explanations. Security becomes the foundation that enables autonomous intelligence to operate safely at scale.

We have seen this first-hand through our own ALTi AI Adoption Lab, which provides pharmaceutical companies with secure environments to prototype and validate agentic systems before scaling them. Testing autonomous capabilities within frameworks designed for regulatory scrutiny from the outset shows that building systems regulators can verify removes friction rather than adding it.

AI governance does not have to be a bureaucratic overlay. It is the quality standard of digital pharmaceutical operations. The companies succeeding in 2030 will be those that recognised early that governance unlocks speed rather than constraining it. Leadership accountability is transforming governance into a competitive advantage as regulatory convergence establishes a global baseline.

Innovation now depends on a capability boards often underestimate – explaining the intelligence that drives their pipeline.