
ICMRA sets out recommendations for AI regulation in medicine


The ICMRA has set out recommendations to help regulators tackle the challenges posed by artificial intelligence (AI) in medicine.


The International Coalition of Medicines Regulatory Authorities (ICMRA) has published a report containing recommendations to help regulators address the challenges that the use of artificial intelligence (AI) poses for global medicines regulation.

In the report, the definition of AI includes various technologies, such as statistical models, diverse algorithms and self-modifying systems. These technologies, it states, are increasingly being applied across all stages of a medicine’s lifecycle from pre-clinical development, to clinical trial data recording and analysis, to pharmacovigilance and clinical use optimisation.

According to the ICMRA, this range of applications brings numerous regulatory challenges, including the transparency and interpretability of algorithms, as well as the risks of AI failures and the wider impact such failures would have on AI uptake in medicine development and on patients’ health.

The report identifies key issues linked to the regulation of future therapies using AI and makes specific recommendations for regulators and stakeholders involved in medicine development to foster the uptake of AI. Some of the main findings and recommendations include:

  • Regulators may need to apply a risk-based approach to assessing and regulating AI, which could be informed through exchange and collaboration in ICMRA;
  • Sponsors, developers and pharmaceutical companies should establish strengthened governance structures to oversee algorithms and AI deployments that are closely linked to the benefit/risk of a medicinal product;
  • Regulatory guidelines for AI development, validation and use with medicinal products should be developed in areas such as data provenance, reliability, transparency and understandability, pharmacovigilance, and real-world monitoring of patient functioning.

The report is based on a horizon-scanning exercise in AI, conducted by the ICMRA Informal Network for Innovation working group and led by the European Medicines Agency (EMA). The goal of the network is to identify challenging topics for medicine regulators, to explore the suitability of existing regulatory frameworks and to develop recommendations to adapt regulatory systems in order to facilitate safe and timely access to innovative medicines.

The full report is available on the ICMRA website.
