Two watchdogs, one playbook. On 14 January 2026, the European Medicines Agency and the U.S. FDA jointly published “Guiding principles of good AI practice in drug development,” a concise set of 10 rules for using AI across the medicines lifecycle, from data collection and model building to post-market monitoring.
It’s not binding guidance, but it’s the clearest signal yet of how EU and U.S. reviewers want AI-supported evidence generated and governed. See the EMA news item, the FDA page, and the PDF.
What changed (and why it matters)
• Alignment: EMA and FDA are now publicly aligned on AI “good practice.” That reduces ambiguity for EU startups planning global trials and submissions.
• Scope: The principles span the full lifecycle, not just model validation. They expect quality data, clear context of use, risk-based methods, human factors, documentation, and ongoing monitoring.
• Trajectory: EMA explicitly says this will underpin future, more detailed guidance in Europe—so aligning now is a head start on tomorrow’s rules.
The 10 principles in plain English
The FDA page lists them verbatim; here’s the founder-friendly translation you can map to your technical file.
- Human-centric by design → Keep clinicians and patients in the loop; design for usability and safety.
- Risk-based approach → Tie effort to risk: higher-risk claims need deeper controls and evidence.
- Adherence to standards → Use recognised data, quality, and interoperability standards wherever possible.
- Clear context of use → Write a tight intended use and test against exactly that.
- Multidisciplinary expertise → Put clinical, statistical, data, and regulatory people at the same table.
- Data governance & documentation → Prove data provenance, consent, representativeness, and lineage.
- Model design & development practices → Follow good ML engineering: versioning, reproducibility, and guardrails.
- Risk-based performance assessment → Validate with clinically relevant metrics, thresholds, and comparators.
- Lifecycle management → Define how you will change the model, when you’ll revalidate, and how you’ll communicate changes.
- Clear, essential information → Document what matters: assumptions, limitations, and how humans should use the outputs.
What this means for EU medtech/digital health
SaMD and clinical-decision support
Even though the document targets medicines, the expectations rhyme with MDR Annex I (safety/performance), post-market surveillance, and software lifecycle standards (e.g., IEC 62304/82304-1). If your device or DTx relies on AI to generate evidence, reviewers will look for the same hygiene: traceable data, testable claims, human factors, and change control.
Trials with AI in the loop
If you use AI for recruitment, endpoint assessment, imaging reads, or safety signal detection, your protocol should specify the AI’s role, failure modes, human oversight, and re-read plans. That creates audit-ready evidence for both EMA and national CAs/ethics committees.
Global dossiers
Consistency across the Atlantic lowers duplication. You can design one validation program that serves EU HTA and U.S. review, with local adaptations instead of two separate playbooks (FDA page, Jan 14, 2026).
Monday-morning actions
- Lock down data lineage
• Create a “Data Provenance & Representativeness” appendix: sources, consent terms, inclusion criteria, demographics, and known skews.
• Add traceability to your data pipeline (IDs, hashes, timestamps) so you can recreate every training and validation set.
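A minimal sketch of what that traceability bullet could look like in practice: a manifest that records an ID, SHA-256 hash, and timestamp for every file in a dataset split, so any later change to the data is detectable. The file layout and field names here are illustrative assumptions, not anything the principles prescribe.

```python
# Sketch: a dataset manifest for training/validation traceability.
# Paths and field names are illustrative assumptions.
import hashlib
import json
import time
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file so any later change to the data is detectable."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(data_dir: str, split: str) -> dict:
    """Record ID, hash, and size for every file in a dataset split."""
    files = sorted(Path(data_dir).glob("*"))
    return {
        "split": split,
        "created_utc": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "files": [
            {"id": p.name, "sha256": sha256_of(p), "bytes": p.stat().st_size}
            for p in files if p.is_file()
        ],
    }

# Usage: write the manifest next to the split so every training run
# can point at an exact, verifiable snapshot of its inputs, e.g.:
# manifest = build_manifest("data/train_v3", split="train")
# Path("data/train_v3.manifest.json").write_text(json.dumps(manifest, indent=2))
```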
- Nail intended use and claims
• Write one-sentence intended use that is measurable.
• Define performance metrics and clinically meaningful thresholds tied to that use.
- Predefine change management
• Draft an SOP distinguishing minor vs. major model changes, required revalidation, and notification/communication steps.
• Add versioning and a Model Bill of Materials (MBOM): training data versions, model hash, code tag, and inference stack.
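One lightweight way to keep an MBOM is a versioned record checked in alongside each release, as in the sketch below. Every field name and value is an illustrative assumption; adapt the schema to your own stack.

```python
# Sketch: a Model Bill of Materials (MBOM) record kept under version control.
# All names and values below are hypothetical placeholders.
import json

mbom = {
    "model_name": "triage-classifier",       # hypothetical model
    "model_version": "1.4.0",
    "model_sha256": "…",                     # hash of the serialized weights
    "code_tag": "git:v1.4.0",                # exact code revision used to train
    "training_data": ["train_v3.manifest.json"],   # dataset manifest files
    "validation_data": ["val_v3.manifest.json"],
    "inference_stack": {"runtime": "onnxruntime", "version": "1.18"},
    "change_class": "minor",                 # per your change-management SOP
}

# Serialize it so the record travels with the release artifacts.
print(json.dumps(mbom, indent=2))
```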
- Build human-in-the-loop safeguards
• Map where humans review, override, or arbitrate the AI’s outputs.
• Run a small usability study to surface error modes and handoff issues.
- Plan for post-market monitoring
• Specify quality metrics you’ll track in the wild (e.g., drift signals, subgroup performance) and when you’ll trigger a recall or revalidation.
• Wire these metrics into your PMS plan and risk file so they’re not an afterthought.
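As one concrete example of a drift signal you could wire into a PMS plan, here is a sketch of the Population Stability Index (PSI). The 0.1/0.2 thresholds are a common rule of thumb, not a regulatory value; your plan should define its own metrics and triggers.

```python
# Sketch: Population Stability Index (PSI) as one possible drift signal.
# Thresholds below are rule-of-thumb assumptions, not regulatory values.
import math

def psi(expected: list[float], observed: list[float], bins: int = 10) -> float:
    """Compare a score's distribution in production ('observed')
    against the validation baseline ('expected')."""
    lo = min(min(expected), min(observed))
    hi = max(max(expected), max(observed))
    width = (hi - lo) / bins or 1.0

    def frac(vals):
        counts = [0] * bins
        for v in vals:
            i = min(int((v - lo) / width), bins - 1)
            counts[i] += 1
        # Floor at a tiny value to avoid log(0) for empty bins.
        return [max(c / len(vals), 1e-6) for c in counts]

    e, o = frac(expected), frac(observed)
    return sum((oi - ei) * math.log(oi / ei) for ei, oi in zip(e, o))

def drift_action(psi_value: float) -> str:
    """Map the signal to a predefined response from the PMS plan."""
    if psi_value < 0.1:
        return "no action"
    if psi_value < 0.2:
        return "investigate"
    return "trigger revalidation"
```

The point of predefining `drift_action` is that the response to drift is written down before launch, so a drifting model triggers a documented process rather than an ad hoc debate.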
Map the principles to your technical file
Create a simple table that maps each principle to an artifact in your dossier.
Example rows:
• Human-centric by design → Usability protocol + results; human oversight points in workflow; risk controls.
• Risk-based approach → Risk file linking harms to claims; test depth matched to risk.
• Standards → List of applied standards and why; any justified deviations.
• Context of use → Intended use statement; inclusion/exclusion criteria; performance targets.
• Multidisciplinary expertise → RACI for clinical, stats, data, and regulatory roles; meeting notes.
• Data governance → Data management plan; provenance report; DPA/consent library.
• Model practices → Training/validation plan; reproducibility checklist; code and model versioning.
• Performance assessment → Statistical analysis plan; benchmarks; reader studies if applicable.
• Lifecycle management → Change control SOP; MBOM; release notes.
• Clear information → IFU/labeling; limitations; known failure modes and mitigations.
Funding hook
IHI’s new applicant-driven Call 12 is open with €163m, and proposals that show robust data governance, bias testing, and real-world performance monitoring will land better with evaluators. Bake the 10 principles into your work plan and budget lines (e.g., for bias audits and post-market analytics).
Quick FAQ on the EMA–FDA 10 AI principles
Is this binding guidance?
No. It’s a set of joint principles. EMA notes it will underpin future guidance and align ongoing EU rulemaking. Treat it as strong direction, not optional reading.
Does it apply to medtech and digital health, not just medicines?
Yes, if your AI helps generate evidence in trials or registries, or if the device itself needs lifecycle AI controls. The same reviewers will expect the same hygiene.
Where should we start?
Ship the MBOM and change-management SOP. These two artifacts de-risk updates, make audits easier, and force clarity on who does what when models change.
10 AI Principles Mini-checklist (copy/paste)
• Intended use written and testable
• Data provenance + representativeness report
• Predefined change-management plan (minor vs. major)
• Validation plan tied to clinical claims
• Human-in-the-loop design + usability evidence
• Post-market monitoring metrics and triggers
• Versioning/MBOM and audit trail in place
One thing to remember
Harmonisation just got real. If you’re building AI-enabled medtech in Europe, use the EMA–FDA principles to tighten your claims, your validation, and your change control now. You’ll cut review friction later, and your dossier will look like it was written by people who read the rules.
