Every health AI model is a decision engine — and an attack surface.
The Risks (with Evidence)
- Adversarial examples can mislead medical imaging AI, as documented in a systematic review across radiology (European Journal of Radiology); see the FGSM sketch after this list.
- Data poisoning, model inversion, and model extraction are recognised clinical AI risks, with mitigations such as audit trails and continuous monitoring (García-Gómez et al.).
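To make the adversarial-example risk concrete, here is a minimal fast gradient sign method (FGSM) sketch in PyTorch. The `model`, `image`, `label`, and `epsilon` values are illustrative placeholders, not anything drawn from the cited review; a real adversarial test suite would go well beyond this.

```python
# Minimal FGSM sketch (illustrative only): nudge each pixel in the direction
# that increases the model's loss, bounded by a small epsilon budget.
import torch
import torch.nn.functional as F

def fgsm_perturb(model: torch.nn.Module, image: torch.Tensor,
                 label: torch.Tensor, epsilon: float = 0.01) -> torch.Tensor:
    """Return an adversarially perturbed copy of `image` (pixel values in [0, 1])."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)  # model is assumed to return logits
    loss.backward()
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```

A perturbation this small is typically invisible to a clinician, which is precisely what makes the imaging findings above alarming.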
Why Healthcare Is Special
- High stakes, legacy networks, and fragile systems — the WannaCry ransomware attack disrupted NHS care at scale (UK National Audit Office).
Framework for Defence
- Threat modelling & asset inventory
- Data integrity controls (see the hash-and-audit sketch after this list)
- Access isolation
- Logging & audit trails
- Drift monitoring (see the KS-test sketch after this list)
- Adversarial testing (the FGSM sketch above is one minimal starting point)
- Rollback plan
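As a sketch of how data integrity controls and audit trails can work together, the snippet below verifies dataset files against a SHA-256 manifest recorded at ingest time and writes every check to a log file. The directory layout, manifest format, and `audit.log` destination are assumptions for illustration, not a prescribed implementation.

```python
# Illustrative data-integrity check: compare each file's SHA-256 hash against a
# recorded manifest and write every outcome to an audit log. Paths are assumptions.
import hashlib
import json
import logging
from pathlib import Path

logging.basicConfig(filename="audit.log", level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_dataset(data_dir: str, manifest_path: str) -> bool:
    """Return True only if every file matches the hash recorded in the manifest."""
    manifest = json.loads(Path(manifest_path).read_text())
    ok = True
    for name, expected in manifest.items():
        actual = sha256(Path(data_dir) / name)
        if actual == expected:
            logging.info("verified %s", name)
        else:
            logging.warning("integrity failure in %s: expected %s, got %s",
                            name, expected, actual)
            ok = False
    return ok
```

Re-running a check like this before every retraining run is one cheap way to surface the poisoning risk listed above.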
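For drift monitoring, one simple starting point is a two-sample statistical test between a reference window of prediction scores and a recent production window. The beta-distributed placeholder scores and the 0.05 threshold below are assumptions for illustration.

```python
# Illustrative drift check: a two-sample Kolmogorov-Smirnov test comparing a
# reference score distribution with recent production scores.
import numpy as np
from scipy.stats import ks_2samp

def scores_have_drifted(reference: np.ndarray, current: np.ndarray,
                        alpha: float = 0.05) -> bool:
    """Flag drift when the two score distributions differ significantly."""
    result = ks_2samp(reference, current)
    return result.pvalue < alpha

# Placeholder data: a shifted distribution stands in for post-deployment drift.
rng = np.random.default_rng(0)
reference = rng.beta(2, 5, size=1000)
current = rng.beta(2, 3, size=1000)
print(scores_have_drifted(reference, current))  # True for this shifted example
```

A drift flag like this would typically trigger human review and, if the change is confirmed, the rollback plan above.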
This framework aligns with the EU AI Act’s high-risk obligations: risk management, logging, and human oversight (European Commission).
In healthcare, AI isn’t “just software” — it’s safety-critical infrastructure.