
Why AI in Healthcare Has a Security Problem

Every health AI model is a decision engine — and an attack surface.

The Risks (with Evidence)

  • Adversarial examples can derail medical imaging AI, as documented in a systematic review across radiology (European Journal of Radiology); a sketch of the attack follows this list.
  • Data poisoning, model inversion and model extraction are recognised clinical AI risks, with mitigations such as audit trails and continuous monitoring (García-Gómez et al.).
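
To make the first risk concrete, here is a minimal sketch of a fast-gradient-sign (FGSM) style perturbation, the textbook adversarial-example technique: a change too small to see can be enough to flip a classifier's output. The PyTorch model, inputs and epsilon value below are illustrative placeholders, not taken from the cited review.

```python
# Illustrative FGSM-style perturbation (assumes a PyTorch image classifier).
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Return a copy of `image` nudged in the direction that maximises the loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    perturbed = image + epsilon * image.grad.sign()   # tiny, structured noise
    return perturbed.clamp(0, 1).detach()             # keep pixels in a valid range

# Hypothetical usage with a chest X-ray classifier:
# adv = fgsm_perturb(model, xray_batch, true_labels)
# model(adv).argmax(dim=1)  # may now disagree with the clean prediction
```

The same idea reappears in step 6 of the defence framework below: adversarial testing means generating perturbations like this against your own model before an attacker does.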

Why Healthcare Is Special

  • High stakes, legacy networks, and fragile systems — the WannaCry ransomware attack disrupted NHS care at scale (UK National Audit Office).

Framework for Defence

  1. Threat modelling & asset inventory
  2. Data integrity controls (see the hash-based sketch after this list)
  3. Access isolation
  4. Logging & audit trails
  5. Drift monitoring (see the distribution-check sketch after this list)
  6. Adversarial testing
  7. Rollback plan
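
One way to operationalise step 2 is a hash manifest over the training data, so tampering or silent corruption is detectable before retraining. The file layout and manifest format below are assumptions for illustration, not a prescribed tool.

```python
# Illustrative data-integrity check: fingerprint every training file with SHA-256.
import hashlib
import json
from pathlib import Path

def build_manifest(data_dir: str, out_file: str = "manifest.json") -> dict:
    """Record a SHA-256 digest for every file under data_dir."""
    manifest = {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(Path(data_dir).rglob("*")) if p.is_file()
    }
    Path(out_file).write_text(json.dumps(manifest, indent=2))
    return manifest

def verify_manifest(manifest_file: str = "manifest.json") -> list:
    """Return the files whose contents no longer match their recorded digest."""
    manifest = json.loads(Path(manifest_file).read_text())
    return [
        path for path, digest in manifest.items()
        if not Path(path).exists()
        or hashlib.sha256(Path(path).read_bytes()).hexdigest() != digest
    ]
```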
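
For step 5, a minimal drift check can compare recent model scores against a reference window recorded at validation time. The two-sample Kolmogorov–Smirnov test, the alpha threshold and the window sizes here are illustrative choices, not the only option.

```python
# Illustrative drift check: compare live model confidence scores to a baseline.
import numpy as np
from scipy.stats import ks_2samp

def check_drift(reference_scores, recent_scores, alpha=0.01):
    """Flag drift when recent scores no longer match the reference distribution."""
    stat, p_value = ks_2samp(reference_scores, recent_scores)
    return {"statistic": float(stat), "p_value": float(p_value), "drift": p_value < alpha}

# Synthetic stand-ins for real score streams:
baseline = np.random.beta(2, 5, size=5000)  # scores observed during validation
live = np.random.beta(2, 3, size=500)       # scores observed in the last week
print(check_drift(baseline, live))          # 'drift': True once the shift is large enough
```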

This framework aligns with the EU AI Act’s obligations for high-risk systems: risk management, logging and human oversight (European Commission).

In healthcare, AI isn’t “just software” — it’s safety-critical infrastructure.

By Piotr Wrzosinski

Piotr Wrzosinski is a Pharma and MedTech commercialization and digital marketing expert with 20+ years of experience across pharma (Roche, J&J), consulting (Accenture, IQVIA) and medical devices (BD).
He leads the EMEA Omnichannel Delivery Center team at Becton Dickinson and shares insights on Pharma, MedTech and Digital Health at disrupting.healthcare to speed up digital innovation in healthcare, because patients are waiting for it.
