Healthcare faces growing AI medical device security risks

Wed, 29th Apr 2026

RunSafe Security has released research on cybersecurity risks linked to AI-enabled medical devices, based on a survey of more than 550 healthcare decision-makers in the US, UK and Germany.

The findings highlight a gap between the rapid adoption of AI-assisted and AI-enabled devices and healthcare organisations' ability to secure them. They also suggest that hospitals and health systems are introducing these tools into environments that already contain unsupported or unpatchable systems.

The report, 2026 Medical Device Cybersecurity Index, examines how healthcare providers are responding to a new set of threats tied to AI in clinical settings. These include model manipulation, adversarial inputs and data integrity issues, extending risk beyond conventional software flaws and device vulnerabilities.

The research summary indicates that some organisations are already deploying AI-enabled devices while acknowledging they do not fully understand or control the risks. The pattern echoes earlier waves of cloud and connected-device adoption, when implementation outpaced governance, monitoring and procurement standards.

Expanding risk

AI in medical devices creates additional points of failure because the software may depend on training data, model behaviour and input quality as well as code. In practice, security teams may need to assess not only whether a device contains a known software defect, but also whether it can be manipulated through corrupted data, misleading prompts or altered model outputs.
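To illustrate what that shift looks like in practice, the sketch below gates inputs on basic physical plausibility before they reach a diagnostic model. It is a minimal illustration rather than anything drawn from the report: the image dimensions, pixel range and spectral threshold are assumed placeholder values, not vendor specifications.

```python
import numpy as np

# Hypothetical plausibility bounds for a CT-style image; the shape,
# value range and spectral threshold are illustrative assumptions.
EXPECTED_SHAPE = (512, 512)
PIXEL_RANGE = (-1024.0, 3071.0)   # typical Hounsfield unit span
MAX_HF_ENERGY = 0.35              # max share of energy in high frequencies

def high_frequency_ratio(image: np.ndarray) -> float:
    """Estimate the share of spectral energy outside the low-frequency
    band; corrupted or perturbed inputs often add this kind of noise."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    low_band = (yy - cy) ** 2 + (xx - cx) ** 2 <= (min(h, w) // 8) ** 2
    total = spectrum.sum()
    return float(spectrum[~low_band].sum() / total) if total else 0.0

def input_is_plausible(image: np.ndarray) -> bool:
    """Reject inputs that violate basic physical or spectral bounds
    before they ever reach the diagnostic model."""
    if image.shape != EXPECTED_SHAPE:
        return False
    if image.min() < PIXEL_RANGE[0] or image.max() > PIXEL_RANGE[1]:
        return False
    return high_frequency_ratio(image) <= MAX_HF_ENERGY

if __name__ == "__main__":
    yy, xx = np.mgrid[0:512, 0:512]
    smooth_scan = (yy + xx) / 2.0   # smooth synthetic stand-in scan
    noisy_scan = smooth_scan + np.random.default_rng(0).normal(
        0, 100, EXPECTED_SHAPE)     # heavily corrupted stand-in
    print("smooth:", input_is_plausible(smooth_scan))  # expected: True
    print("noisy: ", input_is_plausible(noisy_scan))   # expected: False
```

A check like this cannot catch every manipulated input, but it reflects the change the research describes: part of the assessment moves from whether the code is patched to whether the data feeding the model can be trusted.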

The research suggests existing healthcare security frameworks were not designed for these scenarios. Many were built around patch management, asset inventories and network defence for traditional IT systems, then adapted over time for connected medical equipment. AI adds another layer that can be harder to test and monitor using established methods.

The study also points to growing pressure on security and procurement teams. Healthcare organisations are beginning to include AI risk in purchasing and review processes, but standard ways to evaluate AI-enabled systems are still emerging. As a result, hospitals are balancing demand for new tools with limited guidance on how to compare products or define acceptable risk.

Legacy systems

One of the clearest concerns raised by the findings is the interaction between AI-enabled tools and older clinical infrastructure. Many healthcare environments still rely on legacy equipment that cannot be patched easily, or at all, because of regulatory, operational or vendor constraints. When new AI functions are layered onto those systems, risk can spread across connected workflows.

That matters in hospitals because devices are rarely isolated. Imaging systems, patient monitors, infusion equipment and other connected tools often sit within broader clinical networks, exchange data with electronic records and support time-sensitive decisions. A weakness in one part of that chain can affect more than a single device.

RunSafe's summary suggests defensive approaches are beginning to shift in response. Runtime protection and continuous monitoring are gaining attention as ways to secure systems that are difficult to patch or face threats evolving faster than legacy controls can adapt.
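As a rough sketch of what continuous monitoring could mean in code, the example below wraps a device's model outputs in a monitor that compares a rolling window of confidence scores against a known-good baseline and alerts on sustained drift. The baseline figures, window size and threshold are assumptions for illustration, not values from the study.

```python
from collections import deque
import random
import statistics

class OutputDriftMonitor:
    """Watches a device's model outputs and flags sustained drift from
    a known-good baseline; the device itself is never modified."""

    def __init__(self, baseline_mean: float, baseline_stdev: float,
                 window: int = 200, z_threshold: float = 4.0):
        # Baseline statistics would come from clinical validation data;
        # here they are assumed values for illustration.
        self.baseline_mean = baseline_mean
        self.baseline_stdev = baseline_stdev
        self.scores = deque(maxlen=window)
        self.z_threshold = z_threshold

    def record(self, confidence: float) -> bool:
        """Add one model confidence score; return True once the rolling
        mean drifts beyond the alert threshold."""
        self.scores.append(confidence)
        if len(self.scores) < self.scores.maxlen:
            return False                      # window still filling
        drift = abs(statistics.fmean(self.scores) - self.baseline_mean)
        sem = self.baseline_stdev / (len(self.scores) ** 0.5)
        return drift / sem > self.z_threshold

if __name__ == "__main__":
    random.seed(0)
    monitor = OutputDriftMonitor(baseline_mean=0.92, baseline_stdev=0.05)
    # Simulate healthy output, then a shift in model behaviour, for
    # example after data poisoning or a silent model update.
    stream = [random.gauss(0.92, 0.05) for _ in range(300)]
    stream += [random.gauss(0.70, 0.05) for _ in range(300)]
    for i, score in enumerate(stream):
        if monitor.record(score):
            print(f"drift alert at observation {i}")
            break
```

The design point is that the monitor observes the device's outputs without touching the device itself, which is why approaches like this appeal for legacy equipment that cannot be patched.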

Governance gap

The broader theme of the research is that healthcare is again adopting a major technology layer before governance has fully formed around it. In earlier transitions, including the spread of cloud services and internet-connected medical devices, security teams often had to retrofit policy and control frameworks after deployment had already begun. The survey suggests AI is following a similar path.

This timing challenge is especially acute in healthcare, where procurement cycles, clinical validation, cybersecurity oversight and regulatory compliance all intersect. A device may promise gains in diagnosis, workflow or patient management, but the security questions often extend far beyond whether its code base is up to date. Buyers may also need to understand how models are trained, how outputs are validated and how anomalies are detected once systems are in use.

The findings also point to an organisational issue. Security teams are being drawn into AI decision-making, yet many still lack clear frameworks for doing so. That can make it harder to determine who is responsible for reviewing AI risk: IT security, biomedical engineering, procurement leaders, clinical safety teams, or some combination of those groups.

Healthcare providers in the three surveyed markets may face different regulatory environments, but the underlying problem is similar: adoption is moving ahead of agreed controls. As AI-assisted functions become more common in medical devices, cybersecurity teams may have to manage not only software exposure, but also questions of model trust, data integrity and operational resilience in environments that remain heavily dependent on ageing infrastructure.

The study concludes that new defensive approaches, including runtime protection and continuous monitoring, are gaining traction where traditional patching and existing controls fall short.