Is Healthcare Ready for AI Cyber Threats?

Is Healthcare Ready for AI Cyber Threats?

We’re joined today by James Maitland, a leading expert in medical robotics and IoT applications, whose work focuses on integrating advanced technology safely and effectively into healthcare. With the explosion of generative AI, the healthcare sector faces a dual reality: unprecedented opportunity for efficiency and a dramatically expanded cyber-attack surface. We’ll explore how to navigate this new terrain, discussing the critical need for AI verification, the emergence of new security disciplines like “red teaming,” and the architectural shifts required to defend against AI-powered threats that operate at lightning speed.

Given the difficulty in detecting whether an AI’s error is a simple hallucination or a malicious manipulation, what specific steps can a hospital take? Could you detail how they might verify the provenance of everything from the AI model itself to its underlying training data?

This is exactly where my concern for the future lies. Today, we’re primarily just talking to AI systems, but we’re on the cusp of an era where we’ll be watching them perform actions independently. The core challenge is that it’s already hard to spot when an AI is simply wrong, let alone when that error has been deliberately introduced by a malicious actor. Distinguishing between a hallucination and a targeted manipulation to achieve a specific, harmful outcome will become incredibly difficult. Therefore, organizations must demand complete provenance for everything. This means having a clear, unbroken chain of evidence from the model being served to you, through the technology it was built with, all the way back to the provider of the model and the specific data used to train it. True visibility is going to be absolutely critical for managing these new risks. We need practices like cryptographic binary signing, which allows us to know with certainty where the code running a model originated. A hospital should be able to trace every single record that went into a model’s training, from its birth to its death, to have any hope of trusting its outputs.

New security disciplines like AI “red teaming” are emerging to test system limits. What does building an effective red team look like in a healthcare setting, and how can organizations cultivate the unique blend of clinical, technical, and risk-assessment skills needed for robust AI governance?

We’re definitely seeing the birth of entirely new security disciplines, and AI red teaming is a major one. In a healthcare context, this means deploying a dedicated team of specialists whose entire job is to try and make these AI systems break. They actively attempt to provoke the models into producing harmful content or taking unintended actions. It’s a systematic process to test the absolute limits of the safeguards and to evaluate how well the models have been trained for their specific purpose. Alongside this, AI governance is becoming paramount. You need a combination of skills that is quite rare today. It’s not enough to have a great engineer or a great doctor; you need individuals who possess both of those mindsets and can also put on a risk-assessment hat. These professionals must understand the technical intricacies of how AI is built, the clinical context of its application, and the regulatory landscape to accurately determine what constitutes an important risk. We are going to see the rise of highly specialized governance professionals, perhaps even down to a specific clinical practice, to fill this crucial gap.

As cybercriminals use AI to launch attacks at lightning speed, the response time for defense shrinks from hours to seconds. Beyond user IDs, how can hospitals build a more sophisticated identity management system for humans and machines, and what architectural changes are needed to achieve this speed?

The game has fundamentally changed in two ways: identity and speed. I don’t believe the average hospital thinks about identity much beyond a user ID and password to let people in the door. That has to change. Identity will be the foundational digital artifact that ties everything together, for humans, for machines, and for autonomous AI agents. It’s the bedrock upon which we will apply all other security controls. The second piece is speed. Weaponized AI systems, especially those trained for a specific malicious task, move at a velocity humans simply can’t match. We’re no longer in a world where you have an hour to contain a ransomware infection; you now have seconds, if not milliseconds. If you can’t operate at that same speed to detect and eliminate the threat, you’re going to be in serious trouble. The strategy has to be a top-to-bottom architectural review. Look at your systems and applications and ask: how quickly can we deploy a new control? How fast can we rebuild a system from scratch or deploy a critical patch? You must ruthlessly identify and eliminate every bottleneck that prevents you from detecting an event and taking action on it almost instantaneously.

What is your forecast for the evolution of AI’s role in healthcare cybersecurity over the next five years?

Over the next five years, I foresee a dramatic and necessary escalation on both sides of the cybersecurity battlefield, with AI at the center. We will see the rise of truly “agentic” AI in healthcare, moving from advisory roles to performing active tasks, which will exponentially increase the potential for damage if compromised. In response, cybersecurity will have to become AI-driven by necessity. The human-in-the-loop will shift from frontline defense to a strategic, supervisory role, overseeing automated systems that detect and respond to threats in milliseconds. Foundational security concepts like identity will be completely reimagined to manage not just people but a complex ecosystem of devices, models, and agents. Most importantly, new roles like AI risk governance specialists and AI red teamers will become standard, integral parts of any hospital’s security team, ensuring that as we innovate with one hand, we are building resilient, trustworthy systems with the other. The future isn’t just about better algorithms; it’s about building a new human-machine security architecture from the ground up.

Subscribe to our weekly news digest.

Join now and become a part of our fast-growing community.

Invalid Email Address
Thanks for Subscribing!
We'll be sending you our best soon!
Something went wrong, please try again later