Fast Finance

Health systems adopting AI face growing cyberthreats, according to study

Why new security approaches are needed

November 4, 2024 4:39 pm
A recent study of real-world attacks on GenAI identified risks for organizations adopting the technology.

Health systems are pushing to adopt AI in a range of patient-facing, clinical and back-office functions. But the new technology can increase cyberthreats, says emerging research.

Attacks on 2,000 generative AI (GenAI) models succeed 20% of the time in performing so-called jailbreak attacks, according to recent research by Pillar Security, a GenAI security firm. Jailbreak attacks instruct models to ignore their safeguards, such those preventing the release of sensitive data. The GenAI programs studied were being used to operate chatbots of numerous companies, including some in healthcare, said Dor Sarig, CEO and co-founder of Pillar Security.

Other findings from the study of attacks on GenAI in a recent three-month period found:

  • 90% of successful attacks resulted in the leakage of sensitive data
  • Attacks take an average of 42 seconds
  • An average of five interactions with GenAI applications were needed to complete a successful attack

“The main vulnerability that we identified was related to sensitive data leakage, which came in the form of leaking the instructions or the system prompt of the application,” Sarig said.

“Where it gets interesting and also related to healthcare is where the meta prompts provided to the model include sensitive data or patient data,” he said, about clinical data included in the program’s training.

Healthcare organizations are increasingly adopting GenAI models to innovate and improve efficiency. GenAI is used in applications like chatbots, ambient medical documentation, and decision-making systems.

New vulnerabilities

However, GenAI’s transformative capabilities also create new vulnerabilities from previous IT systems. The models are trained on large datasets and without sufficient safeguards they are vulnerable to attacks like prompt injection. Attackers use malicious inputs — often appearing as benign requests — to change the model’s responses, manipulate data or even extract confidential information embedded in its training data.

The vulnerability in GenAi comes when healthcare organizations’ use protected patient data as part of the large language models (LLMs) that train GenAI. Such computer algorithms process natural language inputs and predict the next word based on what it’s already seen. But the GenAI retains such data, such as examples of patient data used to help them summarize conversations. he said.

“What we see in attacks in the wild [actual attacks on operating GenAI programs] is sometimes those fields that accept [inputs] contain sensitive data that is leaking into other sessions with different patients,” Sarig said. “So, you have a cross-user leak.”

GenAI vulnerability extends beyond public facing applications and includes business-to-business applications — like hospitals’ communication with payer —and even internal business uses of GenAI, he said.

Key issues needed to be addressed with GenAI, he said, include where the data is stored, who can access it and whether the application being built can be manipulated to leak sensitive data to those without permission.

Protecting the newest IT tool is a critical financial priority for healthcare organizations. Healthcare is a frequent target of cybercriminals, although its rank among industries in attack volume depends on which study you read. The average cost of a healthcare data breach in 2024 was $9.77 million, which was a decrease from $10.93 million in 2023, according to the latest report from IBM and Ponemon Institute. However, healthcare is still the most expensive industry for data breaches, a position it has held since 2011, according to the report.

Stopping attacks

Traditional IT security measures often fall short in detecting and mitigating attacks targeting GenAI, he said.

“AI is [a] completely new kind of software and the way you build it is completely new. So, security, of course, has to be changed completely,” Sarig said. “All of the security controls as we know them will have to be either adapted to this new attack surface or they will be no longer relevant.”

The fundamental difference from previous IT applications is that AI applications are no longer powered by code but instead by prompts.

“They are not logic based. They are goal-driven, data-driven,” Sarig said. “The most important difference is that these are not deterministic software. It’s not, ‘If this, then that,’ the way you build algorithms. This requires a completely new approach to security.”

Protecting GenAI applications requires mapping the “attack surface” of the application as it is being integrated into an organization, which is done by using other AI programs to attack it at the testing stages.

“So, we use AI to test AI,” he said. “We set up a copy of the application and run a bunch of different scenarios and attack simulations just like an adversary would do. We’re trying to find all of the vulnerabilities and gaps to make the application more resilient to them.”

AI security systems also are needed on an ongoing basis to anticipate and respond to emerging threats in real-time, while supporting their governance and cyber policies, according to Jason Harrison, chief revenue officer of Pillar Security.

Other threats

Another type of attack Pillar has seen uses malicious inputs to hijack a GenAI program to use in an attack on others. Attackers do that because LLMs are very expensive to operate and require large amounts of computing power.

“You build a chatbot that’s responsible for handling claims but suddenly an attacker is able to exit that chatbot mode and use the LLM for their own purposes,” he said.

Healthcare organizations also need to watch for employees using GenAI applications on their own. Sarig has seen cases where employees use ChatGPT and other tools to summarize patient data and then opt out of the program using their data to train it. But the information remained in their personal account’s session history with the GenAI program.

“All the attacker needs to do is get access to your ChatGPT account and suddenly they have access to all of sensitive prompts and responses you got from” using that GenAI tool, he said.

There is one piece of good news because GenAI does not yet appear to add to the largest and growing technology-related threat category for health systems: ransomware.

Advertisements

googletag.cmd.push( function () { googletag.display( 'hfma-gpt-text1' ); } );
googletag.cmd.push( function () { googletag.display( 'hfma-gpt-text2' ); } );
googletag.cmd.push( function () { googletag.display( 'hfma-gpt-text3' ); } );
googletag.cmd.push( function () { googletag.display( 'hfma-gpt-text4' ); } );
googletag.cmd.push( function () { googletag.display( 'hfma-gpt-text5' ); } );
googletag.cmd.push( function () { googletag.display( 'hfma-gpt-text6' ); } );
googletag.cmd.push( function () { googletag.display( 'hfma-gpt-text7' ); } );
googletag.cmd.push( function () { googletag.display( 'hfma-gpt-leaderboard' ); } );