The status quo of securing a vast, complex ecosystem of medical devices has fallen to healthcare providers, even though even the best-equipped organizations are unable to solve systemic device challenges. But what if the responsibility were given back to developers?
Fortunately, recent actions and inquiries from the Food and Drug Administration suggest that is likely what device manufacturers should expect in the near future. From updated cybersecurity guidance to a possible requirement that a software bill of materials accompany each device, the approach to medical device security is getting an overhaul.
For Mike Kijewski, MedCrypt CEO, the shift is long overdue.
Although there are outliers, most providers don’t have the skillset to address long-standing medical device issues. If the median hospital has approximately 200 beds and just two IT people, “it’s illogical” to think that organization can comprehensively secure thousands of connected endpoints, said Kijewski.
The cyberattack and outage at Scripps Health, one of the largest U.S. health systems, is a perfect real-world example of this.
“Being brought down by a ransomware attack is not necessarily indicative of poor security practices,” said Kijewski. “You can do everything in the world and bad things still happen. But if Scripps can't protect themselves, how can smaller hospitals?... They can't, and it's crazy to think that they can.”
The right approach is to take actions “today, so that five years from now, our hospitals are full of devices that have been designed with security in mind, and there will be less work for the hospitals to do to secure these things,” said Kijewski.
That shift is already taking place. Comparing medical devices submitted for clearance within the last year with those submitted in 2017, Kijewski noted, the newer devices are at least twice as secure, if not 10 times more.
However, healthcare is “starting at a very low baseline, so there's still a lot of work to be done,” he added. Although some manufacturers are still not up to the task, the industry as a whole is definitely trending in the right direction.
“The majority of the product security leaders, their heart is in the right place and trying to do the right thing,” said Kijewski. “Historically, it's been really difficult for these leaders to get the business leaders above them to invest in product security because it was seen as something you should do, like eating your spinach, but not required.”
“The thing that has been transformative in the last year is the FDA coming out and saying that medical device regulatory submissions have been found to be unapprovable based on cybersecurity concerns alone,” said Kijewski.
The rise in submissions found unapprovable over cybersecurity concerns has changed the way device manufacturers view development: effective cybersecurity is now understood to be necessary for obtaining regulatory approval.
Could mandatory vulnerability disclosures be on the horizon?
The 2016 FDA post-market guidance document put in place regulatory advantages for device manufacturers that proactively communicate vulnerabilities and participate in voluntary coordinated vulnerability disclosure.
Data confirms that within the first few years of its release, disclosures rapidly increased. Some estimates show the rate of disclosure rose by 400% from 2016 to 2018.
“We want people to communicate these vulnerabilities,” said Kijewski. But the vast majority of these alerts came from just a handful of manufacturers.
“There are other device manufacturers who have never released a vulnerability disclosure, large device manufacturers with dozens of product lines,” he added. “The odds those manufacturers have never actually found a vulnerability are extremely low, or if it's true, that's an issue because everybody has vulnerabilities.”
The FDA has taken notice, announcing in the last few years that it might pursue legislative requirements for disclosure because the agency “is not satisfied with the level of voluntary disclosure.”
Kijewski, and other stakeholders, have long suggested it’s likely the right move. While a free market has its benefits, “this is a case where the market is not functioning correctly.”
“If device manufacturers are not going to voluntarily disclose this information, then it should be legislated,” said Kijewski. “I suspect there are product security people in device manufacturers who would agree.”
While most security leaders are working to better protect patient safety by securing these devices, legal teams and other higher-ups may not fully understand the risk — or the business implications for disclosing vulnerabilities when it’s not required.
“It's a rational conclusion to come to,” said Kijewski. But if the FDA requires manufacturers to release a report on detected device vulnerabilities each year, or disclose a vulnerability of a specific severity within 60 days in an alert, “it levels the playing field.”
Further, device manufacturers should use established cybersecurity vulnerability reporting mechanisms so that when a user or security researcher finds a vulnerability, “they shouldn't have to chase somebody down to report the issue.”
One small step for providers, a giant leap for security
At CyberMed in April, the FDA’s Office of Strategic Partnerships & Technology Director Suzanne Schwartz confirmed the agency is listening to healthcare leaders' call for action, stressing that the agency was “not waiting for harm to act” and was proactively improving its directives, guidance, protections, and regulation to find a solution to medical device challenges.
Kijewski stressed that recent client submissions bear out that declaration: the FDA is asking device manufacturers more detailed questions about device security mechanisms and potential flaws during the submission process, something it did not do in the past.
Take a pacemaker, which typically talks to a programmer. Using that example, Kijewski explained that in the past, the FDA might ask whether the device authenticates the programmer to the pacemaker, or whether the pacemaker can only talk to the right programmer.
Today, those questions may look more like requests for full descriptions of how the pacemaker encrypts data sent to the programmer, or how the keys are managed and whether the keys are kept private.
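To make that shift concrete, the sketch below shows the kind of design detail a manufacturer would now be expected to describe: how telemetry is encrypted on its way to the programmer, how the key is handled, and what happens when authentication fails. It is a minimal, hypothetical illustration written in Python with the cryptography library's AES-GCM primitive; the function names, the key-provisioning step, and the device identifier are invented for the example, and real implant protocols involve far more (mutual authentication, key rotation, hardware-backed key storage).

    # Illustrative sketch only: one way a device might protect telemetry sent to a
    # programmer, using a symmetric key provisioned at manufacture. Not any
    # manufacturer's actual implementation.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def provision_device_key() -> bytes:
        # Hypothetical: a per-device 256-bit key generated and stored securely at manufacture.
        return AESGCM.generate_key(bit_length=256)

    def encrypt_telemetry(device_key: bytes, telemetry: bytes, device_id: bytes) -> tuple[bytes, bytes]:
        # AES-GCM provides both confidentiality and integrity; the device ID is bound
        # to the ciphertext as associated data so a mismatch is detectable.
        nonce = os.urandom(12)  # must be unique per message under a given key
        ciphertext = AESGCM(device_key).encrypt(nonce, telemetry, device_id)
        return nonce, ciphertext

    def decrypt_telemetry(device_key: bytes, nonce: bytes, ciphertext: bytes, device_id: bytes) -> bytes:
        # Raises InvalidTag if the data was tampered with or the wrong key/ID is used,
        # rather than silently returning garbage.
        return AESGCM(device_key).decrypt(nonce, ciphertext, device_id)

    # Usage: only a programmer holding the matching key can read the telemetry.
    key = provision_device_key()
    nonce, ct = encrypt_telemetry(key, b"battery=78%;rate=60bpm", b"pacemaker-1234")
    assert decrypt_telemetry(key, nonce, ct, b"pacemaker-1234") == b"battery=78%;rate=60bpm"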
These more exacting questions are a drastic shift for the FDA’s approval process and address one of the biggest challenges faced by device manufacturers: the gap between what developers think they’re doing around security and what they’re actually doing.
However, those questions aren’t publicly available. The FDA could go further by making public some of its objections to specific medical device submissions, so every device manufacturer can see what it's up against, which “would be really impactful.”
The reality is that, despite best intentions, manufacturers have historically overlooked common, technical elements that would make their devices more secure. As such, the approval process for devices should look similar to the way clinical trials work for treating certain conditions.
“On the safety side of things, how do we determine if the security controls we put in place are effective? Penetration testing,” said Kijewski. “Pen testing sometimes happens explicitly, where a device manufacturer plans to bring in a pen testing firm to find issues, or it can happen implicitly, where a user or a researcher contacts a device manufacturer about a possible issue.”
Historically, manufacturers have been reluctant to engage with researchers because if a problem is unknown, it’s not something that must be solved. Kijewski noted that, “just like testing the efficacy of a device, if we don't allow people to sell medicines that haven’t been tested for efficacy, we shouldn't allow people to sell medical devices that haven’t been tested for safety.”
But in recent years, “the FDA has been asking for penetration test results, and a lot of buyers of medical devices, like large health systems, have been asking for pen test results,” he said.
Pen testing isn’t a perfect solution, but it’s progress. In some instances, a pen tester may find a particular vulnerability but the device manufacturer may determine it’s not urgent enough to be fixed ahead of a release, which is “a valid conclusion sometimes,” said Kijewski.
“At the very least, device manufacturers should be required to show that they engaged in a penetration testing exercise, and that some action was taken as a result,” said Kijewski. “And I bet there are a bunch of product security people at mid-sized device manufacturers that would love to see it because they've been unable to convince their legal department it’s a good idea.”
“In reality, if you're a lawyer working for a device manufacturer, your job is to minimize the risk of the organization. And if there's some optional process that you see as creating additional risk for the organization, it's a rational thing to say let's not do it,” said Kijewski. “It gets back to the idea that it's gotta be mandatory.”