Medical Device Security Offers Proving Ground for Cybersecurity Action
Legislation approved on June 8 by the U.S. House of Representatives to address the cybersecurity of medical devices may offer a good model for a sector-by-sector approach to cybersecurity regulation. The bill illustrates the complexities and balancing act required of regulatory efforts in this space. In particular, the measure suggests a way to solve a conundrum at the center of cybersecurity policy: how to translate a general statutory or common law mandate to provide “reasonable” security into specific, technically sound controls. At the same time, it raises questions about how to ensure that such controls are prioritized, adaptable, and enforced.
Medical Device Security in FDA User Fees Bill
The bill is H.R. 7667, the Food and Drug Amendments of 2022. The core of the bipartisan measure is a set of provisions reauthorizing the various user fee programs that provide funding for the Food and Drug Administration (FDA) to speed up drug and device approvals—the “users” in this context being drug and medical device developers and manufacturers whose products require FDA approval. The FDA’s authority to collect user fees expires on Sept. 30. The programs’ continuation is a high priority for both industry and the FDA, so the bill is considered must-pass legislation, making it an attractive vehicle for towing other policy initiatives across the legislative finish line.
When the House Committee on Energy and Commerce marked up H.R. 7667 on May 18, it added language requiring the FDA to take steps to improve the security of medical devices. On June 8, the full House approved the amended bill by a vote of 392 to 28. In the Senate, the draft bill to reauthorize FDA user fees does not yet contain a security provision, but Sens. Bill Cassidy and Tammy Baldwin have introduced a freestanding medical device security bill with language comparable to that now in the House user fee measure.
The key language in the House bill states that any person who submits to the FDA a premarket submission for a “cyber device” “shall include such information as the Secretary [of Health and Human Services] may require to ensure that the cyber device meets such cybersecurity requirements as the Secretary determines to be appropriate to demonstrate a reasonable assurance of safety and effectiveness.” “Cyber device” is defined as “a device that (A) includes software, including software as or in a device; (B) has the ability to connect to the internet; or (C) contains any such technological characteristics that could be vulnerable to cybersecurity threats.” The bill specifies that, at a minimum, the manufacturer of a device shall:
- Have a plan to appropriately monitor, identify, and address, in a reasonable time, postmarket cybersecurity vulnerabilities and exploits, including coordinated vulnerability disclosure and related procedures.
- Design, develop, and maintain processes and procedures to ensure the device and related systems are cybersecure.
- Make available updates and patches to the cyber device and related systems throughout the life cycle of the device.
- Provide in the labeling of the cyber device a software bill of materials.
- Comply with such other requirements as the secretary may require to demonstrate reasonable assurance of the safety and effectiveness of the device for purposes of cybersecurity.
I have argued that the FDA’s authority to ensure the safety and effectiveness of medical devices already encompasses cybersecurity. But the agency has resisted imposing any requirements on the ground, articulated most recently in its fiscal year 2023 budget request, that “there is no statutory requirement (pre- or post-market) that expressly requires medical device manufacturers to address cybersecurity.” In the budget request, the FDA asked for legislation granting it express authority to address device cybersecurity. Congress responded with remarkable alacrity.
The Balancing Act of Cybersecurity Regulation
The relatively simple language in the House bill offers a good model for cybersecurity legislation. The bill fits cybersecurity into an existing regulatory framework, using the FDA’s premarket submission process as the vehicle for compliance and situating violations of the cybersecurity requirements within the Food, Drug, and Cosmetic Act (FD&C Act), including its provisions on adulteration and misbranding. It sets out some minimum security requirements while leaving the full definition of adequate cybersecurity to the agency to define within the scope of its expertise.
But the bill’s simplicity and its deference to detail-setting by a sector-specific agency that does not specialize in cybersecurity also surface the challenges of defining cybersecurity requirements across the diversity of the economy. Under any regulatory regime, industry craves both certainty and flexibility, both stability and adaptability. Businesses seek clear rules but often resist a one-size-fits-all approach. Enforcement and liability must be predictable, but rules must also change to keep pace with evolving threats and changing technology. And regulation should ideally encourage innovation while, at the very least, not impeding it.
In the cybersecurity field, these tensions are acute, as the underlying technology of the information society changes rapidly (think of the lightning speed of the shift to cloud computing), as the threat evolves (exemplified by the rise of nation-state attackers and the emergence of ransomware as a major problem for entities large and small), and as notions of what is minimally required expand (consider the elevation of multifactor authentication). For the FDA, the urgent need to swiftly develop and bring to market lifesaving medical devices, highlighted by the coronavirus pandemic, further compounds the task. The fact that some security measures may make a device harder to operate and thus less effective drives an even more complex cost-benefit analysis.
Defining “Reasonable” Cybersecurity
Under the bill, it will be left to the FDA to define “such cybersecurity requirements as the Secretary [through the FDA] determines to be appropriate to demonstrate a reasonable assurance of safety and effectiveness.” The agency’s starting place will undoubtedly be the draft guidance it issued on April 8: “Cybersecurity in Medical Devices: Quality System Considerations and Content of Premarket Submissions.” The guidance, like all other FDA pronouncements on cybersecurity to date, was intended to be nonbinding, but it could well serve as the basis for implementing the device security language in the House bill, should it pass.
The latest draft guidance is impressive. Its length alone (49 pages) testifies to the FDA’s growing sophistication regarding cybersecurity: It replaces a never-finalized 2018 draft that was 24 pages long and is intended to supersede 2014 guidance that is only 9 pages.
The new version, like the 2014 guidance and the 2018 draft, stresses that security is a matter of design that must be considered from the outset of a product’s development. (The word “design” appears 110 times in the new draft.) Building on references in the earlier versions, the new guidance highlights the need to address cybersecurity risks and mitigations throughout a product’s life cycle. Like the 2018 version, for example, the new draft states that devices should be designed to facilitate the rapid patching and updating of deployed devices. (Patching was mentioned once in the 2014 version.) As in the 2018 draft, the new version emphasizes the theme of transparency, finding an obligation to inform device users of relevant risks and security information in the section of the FD&C Act that requires accurate labeling and adequate directions for use. And like the 2018 version, the 2022 draft includes a long list of security controls, mostly oriented toward outcomes, such as “Provide mechanisms for verifying the authenticity of information originating from the device” and “Implement design features that allow for security compromises and suspected compromise attempts to be detected, recognized, logged, timed, and acted upon during normal use.”
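To give a concrete flavor of one such outcome-oriented control, the sketch below shows how a device might attach a message authentication code to its telemetry so that “information originating from the device” can be verified by the receiving system. This is a generic illustration only, not a design the guidance prescribes: the shared symmetric key, the function names, and the sample telemetry payload are all invented, and a real device would need careful key provisioning and would likely rely on asymmetric cryptography and certificates.

```python
import hashlib
import hmac

# Hypothetical shared key provisioned to the device and the hospital
# gateway at manufacture -- for illustration only, not a real design.
DEVICE_KEY = b"example-key-not-for-production"

def sign_reading(payload: bytes) -> bytes:
    """Device side: attach an HMAC tag so the receiver can verify that the
    telemetry actually originated from the device and was not altered."""
    return hmac.new(DEVICE_KEY, payload, hashlib.sha256).digest()

def verify_reading(payload: bytes, tag: bytes) -> bool:
    """Receiver side: recompute the tag and compare in constant time."""
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

reading = b'{"pump_rate_ml_per_hr": 12.5, "timestamp": "2022-06-08T10:00:00Z"}'
tag = sign_reading(reading)
assert verify_reading(reading, tag)             # authentic, unmodified reading
assert not verify_reading(reading + b"x", tag)  # tampered reading is rejected
```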
The document has evolved in notable ways. The new draft drops the 2018 version’s reliance on the National Institute of Standards and Technology (NIST) Cybersecurity Framework and instead introduces a new concept, Secure Product Development Framework (SPDF), as a way to achieve total product life-cycle considerations. (The guidance might benefit from referencing the recent NIST work on secure software development across the product life cycle.) The new version also drops an earlier two-tiered approach (“higher” and “standard”) to defining cybersecurity risk. It advances the use of multiple “architecture views” to communicate the threat model for a device from the perspective of different concerns. Under this approach, device developers in their premarket submissions would describe a product’s security from, at a minimum, a global system view, a multi-patient harm view, an updateability/patchability view, and one or more security use case views. The new version replaces the 2018 concept of a cybersecurity bill of materials with a software bill of materials (SBOM), more narrowly defined to exclude hardware. According to the new draft, manufacturers are expected to document all software components of a device and to mitigate risks associated with those components, and the SBOM is one tool to identify and track those components and thereby help manage supply chain risk. And, consistent with broader developments in cybersecurity, the new draft guidance gives more attention to vulnerability management, including the establishment of a coordinated vulnerability disclosure process and procedures for the communication of remediations, patches, and updates to customers.
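To illustrate what an SBOM captures in practice, the sketch below assembles a minimal component inventory, loosely modeled on the CycloneDX JSON layout, for a hypothetical infusion pump and flags any component carrying a publicly tracked vulnerability. The device name, component names, versions, and the “known_cves” field are all illustrative assumptions, not a format the FDA guidance mandates or any manufacturer’s actual process.

```python
import json

# Hypothetical inventory for an imagined "Acme Infusion Pump" -- names,
# versions, and vulnerability entries below are illustrative only.
components = [
    {"name": "embedded-linux-kernel", "version": "5.10.120",
     "supplier": "Kernel.org", "known_cves": []},
    {"name": "openssl", "version": "1.1.1k",
     "supplier": "OpenSSL Project", "known_cves": ["CVE-2021-3711"]},
    {"name": "pump-control-firmware", "version": "2.4.0",
     "supplier": "Acme Medical (in-house)", "known_cves": []},
]

# A minimal SBOM document loosely patterned on CycloneDX JSON.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "metadata": {"component": {"name": "Acme Infusion Pump", "version": "3.1"}},
    "components": components,
}

print(json.dumps(sbom, indent=2))

# Supply chain risk management use: surface components that carry
# tracked vulnerabilities and therefore need a remediation/patching plan.
for c in components:
    if c["known_cves"]:
        print(f"Needs remediation plan: {c['name']} {c['version']} -> {c['known_cves']}")
```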
All in all, the draft guidance goes a long way to addressing a key challenge of cybersecurity: how to define what is reasonable cybersecurity. But it also surfaces the complexity of the task. To begin with, the entire document is framed as recommendations. The document doesn’t say that developers must use the core concept of the Secure Product Development Framework. Instead it says that the SPDF “may be one way” to satisfy quality system requirements. It doesn’t say that device developers must have a plan for remediation of vulnerabilities; it says that the “FDA recommends that manufacturers establish a plan.” It doesn’t say that developers must inform users of cybersecurity risks; it says that “informing users of relevant security information may be an effective way to comply with labeling requirements” (emphasis added). Even patchability is recommended, not required.
Of course, this binary framing—recommended versus required—misses an important nuance: In practice, device makers view such guidance as laying out the FDA’s evaluation criteria, so even draft guidance must be taken seriously. But exactly how the guidance is to be applied is something to be discussed and essentially negotiated in communications between the developer (or the developer’s attorneys) and FDA staff. Those communications can begin very early in the approval process (even presubmission in some contexts). A key focus of those discussions is understanding the cybersecurity risk posed by a given device (discussed further below). Different combinations of cybersecurity controls may be deemed appropriate for different devices. Across all the controls in the draft guidance, there is no sense of prioritization, and it’s not clear that any specific control will be required in all devices.
That is why it is especially good that the House legislation designates coordinated vulnerability disclosure and management, patching, and the SBOM as nonoptional minimums for device security. As Congress develops more experience with cybersecurity legislation, it may find that its list of minimum measures grows.
In the short term, for other cybersecurity controls, the House legislation would make the FDA decide what is required and what is optional. The draft guidance recommends at least 46 specific security controls. (Other cybersecurity guides likewise include long lists of recommended security measures: NIST SP 800-171 has 110 controls for certain government contractor systems, while the NIST Cybersecurity Framework covers just over 100 outcomes with reference to half a dozen standards, each with multiple parts, and the control catalog spreadsheet for NIST SP 800-53 for government systems has 1,190 rows!)
In deciding which of the recommended controls are necessary to “demonstrate a reasonable assurance of safety and effectiveness,” the FDA could look to the structure of the Security Rule adopted under the Health Insurance Portability and Accountability Act (HIPAA). That rule, although in some ways seriously outdated, has a pragmatic framework of general requirements, each followed by implementation specifications that are either required or “addressable” (the latter meaning the regulated entity must consider the implementation specification and, if it finds that it is not reasonable and appropriate, must document why and implement an equivalent alternative measure if reasonable and appropriate). Such a model could be applied to the long list of security measures in the FDA guidance, avoiding the one-size-fits-all problem.
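As a thought experiment, the required/addressable logic could be modeled roughly as follows. The control names and their classifications here are hypothetical, chosen only to show how a HIPAA-style structure might be applied to a controls list like the one in the FDA guidance; nothing in this sketch reflects an actual FDA determination.

```python
from dataclasses import dataclass

@dataclass
class Control:
    name: str
    required: bool  # True = must implement; False = "addressable"

@dataclass
class Implementation:
    implemented: bool
    justification: str = ""        # why the control is not reasonable/appropriate
    alternative_measure: str = ""  # equivalent measure adopted instead, if any

# Hypothetical classification of a few controls of the kind the FDA draft
# guidance recommends -- not the agency's actual list or its designations.
CONTROLS = [
    Control("coordinated vulnerability disclosure process", required=True),
    Control("update/patch mechanism for deployed devices", required=True),
    Control("multifactor authentication for administrative access", required=False),
]

def evaluate(submission: dict[str, Implementation]) -> list[str]:
    """Mirror the HIPAA Security Rule logic: required controls must be
    implemented; addressable controls may be skipped only with documented
    justification and, where reasonable, an equivalent alternative measure."""
    findings = []
    for control in CONTROLS:
        impl = submission.get(control.name, Implementation(False))
        if control.required and not impl.implemented:
            findings.append(f"DEFICIENT: required control missing: {control.name}")
        elif not control.required and not impl.implemented:
            if not impl.justification:
                findings.append(
                    f"DEFICIENT: addressable control skipped without documentation: {control.name}")
            elif not impl.alternative_measure:
                findings.append(
                    f"REVIEW: no equivalent alternative documented for: {control.name}")
    return findings
```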
In deciding between what is required and what is addressable through alternatives, the FDA should also build in at least two kinds of adaptability. First, experience may reveal that some security measures that were initially deemed universally applicable are in fact not appropriate for all entities or all devices. So the FDA will have to be agile in adjusting the line between required and addressable elements. Second, threats and defenses will change over time, sometimes rapidly, so the FDA will need a process to reassess its choices to be sure they are not outdated.
The Challenge of Risk Assessment
There is a deeper challenge in the FDA guidance that permeates the cybersecurity field as a whole: risk assessment—specifically, self-assessment of risk. As the FDA states in the new draft guidance, “given the evolving nature of cybersecurity threats and risks, no device is, or can be, completely secure.” Therefore, cybersecurity aims for risk management, not risk elimination. But what constitutes sound risk management for any device, software product, or system depends on risk assessment, which starts with threat modeling. Quoting from the FDA guidance: “Threat modeling includes a process for identifying security objectives, risks, and vulnerabilities across the system, and then defining countermeasures to prevent, or mitigate the effects of, threats to the system throughout its lifecycle. It is foundational for optimizing system, product, network, application, and connection security when applied appropriately and comprehensively.”
In the first instance, threat modeling and risk assessment must be done by the product developer. If one is to have security by design, then a risk assessment must begin with project ideation and must rely heavily on the product developer. But what if the developer overlooks or minimizes certain risks? Deeper threat analysis and risk assessment might require bringing in people with different skill sets. In its premarket submission to the FDA, the product developer must document its risk assessment (and document it in great detail under the new draft guidance). How is an outsider (in this case, the FDA) able to assess the assessment? NIST has issued a set of resources for a risk management framework (RMF). It’s not quite a case of turtles all the way down, but the RMF points back to a guide NIST issued for conducting risk assessments in 2012. That guide contains what looks like a pretty complete list of threats to be considered in a risk assessment, but it concludes that “[t]here are no specific requirements with regard to: (i) the formality, rigor, or level of detail that characterizes any particular risk assessment; (ii) the methodologies, tools, and techniques used to conduct such risk assessments; or (iii) the format and content of assessment results and any associated reporting mechanisms. Organizations have maximum flexibility on how risk assessments are conducted … .”
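One common, though by no means mandated, way to structure such a self-assessment is a simple likelihood-times-impact scoring of modeled threats. The sketch below is a generic illustration of that approach, not the method NIST or the FDA prescribes; the threats, the five-point scales, and the scores are invented, and a real threat model would be far more detailed and tied to the device’s architecture.

```python
from dataclasses import dataclass

@dataclass
class Threat:
    description: str
    likelihood: int  # 1 (rare) .. 5 (almost certain) -- illustrative scale
    impact: int      # 1 (negligible) .. 5 (patient harm) -- illustrative scale

    @property
    def risk_score(self) -> int:
        # Simple likelihood x impact scoring, one of many possible methods.
        return self.likelihood * self.impact

# Hypothetical threats for an internet-connected device, for illustration only.
threats = [
    Threat("unauthenticated remote firmware update", likelihood=2, impact=5),
    Threat("interception of unencrypted telemetry", likelihood=4, impact=2),
    Threat("default credentials left enabled in the field", likelihood=3, impact=4),
]

# Rank threats so mitigation effort (and FDA-facing documentation) can be
# prioritized; the review question is whether the highest-scoring risks
# have corresponding countermeasures in the submission.
for t in sorted(threats, key=lambda t: t.risk_score, reverse=True):
    print(f"{t.risk_score:>2}  {t.description}")
```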
Enforcement Is Always a Key Question
In the end, the effectiveness of any cybersecurity regulatory structure will depend on the skills, capacity, and culture of the organization overseeing and enforcing the system. In the case of medical devices, there is already an enforcement system: For certain devices, device makers must seek FDA approval through the premarket submission process. Under the House bill, the secretary of health and human services may find that the cybersecurity information provided in the premarket submission is inadequate. This should include a determination that the product developer’s risk assessment was in fact comprehensive, as the foundation for deciding whether its assemblage of controls is adequate.
But the FDA is one busy agency, with a huge remit. According to the agency’s budget request, there are 233,000 different types of medical devices on the U.S. market, manufactured at 27,000 facilities worldwide. (The House legislation does little to address this enormous deployed base.) The FDA’s Center for Devices and Radiological Health handles over 20,000 submissions each year (counting meeting requests) and reviews over a million medical device adverse event/malfunction reports. The center approves or clears, on average, 12 new or modified devices every business day. The agency as a whole is under huge pressure to act quickly. (Remember how eager everyone was for coronavirus vaccine approval?) Indeed, speeding up the review process is the central goal of the user fee programs that provide the vehicle for the House’s cybersecurity language.
The agency’s 2023 budget request includes a very modest $5 million in additional funding (six full-time-equivalent staff positions) for device cybersecurity. Given the nationwide competition for workers with cybersecurity skills, filling those six positions may be hard. And given the detailed documentation required under the draft guidance, the FDA is going to need a lot more resources if it is going to truly assess the adequacy of the submissions it will receive.
Conclusion: An Experiment Worth Undertaking
If the United States is ever to expect significant improvement in cybersecurity of essential products and critical infrastructure, it must have a system of oversight and accountability. That system must meet the competing criteria of certainty and flexibility, stability and adaptability, mandate and innovation. In my view, the only way to build that system is sector by sector, relying on sector-specific agencies to do the kind of detailed work that the FDA has done in the new draft guidance on medical device cybersecurity—and then overseeing its implementation. If the cybersecurity language in the House bill is not weakened by the Senate, we will have an excellent experiment, not only yielding improvements in medical device security but also providing a model for other sectors.