Health Hackers: Problems in Applying Traditional Products Liability Theories to Latent Cyber-Vulnerabilities in Medical Devices
October is National Cyber Security Awareness Month (yes, that’s a thing), so it seems fitting to write about an unprecedented alert recently released by the FDA to health care providers warning of a medical device’s vulnerability to cyber-attack. The subject device was a now-discontinued infusion pump system made by Hospira that was capable of communicating with the health care provider’s network. As such – by definition, really – it could be hacked (though evidently with greater ease than usual here). Interestingly, though, the FDA warning read a lot like an FDA response to a verified product defect. Perhaps that makes sense. But should cyber-security vulnerabilities really be treated the same as design defects under traditional products liability law? While government-issued medical-device security standards from the FDA and the Department of Homeland Security appear inevitable and necessary, cyber-vulnerabilities seem distinguishable from conventional design defects in the medical products liability context for several reasons.
First, regardless of the plaintiff’s theory of liability, the entire cause of action against the device manufacturer would be predicated on a targeted intentional tort committed by a third party. In fact, the cyber-attack giving rise to the claim would, in most cases, also constitute a criminal act. But if law enforcement does not pursue or prosecute that act, the parties to any litigation arising out of it are very unlikely to have the resources or sophistication to identify the hacker. From a causation standpoint, then, this unnamed but clearly-at-fault party is the elephant in any jury room.
Second, even if a device has its own firewall, security key or other built-in feature, in theory those measures could all be circumvented by an authenticated user on the health care provider’s network. So if a hospital’s main network were compromised by a hacker masquerading as a user authorized to access the device, the device’s self-contained security apparatus may be rendered moot anyway. In a defective design suit that followed such an incident, liability would therefore need to be apportioned among: 1) the health care provider, under a negligence or malpractice theory, for failing to maintain a secure network; 2) the device manufacturer, under a strict-liability or negligence theory; and 3) the hacker. That comparative fault schema becomes even more convoluted where the patient’s device communicates with a health care provider via a mobile wireless carrier or via the patient’s home network (which may be altogether unsecured).
In addition to these difficult design defect issues, an injured medical device plaintiff will almost certainly throw in a failure-to-warn claim against the manufacturer as well, which raises the related questions of who has the duty to warn of a cyber-vulnerability and how the learned intermediary doctrine factors in as a defense. Most of these devices are not marketed directly to the public. So are the physicians supposed to be the learned intermediaries for relaying this new species of risk? I hope not: I know some superb physicians who can’t set up email on their iPhones, much less assess and explain to a patient the potential risk of a cyber-attack on a networked procedure or device. That’s just not necessarily in a physician’s skill set. I suppose the hospital’s IT staff could always visit the patient as the “learned intermediary” to assess the risk and offer the option of an air-gapped or offline procedure (if that’s even possible) to take some heat off the manufacturer. But regardless of who is deemed competent to give an informed warning here, any given risk analysis is inherently arbitrary and speculative because the occurrence of the adverse event depends entirely on the intentional tort (and, likely, the sophistication) of a third-party malicious hacker.
Finally, there is the glaring question of whether the gravity of this particular threat really warrants the magnitude of the response and the cost to both the public and private sectors. While extra-stringent cyber-security precautions might be practical for patients who require personal security in general (such as high-ranking government officials, dignitaries, or others subject to a specific personal threat), the average patient is generally not a target. And, as a whole, it’s probably safe to say that the sick and infirm are not a top-priority population for this kind of terroristic activity. So, other than imposing some basic government-issued standards for user-authentication and firewalls on all networked medical devices, would the benefits of heightened security measures beyond that really justify the costs? Consider this question not only from a common-sense policy standpoint, but from the standpoint of a jury performing a risk-benefit analysis in a traditional design defect claim. Would the state of the art require that every networked device be equipped with hardware-based security measures (e.g., data diodes) as if it were part of a critical infrastructure network like the bulk-power grid or military defense systems? And as of what point in time would the manufacturer’s liability be measured in this context? Network and device security standards can evolve quickly; so a device that was perfectly secure at the time of manufacture and that is otherwise still useful in saving lives might be junked because its processor or physical memory can’t handle the new firmware update or software security patch. And what if the required update can be installed, but doing so may tax the device’s core CPU to the point of potential hardware failure?
The point is that the monetary and transactional costs of disproportionately high cyber-security standards for networked medical devices will often outweigh the benefits, especially in light of the slight risk of a targeted attack.
So what’s the solution? If traditional design-defect-type responsibility does ultimately rest with the manufacturer under strict liability, then the liability question should begin and end with the manufacturer’s compliance with published FDA and DHS security standards at the time of manufacture. Manufacturers might also be required to initially disclose the expected capabilities of the device’s hardware to handle any future security software and firmware upgrades that might be required. This limitation on strict products liability makes sense because it encompasses the scope of the manufacturer’s control over the device. As for further liability under theories sounding in negligence, the government-published regulations should serve to establish a bright-line standard of care for both manufacturers and defendant health care providers for actions claiming inadequate cyber-security leading to harm. To remain compliant and avoid liability, the manufacturers should only be required to develop and issue software security updates as necessary to meet the minimum published standards (up to the device’s maximum capability) and the health care providers would undertake the sole duty of implementing them within a reasonable set period of time. Any higher (or murkier) bar would seem to be untenable in this arena, as inclined hackers will likely find a workaround to the newest government standards as soon as (if not before) they are even published.
And one last thought on the semantics at play here with regard to patient warnings. The putative warning – “it’s possible that a hacker could infect your device” – when said to a particularly tech-illiterate patient who associates the word “hack” only with its machete or meat-cleaver sense in the traditional vernacular, and the word “infect” with its familiar meaning in the immediate medical context – might be cause for undue alarm. Arguably, though, these two contexts are analogous from a legal and proximate cause standpoint: that is, whether a malicious third-party actor manipulates a medical device over the Internet from a coffee shop in Jakarta, or physically gains unauthorized access to a hospital room in Springfield to manually sabotage a device, the legal effect of either act on a regulatory-compliant manufacturer should be the same…no liability.