Maximilian Kiener is a Research Fellow in Philosophy at the University of Oxford. And while I have been blogging about the need to promptly disclose to patients when their data has been acquired or dumped by threat actors, Kiener has been writing about the need for doctors to expand the concept of what constitutes the kind of “inherent risks” that must be disclosed to patients. Noting that many of us would not think of cyberattacks as “inherent” risks that we can predict or should warn patients about, he writes:
We know that cyberattacks on medical devices and hospital networks are a growing threat. During the current pandemic, some types of cyberattacks have increased by 600 per cent.
And it’s not just old computer systems that are vulnerable. Even the very best artificial intelligence (AI) in medicine can be compromised.
Academic research continually reveals new ways in which state-of-the-art AI can be attacked. Such attacks can block life-saving interventions, undermine diagnostic accuracy, administer lethal drug doses, or sabotage critical moves in an operation.
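To make that concrete: one well-studied class of attacks adds a tiny, nearly imperceptible perturbation to an input image that can flip a classifier’s output. Below is a minimal sketch of the “fast gradient sign method” in PyTorch; the toy model and random “scan” are stand-ins for illustration only, not any real diagnostic system, and with random weights the prediction flip isn’t guaranteed. The point is simply how small the perturbation budget can be.

```python
# Minimal sketch of a fast gradient sign method (FGSM) attack, the kind of
# perturbation research has shown can flip a medical image classifier's output.
# The model and image below are stand-ins (random weights, random pixels).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 2))  # toy "diagnostic" classifier
model.eval()

image = torch.rand(1, 1, 28, 28)   # stand-in for a medical scan
label = torch.tensor([0])          # hypothetical "benign" ground truth

# Compute the loss gradient with respect to the input pixels
image.requires_grad_(True)
loss = nn.functional.cross_entropy(model(image), label)
loss.backward()

# Nudge every pixel slightly in the direction that increases the loss
epsilon = 0.05                     # perturbation budget: nearly invisible to the eye
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```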
But given that most doctors may not be particularly sophisticated about AI, what risks should they disclose to patients, and how?
When algorithms play an increasingly large role, we also need to think about whether doctors should disclose the risk that these algorithms are systematically biased or the risk that, because of the opacity of certain AI systems, doctors may no longer be able to understand and double-check the AI’s decisions.
Read Kiener’s column. At some point, we may wind up asking, “Do we have to warn patients about anything and everything that could go wrong for reasons we may not even recognize or understand?” Will we terrify patients and leave them afraid to undergo necessary medical care or surgeries?
There’s a lot to think about.