2. Involve the patient through an informed consent or shared decision-making approach. Discussing the clinical ambiguity arising from an algorithm, and securing the patient’s understanding, could help prevent plaintiffs from later alleging they were left in the dark.
“Some physicians might think this recommendation reflects an admission of professional weakness and fear it could arouse patients’ anxiety,” the authors wrote. “From an ethical perspective, though, there not only seems nothing morally wrong with this strategy, but many patients might admire their physician’s honesty and truthfulness, increase their trust accordingly, and contribute significant observations about how they view the risks and benefits of the treatment options being considered.”
3. Work with AI vendors to develop contractual agreements stipulating how liability will be assigned in the event of a verdict or settlement. Certain adverse events can leave the parties unsure of who is at fault, and such situations may be “next to impossible” to resolve when multiple designers, hardware manufacturers, coders, and programmers are involved. Previous literature has recommended drafting “comprehensive contracts” detailing who is responsible in the event of a poor outcome.
“Of course, the devil will be in the details of these arrangements,” Banja and co-authors wrote. “But working them out in advance of any allegation of patient harm might eliminate expensive, protracted, and rancorous debates when a settlement or jury award is announced.”
4. For autonomous models that make their own decisions, ask the vendor to assume total liability for the product’s incorrect decisions. If the AI technology is so advanced that the radiologist is completely removed from the equation, “it would be unfair to enjoin them in malpractice proceedings,” the authors argued. They cited the example of the firm IDx, which offers a tool that alerts providers to the possibility of diabetic retinopathy. The company has echoed these concerns and assumes liability for its product’s decisions, but not for any care-management issues that subsequently stem from them.
You can read much more of their advice in the JACR here.