
Consider the AI Influence on Medical Liability | Holland & Hart – Persuasion Strategies – JD Supra

January 31st, 2021 2:46 am

Artificial Intelligence (AI) continues to evolve and to work its way into our lives. Versions of AI now routinely tell Americans where to eat, what routes to take, and what movies to watch. Artificial Intelligence is also making inroads into medical decision-making, as diagnosis and treatment recommendations become more personalized. That has raised concerns from some commentators, who have suggested that America's tort law system could prove a problematic fit with medical AI. If jurors see the act of listening to the computer as something that deviates from a doctor's judgment and the standard of care, then machine-centered advice could increase or complicate medical liability risk.

Researchers (Price, Gerke & Cohen, 2021) looked at the question of whether juror attitudes might be a barrier to reliance on medical AI. The article, "How Much Can Potential Jurors Tell Us About Liability for Medical Artificial Intelligence?" focused on the circumstances under which jurors would hold a physician liable for following or not following an AI recommendation. Testing the responses of a juror-eligible population to four scenarios, they found that following the AI recommendation does not appear to create unique liability risks for the physician: "The experiments suggest that the view of the jury pool is surprisingly favorable to the use of AI in precision medicine." As a result, they concluded that civil liability is unlikely to hinder the acceptance of medical AI, at least not based on fear that jurors will distrust it. In this post, I'll look at the research and its implications.

The Research: Doctors Aren't More Liable if They Listen to AI

The research emerged in response to the fear that medical AI could be like the driverless car, with people thinking, "I can see how that would work in theory, but I'm not ready for technology to be making those decisions." That attitude, termed "algorithm aversion," relates to the perceived loss of locus of control in deferring our judgments to technology, even when that technology might be more resistant to human error.

To test whether this applies to medical AI, the research team conducted an online experiment with a representative sample of 2,000 American adults. Participants reacted to one of four scenarios in which an ovarian cancer patient is given recommendations from a medical AI system called "Oncology-AI": the advice is either for standard or nonstandard care, and the physician either accepts or rejects the AI recommendation.

The results suggest that jurors tend to favor both standard treatment and following the AI recommendation. However, the physician's judgment does not automatically trump the AI recommendation, and physicians may be judged more harshly for rejecting the advice of a state-of-the-art tool. That matters even when the AI recommendation is nonstandard: if physicians receive a nonstandard AI recommendation, they do not necessarily make themselves safer from liability by rejecting it.

The bottom line is that, all other things being equal, doctors tend to reduce their liability by accepting, rather than rejecting, the advice from AI.

The Implication: It Is About Normalization

The main implication is that tort law doesn't impose as much of a barrier to medical AI as some have suggested. One reason for that might be the increasing normalization of AI technology. As the article discusses, jurors focus on what seems normal, and they're encouraged to do so by the definition of the standard of care: jurors evaluate what the normal or average physician would do. So, if high-technology tools are not surprising in a medical context, then the typical, conventional physician should be expected to adhere to those tools.

That, of course, doesn't mean that an AI recommendation will be right every time, or that blindly following it is the way to reduce liability, but it does suggest that a plaintiff's theme focusing purely on substituted judgment (i.e., "she surrendered her own medical choices and just let the machine choose") may not be successful. After all, doctors are expected to use the best technology, and disregarding that advice might be riskier in the long run. The authors predict: "As AI becomes more common, any tort law incentive to accept AI recommendations will only strengthen further."

The broader point is that jurors tend to look for what fits within the range of expected and normal actions. Even in cases not involving AI, physicians will have a strong incentive to teach the jury what is typical and to normalize the knowledge, choices, and actions of the defendant.

____________________

Price, W. N., Gerke, S., & Cohen, I. G. (2021). How Much Can Potential Jurors Tell Us About Liability for Medical Artificial Intelligence? Journal of Nuclear Medicine, 62(1), 15–16.
