
Risks and benefits of an AI revolution in medicine – Harvard Gazette

November 12th, 2020 11:58 pm

"If you start applying it, and it's wrong, and we have no ability to see that it's wrong and to fix it, you can cause more harm than good," Jha said. "The more confident we get in technology, the more important it is to understand when humans can override these things. I think the Boeing 737 Max example is a classic example. The system said the plane is going up, and the pilots saw it was going down but couldn't override it."

Jha said a similar scenario could play out in the developing world should, for example, a community health worker see something that makes him or her disagree with a recommendation made by a big-name company's AI-driven app. In such a situation, being able to understand how the app's decision was made, and how to override it, is essential.

"If you see a frontline community health worker in India disagree with a tool developed by a big company in Silicon Valley, Silicon Valley is going to win," Jha said. "And that's potentially a dangerous thing."

Researchers at SEAS and MGH's Radiology Laboratory of Medical Imaging and Computation are at work on both problems: the data demands of training and the opacity of AI decision-making. The AI-based diagnostic system to detect intracranial hemorrhages, unveiled in December 2019, was designed to be trained on hundreds, rather than thousands, of CT scans. The more manageable number makes it easier to ensure the data is of high quality, according to Hyunkwang Lee, a SEAS doctoral student who worked on the project with colleagues including Sehyo Yune, a former postdoctoral research fellow at MGH Radiology and co-first author of a paper on the work, and Synho Do, senior author, HMS assistant professor of radiology, and director of the lab.

"We ensured the data set is of high quality, enabling the AI system to achieve a performance similar to that of radiologists," Lee said.

Second, Lee and colleagues figured out a way to provide a window into an AI's decision-making, cracking open the "black box." The system was designed to show a set of reference images most similar to the CT scan it analyzed, allowing a human doctor to review and check the reasoning.
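That kind of explanation amounts to a nearest-neighbor lookup over learned image features. The Python sketch below is a minimal illustration of the idea, assuming embeddings taken from some layer of a trained network; the function name and the choice of cosine similarity are illustrative assumptions, not the team's actual implementation.

```python
import numpy as np

def most_similar_references(query_embedding: np.ndarray,
                            reference_embeddings: np.ndarray,
                            k: int = 5) -> np.ndarray:
    """Return indices of the k reference scans whose embeddings are
    closest (by cosine similarity) to the analyzed scan's embedding.

    query_embedding: shape (d,) feature vector for the analyzed CT scan.
    reference_embeddings: shape (n, d) matrix of labeled reference scans.
    """
    # Normalize so that dot products equal cosine similarities.
    q = query_embedding / np.linalg.norm(query_embedding)
    refs = reference_embeddings / np.linalg.norm(
        reference_embeddings, axis=1, keepdims=True)
    sims = refs @ q                # shape (n,) similarity scores
    return np.argsort(-sims)[:k]  # indices of the top-k matches

# Usage (hypothetical): embeddings might come from the diagnostic
# network's penultimate layer, so a reviewing doctor can be shown
# the k labeled scans the model found most similar.
# top = most_similar_references(query_vec, reference_matrix, k=3)
```

A doctor shown those top-k scans, with their known diagnoses, can then judge whether the system's comparison points actually support its conclusion.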

Jonathan Zittrain, Harvard's George Bemis Professor of Law and director of the Berkman Klein Center for Internet and Society, said that, done wrong, AI in health care could be analogous to the cancer-causing asbestos that was used for decades in buildings across the U.S., with widespread harmful effects not immediately apparent. Zittrain pointed out that image analysis software, while potentially useful in medicine, is also easily fooled. By changing a few pixels of an image of a cat (still clearly a cat to human eyes), MIT students prompted Google image software to identify it, with 100 percent certainty, as guacamole. Further, a well-known study by researchers at MIT and Stanford showed that three commercial facial-recognition programs had both gender and skin-type biases.
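The guacamole result relied on an adversarial perturbation: tiny, targeted pixel changes that flip a classifier's output. As a rough illustration of the general technique (not the MIT students' actual, more sophisticated attack), the fast gradient sign method nudges every pixel slightly in the direction that increases the model's loss:

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Fast gradient sign method: shift each pixel by +/-epsilon along
    the sign of its loss gradient, producing an image that looks
    unchanged to humans but can flip the model's prediction.

    image: tensor of shape (1, C, H, W), values in [0, 1].
    label: tensor of shape (1,) with the true class index.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Move each pixel a tiny step in the loss-increasing direction.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

Even with epsilon small enough that the change is imperceptible, such perturbations can drive many image classifiers to confident misclassifications, which is the fragility Zittrain was pointing to.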

Ezekiel Emanuel, a professor of medical ethics and health policy at the University of Pennsylvania's Perelman School of Medicine and author of a recent Viewpoint article in the Journal of the American Medical Association, argued that those anticipating an AI-driven health care transformation are likely to be disappointed. Though he acknowledged that AI will likely be a useful tool, he said it won't address the biggest problem: human behavior. Though they know better, people fail to exercise and eat right, and continue to smoke and drink too much. Behavior issues also apply to those working within the health care system, where mistakes are routine.

"We need fundamental behavior change on the part of these people. That's why everyone is frustrated: Behavior change is hard," Emanuel said.

Susan Murphy, professor of statistics and of computer science, agrees and is trying to do something about it. She's focusing her efforts on AI-driven mobile apps with the aim of reinforcing healthy behaviors for people who are recovering from addiction or dealing with weight issues, diabetes, smoking, or high blood pressure, conditions for which the personal challenge persists day by day, hour by hour.

The sensors included in ordinary smartphones, augmented by data from personal fitness devices such as the ubiquitous Fitbit, have the potential to give a well-designed algorithm ample information to take on the role of a health care "angel on your shoulder."

The tricky part, Murphy said, is to truly personalize the reminders. A big part of that, she said, is understanding how and when to nudge: not during a meeting, for example, or when you're driving a car, or even when you're already exercising, so as to best support adopting healthy behaviors.

"How can we provide support for you in a way that doesn't bother you so much that you're not open to help in the future?" Murphy said. "What our algorithms do is they watch how responsive you are to a suggestion. If there's a reduction in responsivity, they back off and come back later."
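Murphy did not spell out the algorithm, but the behavior she describes resembles a simple back-off controller: lower the chance of sending another nudge when recent nudges are ignored, and ramp back up when the user responds. The class below is a toy sketch under that assumption; all thresholds and update rules are invented for illustration and are not her group's actual method.

```python
import random

class NudgeScheduler:
    """Toy back-off policy: nudge less often when the user stops
    responding, and ramp back up as responsiveness recovers.
    (Illustrative sketch only, not Murphy's actual algorithm.)"""

    def __init__(self, send_prob=0.8, min_prob=0.05,
                 decay=0.7, recover=1.2):
        self.send_prob = send_prob  # current probability of nudging
        self.min_prob = min_prob    # never drop to zero: "come back later"
        self.decay = decay          # multiplier after an ignored nudge
        self.recover = recover      # multiplier after a response

    def should_nudge(self, rng=random) -> bool:
        # Randomize so the app backs off gradually rather than going silent.
        return rng.random() < self.send_prob

    def record_outcome(self, responded: bool) -> None:
        if responded:
            self.send_prob = min(0.95, self.send_prob * self.recover)
        else:
            self.send_prob = max(self.min_prob, self.send_prob * self.decay)

# Usage: after each suggestion, report whether the user acted on it.
# scheduler = NudgeScheduler()
# if scheduler.should_nudge():
#     deliver_suggestion()              # hypothetical app call
#     scheduler.record_outcome(responded=True)
```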

The apps can use sensors on your smartphone to figure out what's going on around you. An app may know you're in a meeting from your calendar, or talking more informally from the ambient noise its microphone detects. It can tell from the phone's GPS how far you are from a gym or an AA meeting, or whether you are driving and so should be left alone.
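A context filter of the kind described might reduce those sensor readings to a single "is this a good moment?" check. In the hypothetical sketch below, the boolean inputs stand in for whatever calendar, microphone, and GPS inference the real app performs:

```python
def okay_to_nudge(in_calendar_meeting: bool,
                  ambient_speech_detected: bool,
                  is_driving: bool,
                  currently_exercising: bool) -> bool:
    """Gate nudges on context: suppress them during meetings,
    conversations, driving, or an ongoing workout.
    (Hypothetical sketch of the behavior described in the article.)"""
    if is_driving:               # GPS speed suggests a moving vehicle
        return False
    if in_calendar_meeting:      # calendar says the user is busy
        return False
    if ambient_speech_detected:  # microphone picks up a conversation
        return False
    if currently_exercising:     # already doing the healthy behavior
        return False
    return True
```

In a real app, each of those flags would itself be an inference with some error rate, which is part of why Murphy calls personalizing the reminders the tricky part.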

Trickier still, Murphy said, is how to handle moments when the AI knows more about you than you do. Heart rate sensors and a phone's microphone might tell an AI that you're stressed out when your goal is to live more calmly. You, however, are focused on an argument you're having, not its physiological effects and your long-term goals. Does the app send a nudge, given that it's equally possible that you would take a calming breath or angrily toss your phone across the room?

Working out such details is difficult but key, Murphy said, to designing algorithms that are truly helpful, that know you well, but that are only as intrusive as is welcome, and that, in the end, help you achieve your goals.

For AI to achieve its promise in health care, algorithms and their designers have to understand the potential pitfalls. To avoid them, Kohane said, it's critical that AIs be tested under real-world circumstances before wide release.

Similarly, Jha said it's important that such systems aren't just released and then forgotten. They should be reevaluated periodically to ensure they're functioning as expected, which would allow faulty AIs to be fixed or halted altogether.

Several experts said that drawing from other disciplines, in particular ethics and philosophy, may also help.

Programs like Embedded EthiCS at SEAS and the Harvard Philosophy Department, which provides ethics training to the University's computer science students, seek to give those who will write tomorrow's algorithms an ethical and philosophical foundation that will help them recognize bias in society and in themselves, and teach them how to avoid it in their work.

Disciplines dealing with human behavior (sociology, psychology, behavioral economics), not to mention experts on policy, government regulation, and computer security, may also offer important insights.

"The place we're likely to fall down is the way in which recommendations are delivered," Bates said. "If they're not delivered in a robust way, providers will ignore them. It's very important to work with human factors specialists and systems engineers about the way that suggestions are made to patients."

Bringing these fields together to better understand how AIs work once they're in the wild is the mission of what Parkes sees as a new discipline of "machine behavior." Computer scientists and health care experts should seek lessons from sociologists, psychologists, and cognitive behaviorists in answering questions about whether an AI-driven system is working as planned, he said.

"How useful was it that the AI system proposed that this medical expert should talk to this other medical expert?" Parkes said. "Was that intervention followed? Was it a productive conversation? Would they have talked anyway? Is there any way to tell?"

Next: A Harvard project asks people to envision how technology will change their lives going forward.





