Medical Care and Artificial Intelligence as a Judgment

Since February 2019, tens of thousands of patients have been hospitalized at one of Minnesota's largest healthcare organizations, and their discharge planning decisions have been shaped with the help of an artificial intelligence model. Yet few, if any, of those patients know that artificial intelligence was involved in their care.

That is because frontline clinicians at M Health Fairview generally do not mention the artificial intelligence whirring behind the scenes when they talk with patients.

Clinicians at a growing number of prominent clinics and hospitals are turning to artificial intelligence-powered decision support tools, many of which remain unproven. They use the tools to help predict whether patients are likely to deteriorate or develop complications, whether they are at risk of readmission, and whether they are likely to die soon. Yet patients and their family members are often not informed about, or asked to consent to, the use of those tools in their care, a STAT examination has found.

In short, machines that are entirely invisible to patients are increasingly guiding decision-making in the clinic.

Glenn Cohen, a professor at Harvard Law School, said that hospitals and clinicians operate under the assumption that they are not required to disclose the use of artificial intelligence, an assumption that has been neither carefully thought through nor defended. Cohen is the author of one of only a few articles examining the issue, which has received surprisingly scant attention in the medical literature even as research on machine learning and artificial intelligence proliferates.

Artificial Intelligence

In some cases, there may be little room for harm: patients may not need to know about an artificial intelligence system that is merely nudging their doctor to be more thoughtful, such as an algorithm that encourages clinicians to broach end-of-life conversations. In other cases, however, a lack of disclosure means that patients may never know that an artificial intelligence model made a faulty recommendation, one that played a part in denying them needed care or subjecting them to a harmful, unnecessary intervention.

That is a real risk, because some of these models are fraught with bias, and even those demonstrated to be largely accurate have not yet been shown to improve patient outcomes. Some hospitals do not share data on how well the systems work, justifying that decision on the grounds that they are not conducting research. But the effect is that patients have no information about the tools hospitals use in their care, and no way of knowing whether those tools are actually helping them.

Some clinicians worry that bringing up artificial intelligence would derail their conversations with patients, diverting time and attention away from actionable steps patients can take to improve their health and quality of life. Doctors also point out that they, not the artificial intelligence, make the decisions about care; a system's recommendation is just one of many factors a clinician weighs before deciding how to treat a patient.