H. Jack West, MD

Will AI have an algorithm for wishful thinking? Human error as a valued feature in cancer care

As I’ve previously noted, it is becoming increasingly clear that machine learning can match and in many ways exceed the capacities of human physicians, including specialists like medical oncologists, who face the mounting challenge of keeping up with an ever-growing volume of new publications and the increasing complexity of molecularly driven cancer care. The question in the future won’t be whether computer-aided decision making is part of medicine, but rather how the roles of computer algorithms and humans will be integrated and divided. Even the biggest proponents of machine learning in medicine feel that there will always be a role for people in delivering the critical interpersonal component of support and care that has always been a valued part of medicine, even in eras when we knew too little and had too few active therapies to truly reverse the course of most diseases. But has part of the appeal of humans providing cancer care been the imprecision and even systematic error oncologists inject, which often takes the form of an optimism that critics might characterize as delusional?

Of course, when we consider what constitutes a good physician, deep knowledge is only one facet, along with commitment and bedside manner, the last of which is especially critical in an emotionally charged field like cancer care. We should anticipate that humans will be no match for artificial intelligence (AI) when it comes to knowledge base or commitment: the evolving automated processes won’t need sleep or family time, and they can be programmed to avoid the potential conflicts of interest that plague our discussions about human physicians. But human providers should have the upper hand over AI in the emotional work of reading and resonating with patients, which has always been a central component of health care, and over most of the long history of medicine the primary one.

[Image: “The Doctor” by Luke Fildes]

If it isn’t intuitively obvious to them, physicians require very little time and training to learn that there is a role for carefully filtering how they present what they know or expect to a patient or family. Some cases have little or no hope of a favorable outcome, but few patients and families embrace a blunt assessment of hopelessness. Oncologists are among the specialists who face the dilemma of nuanced truth with a huge fraction of their patients. For some, cure may be possible but elusive. For many others, cure is impossible, but we hope to prolong survival, though too often the prolongation we achieve is humbling. And in a minority of patients, we may recognize features that lead us to believe the cancer will respond very poorly and that the patient has little or no hope of doing well. Even if machine learning algorithms help us identify the best approaches for many people’s cancers, cancer biology will impose limits that the most sophisticated algorithms can’t overcome. So AI will likely be able to foresee poor outcomes with greater clarity than our fallible human minds can today. But that is knowledge many people would prefer not to have.

Today, I see many oncologists who are revered by their patients but are not necessarily the most knowledgeable. Their commitment to their patients and their emotional connection more than compensate for their limited command of the broad array of molecular targets or the latest clinical trial results. Moreover, patients regularly flock to health care providers and institutions that may well offer care based not on the best information but on the most hopeful assessment. Some oncologists administer treatments that fall egregiously outside the best standards of care, but an optimistic prediction of benefit can overcome a poor treatment recommendation. Experienced oncologists learn that many, perhaps most, cancer patients are not inclined to abandon treatment when the evidence-based therapeutic options have been exhausted: later lines of treatment are often motivated by a compulsion to offer one more option and an elusive hope, along with the justification that “if I don’t give it, they’ll just go across the street to another oncologist who will.” Oncologists don’t want to be labeled “Dr. Death,” even when they’re conveying the reality that no remaining treatment option has a meaningful probability of helping.

Overall, then, while it is both inevitable and desirable that AI take a central role in helping define and recommend the best treatment approaches for patients, and particularly cancer patients, there will remain an irreplaceable role for humans in translating those recommendations into the care and communication delivered in the clinic and at the bedside. But there is reason to question whether the ability of AI to provide the most accurate, evidence-based assessment is actually what human patients want, or whether the human error of optimism, even frank “wishful thinking,” offered by many of the most beloved oncologists is actually the secret sauce of the practice of oncology. Will machine learning algorithms in medicine have an adjustable “reality vs. rainbow-stat” that lets users modulate how much an assessment is dressed up with the most likely versus the best-case scenarios (perhaps within limits imposed by what the health care system will cover)? I firmly believe that there is no best doctor for everyone, since some patients want the unvarnished truth while others want an unfailingly supportive cheerleader: will AI algorithms let patients calibrate the resolution of the information they receive? Will humans in the future of cancer care serve as a filter for the unapologetic and potentially harsh assessments from algorithms that predict an unfavorable outcome for patients with the most challenging cancers?
