AI can’t replicate this key part of practicing medicine
I’ve heard “WebMD said it could be cancer” countless times in my 15 years working as an emergency medicine physician. I get it: When someone is feeling unwell or hoping a worrying symptom
will go away, it makes sense for them to turn to easily accessible resources. As people become increasingly familiar with artificial intelligence platforms like ChatGPT, it’s only a matter
of time before patients turn to these tools in search of a diagnosis or second opinion. Change is already on the horizon. ChatGPT passed the United States Medical Licensing Exams, the series
of standardized tests required for medical licensure in the U.S. And recently the New England Journal of Medicine announced NEJM AI, a whole new journal devoted fully to artificial
intelligence in clinical practice. These and many other developments have left many wondering (and sometimes worrying) what role AI will have in the future of health care. It’s already
predicting how long patients will stay in the hospital, denying insurance claims, and supporting pandemic preparedness efforts. While there are areas within medicine ripe for the assistance
of AI, any assertion that it will replace health care providers or make our roles less important is pure hyperbole. Even with all its promise, it’s hard to imagine how AI will ever replicate
that gut feeling honed by sitting at the bedside and placing our hands on thousands of patients. Recently, I had an encounter that revealed one of the limitations of AI at the patient’s
bedside, now and perhaps even in the future. While working in the emergency room, I saw a woman with chest pain. She was relatively young without any significant risk factors for heart
disease. Her electrocardiogram — a tracing of the heart’s electrical activity used to diagnose a whole host of cardiac maladies, including heart attacks — was perfect. Her story and physical
exam weren’t particularly concerning for an acute cardiac event, either. And the blood tests we sent — including one that detects damage to heart muscle from a heart attack — were all
normal. Based on nearly every algorithm and clinical decision rule that providers like me use to determine next steps in the care of cases like this, my patient was safe for discharge. But
something didn’t feel right. I’ve seen thousands of similar patients in my career. Sometimes subtle signs can suggest that everything isn’t OK, even when the clinical workup is reassuringly
normal. It’s hard to say exactly what tipped me off that day: the faint grimace on her face as I reassured her that everything looked fine, her husband saying “this just isn’t like her,” or
something else entirely. But my gut instinct compelled me to do more instead of just discharging her. When I repeated her blood tests and electrocardiogram a short while later, the results
were unequivocal — my patient was having a heart attack. After I updated her on what we discovered and the treatment plan, she was quickly whisked away for further care. Most people are
surprised to learn that treating patients in the emergency department involves a hefty dose of science mixed with a whole lot of art. We see some clear-cut cases, the type covered on board
exams that virtually every provider would correctly diagnose and manage. But in reality only a small percentage of the patients we see have “classic” black-and-white presentations. The same
pathology can present in completely different ways, depending on a patient’s age, sex, or medical comorbidities. We have algorithms to guide us, but we still need to select the right one and
navigate the sequence correctly despite sometimes conflicting information. Even then, they aren’t flawless. It’s these intricacies of our jobs that likely cause many providers to cast a
suspicious eye at the looming overlap of artificial intelligence and the practice of medicine. We’re not the first cohort of clinicians to wonder what role AI should play in patient care.
Artificial intelligence has been trying to transform health care for over half a century. In 1966 an MIT professor created ELIZA, the first chatbot, which let users simulate a
conversation with a psychotherapist. A few years later a Stanford psychiatrist created PARRY, a chatbot designed to mimic the thinking of a paranoid schizophrenic. In 1972, ELIZA and PARRY
“met” in the first meeting between an AI doctor and patient. Their meandering conversation focused mostly on gambling at racetracks. That same year work began on MYCIN, an artificial
intelligence program to help health care providers better diagnose and treat bacterial infections. Subsequent iterations and similar computer-assisted programs — ONCOCIN, INTERNIST-I, Quick
Medical Reference (QMR), CADUCEUS — never gained widespread use in the ensuing decades, but there’s evidence that’s finally changing. Health care providers are now using artificial
intelligence to tackle the Sisyphean burden of administrative paperwork, a major contributor to health care worker burnout. Others find it helps them communicate more empathetically with their
patients. And new research shows that generative AI can help expand the list of possible diagnoses to consider in complex medical cases — something many of the students and residents I work
with sheepishly report using from time to time. They all admit AI isn’t perfect. But neither is the gut instinct of health care providers. Clinicians are human, after all. We harbor the same
assumptions and stereotypes as the general public. This is clearly evident in the racial biases in pain management and the disparities in maternal health outcomes. Further, the sentiment
among many clinicians that “more is better” often leads to a cascade of over-testing, incidental findings and out-of-control costs. So even if AI won’t replace us, there are ways it could
make us better providers. The question is how we best combine AI and clinical acumen in a way that improves patient care. AI’s future in health care brings many important questions, too:
For medical trainees who rely on it early in their training, what impact will it have on long-term decision-making and clinical skill development? How and when will providers be required
to inform patients that AI was involved in their care? And perhaps most important right now, how do we ensure privacy as these tools become more embedded in patient care? When I started medical
school two decades ago, my training program required the purchase of a PalmPilot, a personal handheld computer. We were told that these expensive devices were necessary to prepare us to
enter medicine at a time of cutting-edge innovation. After I struggled with its software and glitches, mine stayed in my bag for years. The promise that this technology would make me a better
provider never translated to patient care. In a 1988 interview on the promise of AI in health care in the Educational Researcher, Randolph Miller — a future president of the American College
of Medical Informatics — predicted that artificial intelligence programs for solving complex problems and aiding decision making “probably will do wonders in the real worlds of medicine and
teaching, but only as supplements to human expertise.” Thirty-five years and several AI winters later, this remains true. It seems all but certain that AI will play a substantial role in
the future of health care, and it _can_ help providers do our jobs better. It may even help us diagnose those heart attacks that experienced clinicians can sometimes miss. But until
artificial intelligence can develop a gut feeling, honed by working with thousands of patients, a few near-misses, and some humbling cases that stick with you, we’ll need the healing hands
of real providers. And that might be forever. _Craig Spencer is a public-health professor and emergency-medicine physician at Brown University._