To quickly define terms, AI is the capability of a computer system to mimic human cognitive functions, such as learning and problem solving. A large language model (LLM) is a type of AI that uses deep learning techniques and large data sets to understand, summarize, generate and predict new content. ChatGPT, powered by an LLM, is a generative AI model designed to understand and produce human-like text responses based on input provided. Released in November 2022 by OpenAI, ChatGPT now has 100 million users worldwide; alternatives include Google’s Bard and Microsoft’s Bing.
We share an early overview of some of the most compelling benefits and drawbacks of AI’s use in medicine, albeit with a few crucial caveats. While the rise of AI may be viewed as alarming, keep in mind that it is a nascent, still-evolving technology. What is true today will be superseded by new developments, improvements and regulations tomorrow. Additionally, the physician’s oath to ‘first, do no harm’ will continue to guide medicine’s measured approach to implementing technological advances. If you’re interested in learning more, we recommend the MIT Technology Review podcast ‘In Machines We Trust’, and the books The AI Revolution in Medicine: GPT-4 and Beyond by Lee, Goldberg and Kohane, and Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again by Eric Topol, MD.
Technology titans like Microsoft co-founder Bill Gates have described its promise in sweeping terms: “AI is on the verge of making our lives more productive and creative. But it also has the potential to help us solve some of society’s biggest challenges, like improving healthcare, saving energy, and making it easier to feed the world,” he said. Dr. Andrew Ng, a recognized pioneer in machine learning, described it as the “new electricity,” adding, “I have a hard time thinking of an industry that I don’t think AI will transform in the next several years.”
In medicine, the potential is particularly exciting, according to Eric Topol, MD, a renowned physician-scientist and futurist. “The next big thing is multimodal AI, which collects all the data that makes us unique—anatomical imaging, physiological sensors, genome, microbiome, metabolome, immunome, environmental and social determinants, our electronic health records with lab results, family history and longitudinal follow-up—along with sources of medical knowledge, and quickly processes and analyzes it. Once you do that, you not only can better manage a condition like diabetes or hypertension in real-time, but in the future, prevent conditions that people are at high risk for from ever occurring.”
Douglas Grimm, attorney and healthcare practice leader at ArentFox Schiff, also views AI’s predictive capabilities as its greatest promise. “AI may someday inspire a paradigm shift in care – instead of the patient calling the physician at 3 a.m. with concerning symptoms, the physician will have earlier received an analysis of the patient’s risk based on data from AI-enabled remote monitoring, and proactively guided them to prevent a cardiac event.”
For all its potential, however, Grimm recommended a cautious approach to AI, citing the lack of regulation around data security and confidentiality, as well as the need for guardrails to mitigate potential medical misinformation.
American Medical Association President Jesse Ehrenfeld, M.D., M.P.H., expressed the concerns of many in healthcare when he told us: “While AI-enabled products show tremendous promise in helping alleviate physician administrative burdens and may ultimately be successfully utilized in direct patient care, OpenAI’s ChatGPT and other generative AI products currently have known issues, including fabrications, errors, and inaccuracies. For AI-enabled tools to truly live up to their promise, they must first earn—and then retain—the trust of patients and physicians. Just as we demand proof that new medicines and biologics are safe and effective, so must we insist on clinical evidence of the safety and efficacy of new AI-enabled healthcare applications.”
According to Alan Karthikesalingam, MD, PhD, Google Health’s lead researcher on Med-PaLM 2, an AI tool that made headlines for achieving 85% accuracy on the U.S. medical licensing exam: “AI on its own cannot solve all of healthcare’s problems. Data and algorithms must be combined with language and interaction, empathy and compassion. What makes us healthy is complicated.”
Tinglong Dai, PhD, a Johns Hopkins University professor who has extensively studied AI’s effects on healthcare, said he has high confidence in its assessment of radiological images, but lower confidence in guidance from tools like ChatGPT. “AI can eventually serve as a very capable colleague, and the physicians I work with here are amazed at its accurate, and even compassionate responses. But 20% of the time the advice is completely wrong or unfounded—it’s like an eager medical student who wants to make an impression on their professors and tries to pick up patterns, but misses the underlying logic. Right now it’s still being tested and used in situations where no harm can occur, but if people start relying on it, that would be dangerous.”
Dr. Isaac Kohane, chair of the Department of Biomedical Informatics at Harvard Medical School, advised: “At present, AI should be used for where human beings are the weakest — namely, in knowing everything about all their patients and being as alert at 6:00 in the evening as they are at 8:00 in the morning. I don’t think that AI should be used instead of the human intuition, the human contact, and the human common sense that doctors bring to their patient interactions.”
As an addition to the physician’s growing toolbox, AI has potential value, believes Specialdocs Consultants CEO Terry Bauer, a senior healthcare executive who’s worked with thousands of doctors in his decades-long career. “It could help practices with administrative tasks, data entry and report generation and possibly claims documentation and denial management. AI may also enhance the diagnostic process, and as a result, minimize unnecessary testing. All this said, I cannot envision AI matching the judgment, intelligence or experience of a dedicated physician who thoroughly examines and listens to their patients.”
When asked about its own future, ChatGPT thoughtfully responded: “Ensuring patient privacy, addressing biases in AI, and maintaining the human touch in healthcare are critical considerations that must be addressed. ChatGPT is not a replacement for human expertise but a valuable ally in the pursuit of better healthcare outcomes for all.”
AI in Action in Medicine
From early disease detection to accelerated drug discovery to 24/7 virtual health assistants, the applications for AI abound. Below are just a few examples of AI being utilized in healthcare:
✚ At Google Health, AI research led to the development of an automated tool that uses an AI camera to detect diabetic retinopathy in less than two minutes.
✚ At Cedars-Sinai, investigators are leveraging AI algorithms to identify early signs of pancreatic cancer, and to predict the likelihood of coronary heart disease and sudden cardiac arrest.
✚ At Mayo Clinic, the cardiology team uses AI-guided electrocardiograms to detect faulty heart rhythms before symptoms appear, and to identify the presence of a weak heart pump, preventing future heart failure.
✚ At the AI & Tech Collaboratory for Aging Research at Johns Hopkins, the team is exploring robots that can help patients with cognitive impairments, dementia or Alzheimer’s navigate daily living tasks; using Alexa to administer cognitive tests at home; and configuring Apple Watches to provide alerts of possible falls or wandering.
Sources:
AI in Healthcare with Dr. Eric Topol https://youtu.be/s7vur7ckBE0?si=_9sewIVcAAHc2n1g
AI Will Make Medicine More Human Again https://youtu.be/zmID4msEk-Y?si=qzwFsRBUE2gsNT0U
Groundbreaking Research in Health AI, The Check Up, Google Health https://youtu.be/3Ud-BMOCkDI?si=dOsnjb4LMKiinMta
Is Medicine Ready for AI? NEJM podcast https://www.nejm.org/action/showMediaPlayer?doi=10.1056%2FNEJMdo007065&aid=10.1056%2FNEJMp2301939&area=
Widner, K., Virmani, S., Krause, J. et al. Lessons learned from translating AI from development to deployment in healthcare. Nat Med 29, 1304–1306 (2023). https://doi.org/10.1038/s41591-023-02293-9
A Better Model of Heart Disease Prediction https://www.cedars-sinai.org/discoveries/better-model-heart-disease-prediction.html
AI in Cardiovascular Medicine https://www.mayoclinic.org/departments-centers/ai-cardiology/overview/ovc-20486648
Can We Trust AI? https://hub.jhu.edu/2023/03/06/artificial-intelligence-rama-chellappa-qa/