Authored by: Peggy Pilon, BSN, MS, RN, VP of Clinical Success
I came across a compelling article by freelance writer Lisa Bannon called “When AI Overrules the Nurses Caring for You.” It explores a dilemma in nursing: nurse intuition versus AI-supported healthcare technologies. In the article, Ms. Bannon describes a situation where a “nurse’s gut” conflicts with artificial intelligence-driven algorithms when deciding what is best for patient care. She shares that a Registered Nurse with 15 years of oncology experience “relies on her observation skills to make life-or-death decisions.” So when an alert said her patient in the oncology unit…had sepsis, she was sure it was wrong. “I’ve been working with cancer patients for 15 years so I know a septic patient when I see one,” she said. “I knew this patient wasn’t septic.”
Lisa Bannon goes on to say, “artificial intelligence and other high-tech tools, though nascent in most hospitals, are raising difficult questions about who makes decisions in a crisis: the human or the machine?” But are these situations truly difficult? Or do we need to take a step back and take a closer look?
As we look at nursing, we know experienced nurses are knowledgeable about the field (and numerous other aspects of healthcare) and are able to identify, immediately and comprehensively, the needs of each patient. Experienced nurses practice intuitively within the Professional Nurse’s Scope of Practice. This intuitive practice develops through extensive education and training, combined with strong critical thinking skills applied daily at the bedside. “Each RN gains knowledge about nursing practice in such a way that every clinical experience becomes a lesson which informs the next experience.”
Nurses Are Humans Too
But nurses are human beings, and human beings can and will make errors. Hence the old adage…“To err is human.” A study from Johns Hopkins estimated that 250,000 people in the US die every year due to medical errors; other reports put the number closer to 440,000 per year. The Joint Commission reported 1,441 sentinel events in 2022. These errors reflect the entire medical community, not just the nursing profession. Nevertheless, the numbers are staggering and should give us all pause.
Patient Specifics Matter
It is also important to look at this patient’s particular circumstances. The nurse described caring for a patient with previously diagnosed leukemia who had recently been admitted to the hospital. The literature shows that sepsis is responsible for 970,000 hospital admissions per year, and that number keeps rising. We also know that 1.7 million adults develop sepsis and 270,000 die from it each year. Sepsis is the leading cause of death in US hospitals, with many sources citing a 30-50% mortality rate once a patient becomes septic. Hence, infection prevention remains one of the Joint Commission’s National Patient Safety Goals for 2023. In addition, we know that leukemic patients, and all patients with cancers of the blood, are highly susceptible to sepsis. A leukemic patient in early sepsis can present very differently than a non-cancer patient: leukemic patients are often immunosuppressed and therefore may not show the typical “pre-sepsis picture” of fever, elevated white blood cell counts, and so on. Patients with any type of cancer have roughly ten times the risk of developing sepsis, more than 1 in 5 sepsis hospitalizations involve cancer patients, and cancer patients have four times the incidence of severe sepsis compared to non-cancer patients. This is why the AI triggered the prompt to rule out sepsis in this patient.
The nurse in the article received the notification of potential sepsis in her leukemic patient and did not believe the AI. She states she still “…drew blood from the patient, even though that could expose him to infection and run up his bill.” But what is really involved here? The sepsis alert recommended that the nurse draw blood for lab tests and blood cultures. Yes, this procedure carries some risk. However, blood tests and cultures are performed on hospitalized patients routinely. Provided the nurse uses proper aseptic technique, there is no reason the patient would be placed in harm’s way. And yes, the hospital does charge for all laboratory tests performed on patients.
In every part of healthcare, the risks and benefits of care are weighed in every patient situation. In this case, why would an experienced RN risk missing sepsis in a patient with leukemia, when simple blood tests are all that is required to rule it out? The risk of this patient dying of sepsis far outweighs the cost of a few blood tests. In this instance, the oncology nurse was right; her patient didn’t have sepsis. But what if next time she isn’t right? What if her intuition fails next time? It’s a gamble we cannot take in healthcare.
AI in Healthcare: What’s the Point?
The whole reason for AI technologies in healthcare is to support the clinician’s knowledge base and help prevent medical errors. AI is designed to process more data and identify patterns faster than the human brain can, and it can facilitate decision-making by increasing efficiency. Many AI-based solutions rely on math and statistics, with the goal of driving greater consistency, improved accuracy, and greater efficiency in healthcare delivery. However, AI was never intended to replace clinicians at the bedside, but rather to improve their overall experience. The hospital in this case weighed in, stating “…its technology tools are a starting point for further clinical assessment, and protocols such as taking blood after a sepsis alert are recommended, but not required. If a nurse feels strongly this does not make sense for their patient they should use their clinical judgment and contact the doctor,” the medical center said. “The ultimate decision-making authority resides with the human physicians and nurses.” This statement is the guiding principle for using AI in any healthcare setting.
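To make the idea concrete, here is a minimal sketch of how a simple threshold-based screening rule can flag a patient for further work-up. It is loosely modeled on the published qSOFA (quick Sequential Organ Failure Assessment) criteria; the function names and the binary-flag design are my own illustration, not the actual system described in the article, and real hospital sepsis alerts draw on far richer data and statistical models.

```python
# Illustrative only: a toy screening rule in the spirit of the qSOFA
# criteria (respiratory rate >= 22/min, systolic BP <= 100 mmHg,
# altered mentation, i.e. Glasgow Coma Scale < 15). Real sepsis alerts
# use many more inputs; this is a teaching sketch, not clinical guidance.

def qsofa_score(respiratory_rate: int, systolic_bp: int, gcs: int) -> int:
    """Return a 0-3 qSOFA-style score from three bedside observations."""
    score = 0
    if respiratory_rate >= 22:   # tachypnea
        score += 1
    if systolic_bp <= 100:       # hypotension
        score += 1
    if gcs < 15:                 # altered mentation
        score += 1
    return score

def sepsis_alert(respiratory_rate: int, systolic_bp: int, gcs: int) -> bool:
    """Flag the patient for further work-up when the score is 2 or more."""
    return qsofa_score(respiratory_rate, systolic_bp, gcs) >= 2

# A patient with rapid breathing and low blood pressure trips the alert,
# prompting further assessment such as blood tests. The alert is a
# starting point for clinical judgment, not a diagnosis.
print(sepsis_alert(respiratory_rate=24, systolic_bp=95, gcs=15))  # True
```

The point of the sketch is the hospital’s stated principle above: a rule like this can fire consistently on every patient, every shift, but the nurse and physician still decide what the flag means for this particular patient.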
There are many reasons AI technologies do not yet have full clinical support, including fear of the unknown, change-management challenges, and the potential for inaccurate predictions. One of the keys to improving healthcare is using all the available “tools in the toolbox.” We need to continue to educate clinicians about the capabilities and limitations of AI technology, while supporting clinical expertise and intuition.
The Future of AI in Healthcare
We at CalmWave understand the importance of implementing AI technologies in conjunction with championing clinical expertise. CalmWave™ is an operations-based artificial intelligence (AI) platform that captures, analyzes, and synthesizes real-time data from dozens of data sources (monitors, labs, orders, findings, etc.) to empower hospitals with the intelligence critical to improving patient outcomes, optimizing operations, and retaining staff. The AI technology from CalmWave™ presents objective solutions that reduce non-actionable alarms by providing proper alarm management insight to the caregiver, thereby decreasing alarm fatigue, excessive cognitive load, and burnout. At CalmWave, we are focused on supporting clinicians’ workflows so they have time to care for patients.
For the sake of patient safety, there’s enough room at the bedside for both of us…the experienced RN and proven AI technologies. Collaboration is of the utmost importance…our patients are depending on us.
Link to the article: https://www.wsj.com/articles/ai-medical-diagnosis-nurses-f881b0fe