AI in the Ozarks—A Conversation With Dr. Eric Spann of Baxter Health
Eric Spann, MD, is a family medicine physician at Baxter Health, based in Mountain View, Arkansas.
Q: Dr. Spann, how do you envision AI impacting your day-to-day work, your patient interactions and what defines good medicine?
A: If we thought that addressing “Dr. Google” was a big part of our daily work, this is going to be Dr. Google on steroids. Patients are now going to have a large language model that’s giving them answers, and we better be prepared to understand how to use this technology and where these answers come from. Patients are going to be savvy about using AI within a short period of time, and if they see that we’re not technically adept at utilizing these technologies, they’re going to lose some level of respect for us for not being on the cutting edge.
I’m a classically trained physician. I’ve been practicing for 31 years. I got on the AI train the day an article by Henry Kissinger, Eric Schmidt and Daniel Huttenlocher came out in The Wall Street Journal back in February 2023. That day, I realized that this technology was going to change the way we delivered care to patients. I was still doing my MBA, and I started talking to professors about it, getting certifications so I could understand prompt engineering and how to better use the technology.
That said, I fear that we’re going to get further erosion of the basic skill set that a good physician must have: taking the history, examining the patient and having the clinical understanding that comes through deep study, logic and knowing the facts, the things that allow a physician to fully function in a crisis without a book, a tablet or a phone in hand. You don’t want to have a doctor looking something up in a crisis; they better know right away what to do. If you don’t have that strong foundation of the basics that comes from hard work, training and repetition, and you depend instead on technology, some external force, you’re like a house with nice furnishings built on a weak foundation, at risk of crumbling. I fear a lot of physicians, especially younger ones who are growing up depending on these technologies, won’t know what to do when the electricity goes out.
I think that it’s incumbent on our educational institutions to address this, and I see it right now in the master’s specialization program in machine learning and data analytics that I’m pursuing to add to my MBA. I see students who cannot function without AI, without ChatGPT. I don’t want to be critical of the younger generation. Every generation has its own strengths, but something’s been lost. And I’m not some old curmudgeon; I’m into this stuff. I’m on the front edge of it as far as physicians go. But I fear those who are less resilient may depend on it and become weaker, like someone with a crutch.
By contrast, for those who do have a strong foundation, a good work ethic and a deep knowledge of the basics, AI is going to simply revolutionize their ability to gather and synthesize information and be more efficient. If those efficiencies are used on top of that strong foundation, then you’ve got something. It’s a behemoth of a tool if you know how to interact with it properly: suppress it when you need to, prompt the algorithm’s search capabilities to refine diagnosis and treatment options, and so on.
Q: You’ve touched on a few challenges we see with the growing influence of not just AI but our reliance on all digital systems. Among them is the need for appropriate fail-safes for when systems go offline (eg, cyber breaches), but also the need for expertise to assess whether an AI’s output is right or wrong.
A: This makes me think of the order of knowledge: from not knowing what you don’t know, to knowing what you don’t know, to knowing what you know, to knowing without realizing you know it, that transition to mastery. If the patient doesn’t know what they don’t know, and they come in and think they have an answer from AI, they’ll want to be worked up for everything. If physicians can understand AI, they can suppress what’s not reasonable, filter information, take care of the patient’s needs and show them how this AI search is taking in things that really don’t apply to them.
In the case of an inexperienced physician who is dependent on technology, if the digital system goes offline and they don’t know what they don’t know, now they’re on two crutches. It’s just like triathlon training. If someone says they want to do an Ironman race, and I coach and train them, but they don’t have the basic aerobic foundation and they don’t understand that they’ve got to build up some other skill set, it’s going to take years to get them ready for the race.
Q: How should health systems approach matters of AI governance, accountability and ethical deployment? How do we ensure that a health system is embedding AI the right way and setting staff up for success?
A: This is more colloquial, but they should approach it with a childlike sense of wonder. If you approach this with an open mind, you’re not going to rule out things prematurely and you’ll see this universe of possibilities.
I see governance as top-down. It’s all about leadership. If the CEO and his direct reports do not have an optimistic and open-minded view of what AI could do, they’re going to start ruling out things that could have eventually benefited them. AI needs to be implemented by those who understand what it can and cannot do, and that takes a lot of education and intellectual curiosity. You also need to have someone in your organization who is your AI champion—somebody who has technical and clinical skills, probably a doctor, because people are going to follow that person in health care. They might pair with an executive champion.
AI also needs to solve a business problem better than working without it would. But it takes work to define the business problems that we need to solve. I’ve been asked the question, “Do we need to get into AI?” so many times, and I respond, “What do you mean get into AI? You’re already into AI if you have an iPhone.” That question is so far removed from the one that needs to be asked, which is: What problems can we not solve right now with the data and information we have and our current way of processing them, and how can we use this technology to be more precise and more efficient so that our patients get better care and we save money?
We need to understand what the right questions are. We must start from a granular level at the bottom and build to wisdom.
So structurally, it’s the CEO, all the way down to the front line with physicians and other clinicians involved throughout. Make it concise—you don’t want to have a 25-person committee. When you have the structure, you emphasize two things: transparency and communication. People need to understand the use cases, how AI is going to benefit the patient, how it’s going to help staff, how it’s going to help the organization, and why it’s right. If it’s not right, it’s worthless.
Then technically, is the AI algorithm transparent? Can you explain why you’re getting the answers you’re getting? That is why I’m going back to school for a PhD in this. I need to be able to someday explain to health system leaders or to a physician’s group or to a group of students, how are we getting these answers? How is this algorithm developed? How are we filtering its results? How are we training it? You also need audits and corrective measures, just like we have in our clinical systems. It’s just like the person training for a triathlon. Where are you at? How’s your progress? Where are you underperforming? We need to have checkups or quality assurance processes for ongoing and continual improvement strategies because we’re going to have AI filtering throughout everything that we do.
This approach to governance helps to use AI safely, ethically and effectively, and when it’s applied with real skill and precision, that leads to the greatest good and welfare of patients in a health care system.
Q: Thinking about existing or theoretical applications of AI, what is something that you’re excited about or you think has a lot of potential?
A: One of my professors goes all over the world talking with organizations about AI, and he actually talks about me, this primary care doctor in rural Arkansas, creating a 3D avatar with my voice and training it to answer my patients’ questions 24/7 using a large language model that I’m developing. Imagine what that could mean for a patient if a doctor’s avatar talked to them and answered questions with their personality and their way of speaking (like with my southern drawl): instant access to information. We can take a large language model and teach it what to do and what not to do. There’s a risk to that too, since the sword cuts both ways, but I’m excited about having transformers out there that can be honed and used.
My favorite thing to do is sit and research topics in health care that I’m interested in that are being misused or that could be used better. I don’t mind putting in the work, but the leverage you get with AI is twentyfold. I can get 20 times more things done well than I could five years ago because I have access to this tool that’s like a free research assistant or a secretary, and it will help me be better.
The other area is decision support. I use AI every day. When dealing with patients, I’ll think, what am I missing here? What other things could I be thinking of? Most of the time, I’m right and I haven’t missed anything, but catching something once a week is enough. I had a patient come in the other day who asked me about a problem with a kind of neuropathy, and I told her, “Don’t worry about it; you’re going to be fine.” But I put her symptoms into AI, and I was wrong. She had a special protein problem with her neurons, and one of the labs showed this little change, and I was then able to reassure her and point her in the right direction, to somebody smarter than me. And I would have missed it if it had not been for my ability to utilize AI.
Q: An AI application that got a lot of attention last year was ambient scribing. The vendor market is crowded, and solutions are still developing, but we see general optimism there. What are your thoughts?
A: It all comes down to burnout. I think that application is probably the most boring, but the most useful. We have a platform within our health system that has an ambient scribe function, but I don’t use it because I have templates that reflect how I think and how I write, and I use those more efficiently. I don’t have to reread things. The one thing that we cannot do as physicians is go back and reread something. It just disembowels us as far as time goes, because we’ve already got cognitive overload from all the other things that we’re having to do now.
Thirty-one years into this job, one of the reasons I’m going back to school is that health care has changed so much that I feel burned out every day. I’ve got a lot of energy, and I still have a life outside of work, but the rise in chronic illness, the cognitive overload, all these trends are hurting the labor force. If we don’t address that, we’re going to see some sea changes in the next few years. One of the ways we can avoid that is to take some of this documentation burden away. I spend about four hours a week after hours on documentation, but that is because I choose to see more patients rather than actively document. I’ll do some of the paperwork later, but it’s not a lot. Overall, I do think this is one of the areas that needs more funding behind it, and it needs a lot more critique to make sure it’s accurate, because the first time a physician doubts that the scribe is accurate and has to go back and search to make sure, it will just ruin the whole process.
Q: What would you say to physicians who view AI as a threat to their profession or feel overwhelmed by the pace of technological change in health care?
A: Can you blame doctors or nurses for being threatened when there are hundreds of articles out there that question if AI will replace them? Of course you’re going to feel threatened, but I would tell them to relax. The primary thing about this issue and what I’m most passionate about is this: people want to know that there’s another human being on the other side of an interaction. Health care is a very personal interaction between two people, and that has the potential to be lost. If you doubt me, tell me about your last experience with an automated voice response system.
I will say that technology improvements in documentation, billing and claims, decision support, and interoperability are mind-boggling if they’re done right. In caregiving, it is truly good when you get information quickly and efficiently to a physician or nurse who’s well trained. AI, though, has no common sense or judgment. It has no perception of the accuracy of the information it’s being fed. Experienced, compassionate, human judgment is always going to trump an algorithm regarding uniquely human needs; but an experienced, compassionate human armed with the nearly infinite processing power of AI is going to be unbeatable.
I don’t think doctors have anything to fear, but they do need to gain the expertise and experience to utilize the technology instead of trying to ignore it or marginalize it. That’s foolish. And that’s why I’m going back to school. A person who has health care experience and is well trained can take that academic knowledge and understanding of machine learning and use it to train models the right way, so that they get to the root of the patient’s problem.
Patients can interact with AI, but if you can’t look into their eyes, you can’t understand that there’s something else behind it. There’s something distinctly human to that interaction of sitting, talking and having our needs addressed that gets beyond the symptom to the root issue; a machine can miss that, but that is what primary care doctors deal with daily. The thing that I think we need to take heart in is that we’re always going to be needed. There’s something unique about a human being understanding another human being.
Source: Kissinger H, Schmidt E, Huttenlocher D. ChatGPT heralds an intellectual revolution. The Wall Street Journal. February 24, 2023.