Central Line
Episode Number: 157
Episode Title: AI from A to Z (Part One)
Recorded: March 2025
(SOUNDBITE OF MUSIC)
VOICE OVER:
Welcome to ASA’s Central
Line, the official podcast series of the American Society of Anesthesiologists,
edited by Dr. Adam Striker.
DR. ADAM STRIKER:
Welcome to Central Line.
I'm your host and editor, Dr. Adam Striker. Today we're going to do something
we don't do very often. Rather than a single interview, we're going to share
four short conversations with anesthesiologists who have an interest and
expertise in different aspects of artificial intelligence. Members of the
Committee on Informatics and Information Technology, or CIIT, shared their
thoughts on the big picture, on patient safety and predictive models, on remote
monitoring, and on academic and subspecialty applications. So
let's hear what they had to say.
We're going to start off
with Dr. Beth Minzter, who took a little time to give
us the overview.
Well, Dr. Minzter, welcome to the show. You're going to give us a
quick intro to AI in anesthesiology. So let's start
off with a little bit of a definition. How would you define AI?
DR. BETH MINZTER:
Thank you, Adam, and good
morning. AI can be defined as a field within computer science that aims to allow
computers and algorithms to perform cognitive tasks similar
to humans by learning and recognizing patterns in data. It is concerned
with the computational understanding of what is commonly called intelligent
behavior, and it can simulate cognitive functions of the human mind, such
as pattern recognition and problem solving. AI refers to the development of
computer systems that can perform tasks that would usually require human
intelligence, such as learning, reasoning, problem solving, prediction,
decision making, speech recognition, and perception.
Now, machine learning is
a subset of artificial intelligence that focuses on enabling machines to learn
from data without it being explicitly programmed, generally based on algorithms
that can adapt and learn based on feedback. Machine learning algorithms can
analyze data, learn from it, and make predictions or decisions based on that
learning. Deep learning is a subset of machine learning, and it involves
training artificial neural networks to recognize patterns in data. It is used
in image and speech recognition, natural language processing, and other
applications. Other subsets of AI include robotics, computer vision, and
systems designed to mimic the decision-making abilities of a human expert. AI
systems and learning must be evaluated continuously to assess
validity, safety, accuracy, and reliability, and to avoid bias arising from
incorrect learning or from data sets that are insufficiently comprehensive,
incompatible, too small, or less varied than real clinical practice.
DR. STRIKER:
Well, that is a very
comprehensive definition. Thanks for that. And as you know, people have very
strong feelings about AI. Let's just ask the elephant in the room question,
should we be scared? Or is AI going to solve some of our most salient issues?
DR. MINZTER:
Well, rather than be
scared of it, I think that we can be thoughtfully or intelligently concerned.
Look for and be open to opportunities. Be involved in development, maintain
vigilance, accept the certainty of change, and become educated so that these tools
are used only for good. Rather than be scared, I think perhaps we should have
what I call a tempered enthusiasm for the potentially huge applications for
positive outcomes and gains, all the while maintaining a healthy respect for
the limitations and challenges that exist for its use in clinical practice.
Physicians must and should remain decision makers. We must work to ensure data
privacy and protection and lobby for proper security measures. This technology
is intended to help clinicians prevent events, not just treat them. The word I like
that's often used is enhancement of our decision making
skills, diagnostic accuracy and therapeutic response. So
there's no suggestion of clinician replacement at this time. Physicians will
need to step up and speak out when we identify and recognize concerns. In
direct response to your question, I think there are opportunities to improve
what we currently do to take care of patients rather than solve salient issues
per se. The thought is that AI can help enhance clinical decision making by
physician anesthesiologists, improve outcomes, and reduce negative events and
errors. But systems have yet to master human empathy and situational awareness.
AI, at its core, thrives on information. It can help us look at data in new
ways. The goal is that we can use it to help us make better decisions for our
patients.
DR. STRIKER:
The history of AI is
long and complex and certainly beyond the scope of this conversation. But I do
want to ask you where anesthesiology fits. Has anesthesiology been an early
adopter when it comes to AI in medicine, or is the specialty catching up?
DR. MINZTER:
Well, according to one
source, the term artificial intelligence was first introduced by John McCarthy
in 1955. Its application in medicine has increased in the last two decades,
largely due to the rapid advances in computing technology and cloud storage.
Some sources suggest the first attempts to use algorithms to aid the practice
of anesthesia occurred as early as the 1950s as well. In the last two decades,
anesthesiology has been making large strides in the utilization of AI and has
joined specialties such as radiology and pathology in its use. Similar to other areas of medicine, we make use of much
patient information in our decision making. So
wherever AI is utilized in clinical medical practice, it has the potential for
integration into and influence on the practice of anesthesiology. A challenge to be
met for successful clinical integration is to help anesthesiologists understand
the mechanism by which a prediction is performed by the AI algorithm. In other
words, to limit the, quote, black box, unquote, nature of the algorithms, the
models need to provide adequate insight into the reason a recommendation is
given in a specific clinical situation. Understanding the mechanism is
critical in our anesthesia practice, as the consequences of an incorrect
prediction can be serious.
DR. STRIKER:
That's an excellent
point. So where exactly are we right now? How is AI technology currently being
used, broadly speaking, in the anesthesia space? And is there a difference in
use as it relates to the field of anesthesiology versus individual physicians?
DR. MINZTER:
Well, broadly speaking,
Adam, AI methods can be applied in screening, diagnostic, and therapeutic
techniques. AI technology can be grouped into areas of application involving
depth of anesthesia monitoring, image and visually guided techniques,
prediction of the risks of events during and after anesthesia, and control of
anesthesia such as drug administration. To your last question, Adam, is there a
difference in use? Yes. AI can collect and process
data more quickly than we can as humans, but it requires humans to interpret and act
on those data. AI-driven systems in anesthesiology will need human context and
interpretation. In other words, AI is simply a tool. Though rapidly developing,
it still demands individual physician interpretation and action for proper and
safe use and for continued development.
DR. STRIKER:
Well, another great
explanation. Dr. Minzter, thanks so much for all your
time.
DR. MINZTER:
You're welcome.
DR. STRIKER:
Well, because patient
safety is at the heart of everything we do, we spoke with Dr. Vesela Kovacheva about how AI is
being used for patient safety, and also how it's being
used with predictive models. Dr. Kovacheva, where is
AI having the greatest impact on patient safety?
DR. VESELA KOVACHEVA:
I think there are a lot
of opportunities for this new technology to be integrated into the workflow of
the anesthesiologist. We can think about our
daily work as separated into three main stages:
the preoperative evaluation, then intraoperative maintenance, and
then planning for postoperative recovery. We can integrate different AI
technologies throughout the patient's perioperative journey. So for example,
when we think about preoperative optimization, we can harness all patient
information coming from their electronic health records, from their different
preoperative tests, their medical history, the vital signs, and then ensure
that all of this information gets integrated into the decision-making process
and develop different algorithms, which can help us risk stratify the patients
and potentially target those at high risk for complications where actually
planning, intervening, or even considering different approaches will make a
difference for their recovery. And so in this way,
when the patient presents for their surgery, they are fully optimized, in the
best possible condition, so that they can have the best possible outcome. And
then considering the intraoperative course, we can use different technologies
that can basically be a second pair of eyes, continuously monitoring and
integrating all this information from vital signs, from
intraoperative changes in the patient's condition, or from new labs that
we're drawing, and, again, supporting our decisions to achieve the best,
steadiest, most appropriate intraoperative maintenance for the patient.
Then, going into postoperative recovery, we can use all patient information,
their preoperative as well as intraoperative course, to design the best
postoperative interventions: for example, optimizing the patient's opioid or pain
management, or optimizing their fluid intake, so that we
minimize complications and ensure their safety and a fast recovery.
DR. STRIKER:
Well, let's talk about
predictive models. One challenge for the field is access to high quality data.
What are the challenges with that and also what are
the opportunities?
DR. KOVACHEVA:
Yeah, that's a great
question. Um, this field is rapidly growing, and as anesthesiologists, we are
surrounded by data throughout our daily work, starting from the electronic
health records, which contain a lot of structured data as well as unstructured
data, that is, text coming from different notes or from records of
preoperative tests. In addition to that, we also have vital sign data, which is
time series data, and different waveform data. Sometimes we also use imaging like POCUS
or transesophageal echo. All of these data
modalities can be harnessed, and a significant amount of this
information can be used to create better, more predictive models.
DR. STRIKER:
Well I know bias is a significant concern when it
comes to data, especially as it pertains to artificial intelligence. What kinds
of bias should we be thinking about and how can we detect and address those
biases so that we as physicians can act responsibly?
DR. KOVACHEVA:
Yeah. Bias has been a
topic that has been widely researched recently, and we have a lot of
information about the opportunities and actually the
disadvantages of some of the artificial intelligence algorithms and biases. One
of the main concerns is, we're adopting this technology, and it is very clear
that there are different groups of patients. And due to access to care, to the
nature of the interventions, we may be missing data in a not random way for
some of these patient cohorts. When that happens, the algorithms do not
perform optimally in those patients, and that is the origin of bias.
Because of this missing data, the algorithms may be less effective or
sometimes even harmful if they are applied to those patient
groups. There are different ways to overcome bias. One way is to simply not use the
algorithms for those patients and to harness data from those
patient groups prospectively. Another way is to create, for example, synthetic data in which
we represent how these patients should be managed, again in
order to achieve the most optimal behavior of the algorithms. But I
think, regardless of which of those approaches we use, we certainly have to do more research on how to integrate these algorithms, in
order to derive value in an equitable and fair way for all of these patients.
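One concrete way to surface the subgroup performance gaps Dr. Kovacheva describes is to stratify a model's accuracy by patient cohort. Here is a minimal illustrative sketch; the cohorts, labels, and predictions are toy data, not from any clinical system:

```python
# Sketch: detect subgroup performance gaps that may signal bias.
from collections import defaultdict

def accuracy_by_group(groups, y_true, y_pred):
    """Return per-group accuracy so under-served cohorts stand out."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for g, t, p in zip(groups, y_true, y_pred):
        total[g] += 1
        correct[g] += int(t == p)
    return {g: correct[g] / total[g] for g in total}

# Toy example: the model is perfect on cohort A but misses both cohort-B cases,
# the kind of disparity that missing or non-random data can produce.
groups = ["A", "A", "A", "B", "B"]
y_true = [1, 0, 1, 1, 0]
y_pred = [1, 0, 1, 0, 1]
print(accuracy_by_group(groups, y_true, y_pred))  # {'A': 1.0, 'B': 0.0}
```

A gap like this would prompt exactly the responses she mentions: withholding the algorithm from that cohort, collecting prospective data, or augmenting with synthetic data.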
DR. STRIKER:
Well, it's certainly a
significant issue, because it goes to the heart of what generates the
AI models. On our next episode of covering artificial intelligence, we'll go
into that a little deeper. But given the time constraints, I do want to ask you
about the algorithms in general as they become a greater part of care. How
important is it for us to understand how they're integrated and how to use
them? And maybe, if you don't mind, give our listeners a few tips, if you have
any, on how to stay on top of all this evolution.
DR. KOVACHEVA:
Yeah, I agree with you.
There have been a lot of publications in the field, and I think as
anesthesiologists, it is important for us to stay up to date on this new
technology that is arriving in our operating rooms. Um, maybe the best approach
for each of us is to follow the literature and to understand what are the advantages and the limitations of this technology.
And I think that we can consider AI just as any other technology that comes
into our daily practice. Um, it is new. It requires more research, and it
certainly needs to be used with caution. But at the same time, if it is used
well, it can provide significant benefits for our patients. So
I think that just as we are considering a new device or a new medication, we
have to think about it as what are the advantages, what are the indications
when we should use it and also when we should not use it. And knowing those
limitations would allow us to again personalize the care of our patients so
that the groups for which it will be beneficial get the algorithms that are
most appropriate for their care. And this allows us to make the best decisions
for them. And then hopefully, as anesthesiologists, we can participate in
different quality improvement initiatives or research, or sometimes just
share with colleagues our experience so that we can harness this technology in
such a way that it will be beneficial both for us and our patients, and lead to
better safety and better outcomes.
DR. STRIKER:
Well, Dr. Kovacheva, thank you very much for
all the time and your insight and expertise. We'll look forward to delving into
this a little more.
DR. KOVACHEVA:
Thank you so much.
DR. STRIKER:
Next, to learn about AI
and remote monitoring, we turn to Dr. Kent Berg. Dr. Berg, can you give us a
quick primer on AI and remote patient monitoring? For instance, how is it being
used in and beyond the hospital and where is it making the greatest impact?
DR. KENT BERG:
Thanks, Dr. Striker, for
having me here today. Um, first let me offer a brief definition. Remote patient
monitoring, first of all, is a type of telehealth in
which health care providers monitor patients outside the traditional care
setting using digital or internet connected medical devices such as weight
scales, blood pressure monitors, pulse oximeters, blood glucose monitors, and
wearables. These devices then electronically transmit that data to health care
surveillance applications or directly to providers, and these workflows can
then generate automated feedback or alerts for out-of-range values. So clearly,
RPM has undergone substantial evolution in the last 10 to 15 years, but most
notably, telemedicine, wearables, and RPM technologies became significantly more popular during the worldwide COVID-19
pandemic between 2020 and 2022. And now machine learning and AI algorithms
are being deployed as part of these RPM technologies to enhance optimization
and surveillance efforts before surgery, during a patient's hospitalization,
and even after they return to their own home. And you know, I'll add that there
are a growing number of articles on machine learning and AI in anesthesiology,
but one of the best articles out there is by Max Feinstein called Remote
Monitoring and Artificial Intelligence: Outlook for 2050. It was published in A&A
in 2023, and a key point in this article is that future iterations of systems
based on AI will not replace the anesthesiologist, but rather, free them to
focus on more cognitively intense tasks.
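The automated out-of-range alerting Dr. Berg describes can be pictured as a simple rule check over transmitted readings. This is only a toy sketch; the vitals and thresholds below are illustrative, while real RPM systems use clinically validated, often patient-specific limits:

```python
# Toy sketch of RPM-style out-of-range alerting. Thresholds are illustrative only,
# not clinical guidance.
RANGES = {
    "spo2": (92, 100),        # pulse oximeter, percent
    "systolic_bp": (90, 160), # blood pressure monitor, mmHg
}

def check_reading(vital, value):
    """Return an alert string if the value falls outside its range, else None."""
    low, high = RANGES[vital]
    if value < low or value > high:
        return f"ALERT: {vital}={value} outside [{low}, {high}]"
    return None

print(check_reading("spo2", 89))          # triggers an alert
print(check_reading("systolic_bp", 120))  # None: within range, no alert
```

In the workflow described above, the AI layer sits on top of rules like this, using trends across readings to prioritize alerts and cut false alarms, with the clinician still deciding whether to intervene.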
DR. STRIKER:
Well, interesting. We
all tend to think of physicians being able to interpret data and draw
conclusions about a patient. Give an example of how it would free up a
clinician to focus on a more cognitive task.
DR. BERG:
Sure. For example, an AI
based monitoring algorithm might alert an anesthesiologist to a predicted cardiac
event, and the anesthesia provider will put this alert in the context of the
patient, of the space they're in, of the stage of the surgery they're in, and
then that anesthesiologist may decide to intervene or not. The AI enhanced
algorithm is a tool, but it's not the decision-making performer, if you will.
And you know, another example is that the same could be applied to a
pre-hospital or post-discharge setting for more complex patients. In the O.R.,
for example, there's this device called the Edwards Hypotension Prediction
Index, which is already available today, and it is used to predict when a
patient is likely to have a significantly low blood pressure, even before it
happens in the operating room.
DR. STRIKER:
Well, let's turn to the
marketing aspect. Talk a little bit about what the market overview looks like
when it comes to these specific tools.
DR. BERG:
Yeah. So
this is a really exciting piece of this conversation. Frankly, you know, with the onset of Covid and the aftermath of it, the global
remote patient monitoring market is expanding just in a crazy fashion. According
to some 2023 research, the global RPM market was valued at $4.4 billion at the
end of 2022. That's almost three years ago now. And
assuming a compound annual growth rate of 18.5%, which was, you know, reported
in this study, uh, this group expects the worldwide RPM market to be worth $16.9
billion in 2030. And, you know, an important piece of this also is that they
predicted that more than 70 to 80 million US citizens will be using some form
of remote monitoring by the end of 2025, which is now this year. Right? And
considering that the management of chronic diseases represents 90%, 90%, of
US healthcare costs, remote patient monitoring offers substantial potential to
improve lives by identifying early warning signs and tracking adherence to
patient-specific medical plans.
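As a rough arithmetic check on the market figures quoted above, compounding $4.4 billion at 18.5% annually from 2022 to 2030 lands close to the cited roughly $16.9 billion; the small difference presumably comes from rounding and the exact start date the study assumed:

```python
# Compound annual growth: value_n = value_0 * (1 + rate) ** years
start_value = 4.4   # billions USD, end of 2022 (figure quoted in the episode)
cagr = 0.185        # 18.5% compound annual growth rate
years = 2030 - 2022

projected = start_value * (1 + cagr) ** years
print(round(projected, 1))  # ~17.1, in line with the quoted ~$16.9B estimate
```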
DR. STRIKER:
Well, obviously there
are concerns with any new technology. Let's go through a few of them when it
comes to remote patient monitoring, if you don't mind.
DR. BERG:
Sure. And I agree with
you. Although the promise of AI-enhanced RPM is tremendous, you know, there are
several issues that need to be understood or
discussed. Specific to the practice of anesthesiology and our care team
model, something that's been in the headlines a lot lately, AI-enhanced
remote patient monitoring systems in the perioperative setting may enable
different types of care team models. One is an example of an anesthesia control
tower model of supervision, where an anesthesia provider does not need to be
physically present in the room for minor procedures, but may still have
supervisory capabilities from a remote location like a control tower. And under
today's billing regulations, anesthesia providers are not able to bill for
remote anesthesia monitoring, so those laws would have to be changed in a major
way in order to be accepted in the US healthcare
market.
Now, from a legal
perspective, that's a whole nother can of worms. If
an AI algorithm makes a mistake, like it does not predict an adverse event, or
if it falsely predicts an event which ended in an unnecessary intervention, is
the provider or the AI remote patient monitoring company liable for this error?
Right? This is an example where the technology has already outpaced the
litigation laws in the field of medicine. And regarding the technology, what if
the patient misuses the RPM device and has a bad outcome with or without AI?
What are the standards for educating the patient and maintenance of the monitor
and the associated hardware? Let's say that there is an alarm that's triggered
by their data. What is the timeframe in which a provider should reach out to
those patients?
And I'll just briefly
touch on the ethical concerns. You know, should the government step in and make
this type of future monitoring a publicly available standard? If so, who would
pay for this? And the consent for receiving medical care by a provider or in a
healthcare system that uses AI enhanced remote patient monitoring, what exactly
does that consent cover? Does it cover the
data that would be shared with the providers? Is that data sellable to an
insurance company or other research firms? Who owns the data once it's
captured in the monitoring software or database? And then, you know, in order to scale this type of technology, the integration
of AI enhanced remote monitors calls for a strategic approach, one that's
mindful of patient specific needs while ensuring that that these new
technologies are in harmony with existing IT frameworks.
DR. STRIKER:
And finally, before I
let you go, what are the promises of AI in remote patient monitoring?
DR. BERG:
Well, I think there are
several that are very exciting at this time. I think
AI based RPM technologies may allow for better preparation and risk
stratification of patients in the preoperative period. I think it has
implications to potentially change the care team model. Medical early warning
systems, or MEWS, will likely enhance our care in the PACU and the ICU
settings. You know, I think that it will allow us to predict when patients are
going to decline in those areas and also reduce the
false alarms of those predictive algorithms. So, for example, imagine sitting
at your PACU control desk or performing ICU rounds with a head mounted display.
If you can get over the weight of the headset, or looking a little
odd walking around with one, this AI-powered augmented reality
headset could show the patient electronic health record data, vital sign
trends, predicted risk scores, or even computer assisted interpretation of
radiology studies while you're making decisions about patient care in real time.
I think that's just extremely awesome. But it also has the potential for
reducing hospital bouncebacks due to complications
that occur in the patient's own home after surgery. An interesting fact that I,
that I uncovered recently was that, you know, despite the implementation of
enhanced recovery after surgery protocols or the perioperative surgical home,
some sources quote that as many as 5 to 10% of patients worldwide actually die
within the first 30 days after surgery. As ambulatory surgery becomes more and
more common for increasingly complex patients,
the demand for high quality and safe monitoring at home is expected to increase
dramatically, and RPM really paves the way for that to happen. Enhancement of
AI algorithms in RPM systems has tremendous potential to improve the care of
patients in their home environments, and it's an exciting time to be in
healthcare. But it's also humbling to realize the responsibility these new technologies
bring with them.
DR. STRIKER:
Well, Dr. Berg, thanks
very much for your time and sharing your insights. We'll have to check out that article and
also keep a close eye on the technology as
it proceeds.
DR. BERG:
Sounds good. Thank you,
Dr. Striker.
DR. STRIKER:
Finally, Dr. Vikas O’Reilly-Shah
shared his expertise about AI as it relates to academic and subspecialty
applications. Dr. O'Reilly-Shah, talk a little bit
about the academic applications of AI. Just generally, how is it being used in
academic medicine?
DR. VIKAS O’REILLY-SHAH:
Sure. I think there are a lot of really exciting opportunities for
using these kinds of tools in academic medicine specifically. I know you've
talked a lot about other use cases as well. Um, but for the academicians in the
crowd, we're using this to summarize the literature, potentially to write manuscripts
and grants, to peer review manuscripts and abstracts in appropriate use cases, um,
to develop study designs, even to write code and perform statistical analyses,
as well as for comprehensive, efficient communication, writing
letters, things like that. So there's a wide range of use cases
for these kinds of tools.
DR. STRIKER:
Now to use these tools
responsibly, do you think that we all need a special kind of education, or is that
something that needs some more examination?
DR. O’REILLY-SHAH:
I definitely
think that this is another area where the development and deployment of
these tools has really outpaced the training and the awareness of the
ethical and responsible uses that should attend bringing them into
our own arenas. So I think I would reemphasize what
others have probably said in your podcast series, which is that regardless of
whether it's a clinical decision or whether you're writing a research paper, AI
itself just needs to be seen as a tool. It's really the combination of the
subject matter plus the AI that's fruitful, and using it as a standalone tool,
or using it to replace your own judgment carries some risks. For example, there
is that lawyer who submitted a brief written by an AI with a bunch of made-up
citations and really, you know, had their license
put into jeopardy because of that. And so the risk of
hallucinations does very seriously remain high, and it's really important
to fact check and verify what's coming out of these tools. I'd also mention that there are
policies about the appropriateness of the use of these tools by organizations
that are being rapidly developed, and that should always be verified prior to
any particular use. One thing that I might specifically point out is if you're
going to use it in the context of a peer review or an abstract review, you
would want to verify that the organization is okay with that. If you're using a
publicly available AI tool, those organizations hang on to that data. And
because they do hang on to that data, you might be taking something that's
confidential and giving it away, essentially. So that's something to check. And
then in terms of manuscript writing and grant writing, you really want to make
sure that you disclose the use of these tools, because there are a lot of
organizations that want to make sure that they understand that the language
that's coming out of these tools is disclosed, and it's understood that people
are using these tools and that they're responsible for the content that they're
putting into their abstracts and manuscripts and grants.
DR. STRIKER:
Yeah, I anticipate this
is going to be one of the most watched aspects of AI when it comes to academic
medicine, just given what you have outlined.
It'll definitely be an interesting facet as we move
forward, but certainly something that we all need to keep a close eye on
because of how powerful this technology can be.
DR. O’REILLY-SHAH:
Absolutely.
DR. STRIKER:
Well, let's turn to the
subspecialties. Let's talk about the difference in applications between the
anesthesiology subspecialties like cardiac versus pediatric, for example, or
even how does it apply to, let's say, the performance of regional anesthesia
versus general anesthesia?
DR. O’REILLY-SHAH:
Yeah, absolutely. I
think that the use cases are going to vary, of course, depending on the kinds
of things that you're doing day to day in your own clinical practice. So a regional anesthesiologist might benefit from the use of
these tools with real time identification of structures on their ultrasound
images, needle guidance, optimization of the placement of the block, the
visualization of, say, the spread of the local anesthetic when it's being
injected, things like that. Whereas a cardiac anesthesiologist may use the
processing capabilities of AI in very different ways. For example, immediate
identification and calculation of an ejection fraction, or assisting
cardiac anesthesiologists in obtaining the best
view in order to identify the pathology they're
looking for. A pediatric anesthesiologist might make use of these tools
for things like risk prediction for the patient who presents with an
upper respiratory infection, or for summarizing the record of a patient who presents after,
say, you know, hundreds of surgeries; we all have patients
coming in after voluminous episodes of care, and there are things that
obviously can be missed in a patient who has a very, very substantial
surgical history. So we can use the
summarization aspects of these tools to great effect to help us
identify all of the key and salient elements of a past medical history. And I
think that's something that any subspecialty or general
practitioner of anesthesia can benefit from.
DR. STRIKER:
Well, it's certainly
exciting times when you think about the possibilities of what this technology
is capable of. Let's broaden it out just a little bit. What do you see is on
the horizon with artificial intelligence? Where do you see all this going? Or is
there something specific you'd like to see accomplished in the not-too-distant
future?
DR. O’REILLY-SHAH:
Yeah, absolutely. I
mean, obviously there is the hype and the hope, and then there's the reality.
And I think that we haven't quite crossed that chasm yet. But there are
areas where I'm really hopeful. One is that the Institute of Medicine report
Crossing the Quality Chasm identifies that it takes 17 years for a piece of
evidence to become applied in clinical practice. And I really think that AI has
the promise of helping to shorten that gap: one, by helping those of us who are
looking at the evidence to identify quickly what pieces of evidence are most
salient for the specific patient in front of us right now, and, as well, by
helping to deploy the tools in that evidence base in the context of just-in-time,
real-time information delivery at the bedside. I also think that these
tools will help us connect the pieces of clinical information, recognizing
certain constellations of signs and
patterns, and help to suggest interventions. I think that AI tools might help us
to more rapidly identify when patients are having increased risks in the
moment, at a specific moment in time, as well as to translate the voluminous
data that we're generating in the context of clinical care into quality and
research efforts that can then spur the next generation of improvements to
patient care.
DR. STRIKER:
Well, exciting times
indeed. Dr. O'Reilly-Shah, thank you so much for your time and your insight.
And it'll be interesting to watch as this unfolds.
DR. O’REILLY-SHAH:
Thank you, Doctor.
Appreciate it.
DR. STRIKER:
Well, thanks to all of our listeners for joining us for this special episode
of Central Line. This is a large topic, and as this technology unfolds, as we
see it more and more in our practices, as issues arise, we will certainly
continue to cover it on the podcast and delve into more specific issues as they
come up. So thanks again for listening and please tune
in again next time.
(SOUNDBITE OF MUSIC)
VOICE OVER:
Get to know your new
knowledge Assistant: Beacon Bot. Just for ASA members with a focus on
anesthesiology content and information not widely or publicly available. Ask a
question today at asahq.org/beaconbot.
Subscribe to Central
Line today wherever you get your podcasts or visit asa.org/podcasts for more.