In the push for AI in healthcare, avoid repeating EHR implementation mistakes

Do you know this old saying?

Something terrible happens when someone doesn’t reach their potential: nothing.

Comments by AMA immediate past president Jesse M. Ehrenfeld, MD, MPH, suggest that this idea could also apply to augmented intelligence (AI), often called artificial intelligence, in health care.

“When I travel around the country, there is a lot of uncertainty about how we can do things right,” Dr. Ehrenfeld said at the inaugural RAISE Health Symposium in Palo Alto, California. “As a practicing physician — I was in the operating room yesterday and saw 11 patients — I see a lot of room for error.”

Dr. Ehrenfeld was interviewed at the event by Fátima Rodríguez, MD, associate professor of cardiovascular medicine at Stanford University School of Medicine. RAISE Health, which stands for “Responsible AI for Safe and Equitable Health,” is a joint initiative between Stanford Medicine and the Stanford Institute for Human-Centered Artificial Intelligence that seeks to guide “the responsible use of AI in biomedical research, education and patient care.”

“There is so much excitement, so much enthusiasm, and we are at the peak of the expectation curve,” said Dr. Ehrenfeld, an anesthesiologist and health informaticist who served as AMA president until June. “But if we don’t get things right on the regulatory side, on liability, and on a shared awareness of how ethical principles can elevate equity as a fundamental way to make things better, not worse, there is the potential for this to get terribly out of control.”

Dr. Rodríguez asked about the recent AMA survey on physicians’ use of AI in healthcare, and Dr. Ehrenfeld noted that the results showed 41% of American physicians “are equally enthusiastic and terrified by AI.”

The survey also found that 38% of doctors use AI in their practices, but mainly for “unsexy administrative office stuff,” not clinical applications.

“It’s about supply chain management, it’s about scheduling, it’s about fighting for prior authorization approvals with all the third-party payers,” Dr. Ehrenfeld said. “Those systems are already here, they’re being adopted, and they’re certainly helpful to some extent.”

From AI implementation to EHR adoption and usability, the AMA is making technology work for physicians, ensuring it is an asset, not a burden.

Dr. Ehrenfeld said developers of AI tools and other digital health applications need to pay attention to the mistakes made during the implementation of electronic health records and avoid repeating them. Until recently, when prior authorization requirements overtook them, EHRs were the No. 1 source of dissatisfaction among physicians, he said.

“A lot of technologies promised to improve our lives, make things more efficient, give us more capabilities, but they didn’t actually turn out that way,” he said, adding that this still holds true today.

Dr. Ehrenfeld described how he recently watched a surgical resident help an 81-year-old patient sit up and then lean over his exam table to fill out an electronic surgical consent form on a “mobile computer on wheels.” It would have been much easier to simply use a keyboard to sign, he noted.

“It’s just another very simple and obvious example that the technology, the implementation, didn’t really match the workflow and it just caused frustration,” he said. “Obviously, we can’t allow that to happen over and over again with these AI deployments.”

Clumsy launches of EHRs and other healthcare technologies often have one thing in common.

“The fundamental mistake was that we did not have the physician’s voice present throughout the entire EHR design, development and implementation cycle,” Dr. Ehrenfeld said.

“I see this in entrepreneurial companies where there is a physician who might be involved, but it’s an afterthought; they’re not really driving the development of the solution,” he added. “That’s a problem. We can’t let that continue to happen.”

For successful adoption, digital tools must answer four basic questions that physicians have:

  • Does it work?
  • Will it work in my practice?
  • Will I get paid enough to cover the cost of the investment?
  • If something goes wrong, will I be held responsible?

Two recent developments have created cause for concern regarding the last question.

The U.S. Department of Health and Human Services’ Office for Civil Rights issued a rule regarding the nondiscrimination provision of Section 1557 of the Affordable Care Act (PDF) that could impose penalties on physicians if they rely on algorithm-based tools that result in discriminatory harm.

The Federation of State Medical Boards also issued a set of principles holding physicians liable for harm caused by algorithm-based tools.

“The AMA believes that responsibility should fall on those who are best positioned to know the potential risks of the AI system and to mitigate potential harm, such as developers or those requiring physicians to use the AI tool,” Dr. Ehrenfeld said in a social media post. “Transparency plays an important role here, and the AMA believes that physicians should not be held liable when information about the quality or safety of the AI system is unknown or withheld.”

Previously, the AMA published its own advocacy principles (PDF) addressing the development, implementation, and use of AI in healthcare.

Dr. Ehrenfeld addressed a similar topic during his keynote address at the Consumer Technology Association’s recent HealthFuture Summit in Chicago. There, he talked about healthcare workforce shortages, patient access issues, and how AI and other technologies can help.

He said the healthcare industry needs to “rethink work” and to do so, it must “lean on technology.” That said, technology should not replace doctors, nurses or pharmacists.

Concerns about the doctor-patient relationship and patient privacy need to be addressed, and doctors need clear information about what algorithms do and how they do it.

“We need to demand transparency,” Dr. Ehrenfeld said, adding that if you’re in an operating room where an AI algorithm is controlling a patient’s ventilator, you need to know “how do I turn off the device?”

For more information on Dr. Ehrenfeld, watch his appearance on a recent episode of “AMA Update.”

Learn from the AMA about the emerging landscape of augmented intelligence in healthcare (PDF).
