Will FDA’s AI Tool ‘Elsa’ Improve Medical Care?

Last week, the Food and Drug Administration (FDA) announced it had launched Elsa, a generative artificial intelligence tool. Elsa is designed to help employees work faster and more efficiently. It assists with reading, writing, and summarizing tasks. The FDA says the new tool will help some employees, including scientific reviewers and investigators, reduce the time needed to complete certain tasks.

FDA chief Martin Makary said, “The agency is using Elsa to expedite clinical protocol reviews and reduce the overall time to complete scientific reviews. One scientific reviewer told me what took him two to three days now takes six minutes.”

The Silver Spring-based agency says Elsa will offer a secure platform for FDA employees to access internal documents while ensuring all information remains within the agency.

“AI is no longer a distant promise but a dynamic force enhancing and optimizing the performance and potential of every employee,” said FDA Chief AI Officer Jeremy Walsh.

“As we learn how employees are using the tool, our development team will be able to add capabilities and grow with the needs of employees and the agency,” he added.

Concerns About ‘Elsa’

However, critics say the rollout of the AI tool raises some concerns. Agency employees who wanted to remain anonymous told Stat News that the deployment of Elsa comes as the FDA has downsized because of Department of Government Efficiency (DOGE) cuts made over the past several months. Others say Elsa can only summarize texts at this point.

Some medical groups are taking a wait-and-see attitude. MCM reached out to the Association of American Medical Colleges and talked with Heather Pierce, the group’s Senior Director for Science Policy and Regulatory Counsel.

She said, “We’re watching this technology roll out carefully, but it’s too early for us to have a sense yet of how the tool will be used, what its limitations might be, or the value that it could have for research or clinical care.”

AI’s Ability to Help Doctors

Dr. Neil Roy is the Chief Medical Officer at Adventist HealthCare Shady Grove Medical Center in Rockville. He sees the FDA’s rollout of Elsa as a good idea.

“The big thing to keep in mind is the major difference between AI and the way things were 20 years ago is that computer power has gone up significantly. It’s not more complicated than what we could do five or 10 years ago, it’s just way faster,” Roy said.

He trusts AI to accomplish certain tasks. “I would say AI is pretty reliable compared to that of a person doing a task like reviewing data. I would venture to say that the jump from Google to AI is no more pronounced than the jump from the Encyclopedia Britannica to Google,” Roy added.

AI’s Downside

However, he does have some concerns about the technology.

“I think the downsides we can see is we just need to be careful about how we make sure there’s no bias, and there are no inherent problems in the software. Before, if you have one clinician that makes a mistake, that individual is affecting one patient. But if you have an algorithm that’s reading one million X-rays that makes a mistake, you could have a million mistakes being made,” Roy explained.

He said the technology needs to be equitable, accurate, consistent, and verifiable.

Current Use in Medicine

Artificial intelligence is already in use in the medical field. “AI influences every aspect of the patient experience, from walking in the door of a hospital to discharge,” Roy said.

He said doctors use a device called the AI scribe. Roy explained when a patient and a doctor talk, the device listens to everything said, and then transcribes the information into a physician’s note. The AI tool not only transcribes what is discussed, but is able to filter out what is relevant and can suggest additional items to document or diagnose. He said those notes become part of the medical record. Prior to AI, doctors would have to spend hours writing out their notes about their patients.

Imaging is another way clinicians use AI. Roy said AI may make an interpretation of an X-ray that the radiologist then reviews. AI also comes into play when a patient is discharged from a hospital. The software can recommend places for outpatient care if needed. It can also help list medications patients will take when they get home.

While AI currently has some uses, Roy said, clinicians do not rely on AI to make decisions.

“I was just at a conference where we looked at institutions nationwide. Most are not at a point where AI is making final decisions. If AI interprets an EKG or X-ray and indicates this is what may be going on, my eyes are still the source of truth. That is the path clinicians are taking,” Roy said.

AI’s Future Potential

However, in the future, “AI has the potential to provide clinicians with more access to more information. The access to care may widen. The cost of care may drop as AI takes over services that typically require significant manpower,” Roy explained.

He expects doctors also will have to adjust as the technology improves.

“I think there’s always the concern that modern technology will change the skill set of our clinicians. I think with AI, a physician’s skill set will possibly pivot. A clinician will be able to use AI as another tool to take care of patients. Skills that will become more important are how do you talk to a patient? How are you connecting to patients on an emotional level? Then to identify items that AI cannot pick up,” Roy told MCM.

He said as use of AI grows, the medical community must embrace the technology judiciously with each step it takes.

“At Adventist HealthCare we’ve got a very aggressive AI Governance Committee that looks at every step of AI. If we adopt something, we make sure to do it within certain parameters. That way we can aggressively monitor and supervise AI,” Roy said.

He said it is important when doctors use AI, that the technology is actually an improvement, and not just used for the sake of it.
