
Have you ever opened a medical test result and found it utterly incomprehensible?
You are not alone.
While patient care is central to the U.S. health care system, patients can sometimes find themselves in the dark when it comes to understanding vital parts of their care, like radiology results.
Enter AI.
Researchers are conducting two studies that examine how artificial intelligence (AI) might help patients better understand their providers’ recommendations and, in turn, the medical care they’re receiving.
Both studies are evaluating whether patient-friendly language can improve care without sacrificing accuracy. And though unrelated, both rely on the too-often-excluded voice of patients for input.
One of those voices is Liz Salmi, communications & patient initiatives director for OpenNotes at Beth Israel Deaconess Medical Center in Boston. OpenNotes studies the effect that transparent communication has on doctors, patients and care partners.
Salmi was just 29 and living in San Francisco when she received a tough diagnosis: a malignant brain tumor. Staring at the technical language of the radiology report was initially discouraging.
“Most people don’t speak radiology,” Salmi said. “When I have received imaging results on my own, the technical language can feel impenetrable. It can be confusing, scary, or misleading.”
Even routine terms can be unintentionally misleading, Salmi said.
“The first time I read the phrase ‘unremarkable’ in an MRI report, I took offense, only to find out later it means ‘normal.’ That kind of reaction is avoidable.”
The case for AI translations: When the law requires transparency, not understanding
Indeed, the bad news Salmi received about her own health spurred her to find ways to clear the fog of fear and anxiety for other patients seeking medical answers. Among other patient-centered projects, she is now involved in a collaboration with UCHealth that focuses on a knotty provider-patient communication challenge spawned by legislation written nearly a decade ago.
Congress passed the sprawling 21st Century Cures Act near the end of 2016. In addition to many measures that aimed to spur medical research and innovation, there was a strong commitment to ensuring that patients would have access to their electronic health records in a “format that is easy to understand, secure, and may be updated automatically.”
Not quite five years later, federal regulators strengthened that access with the Information Blocking Final Rule. Simply put, the rule requires health care providers to respond promptly when patients request their medical information, including clinical notes, test results, medical histories and more, from their electronic health record. Blocking or unnecessarily delaying that access is now against federal law.
That might seem straightforward and positive for patients, but it illuminates the challenge that the two studies involving UCHealth address. Specifically, patients now have immediate electronic access to their radiology reports, often before the medical providers who ordered them have reviewed the results. While the reports contain rich information about patients’ health, the clinical language may leave patients with more questions, and more anxiety, than they had before.
How complex health information can lead to patient anxiety
Salmi recalls a formative incident after she received her brain tumor diagnosis. She faxed a copy of her pathology report to a family member before she had a chance to speak with her neurosurgeon.
“I had no medical training and no context, and the language in the report sent us spiraling with worry,” she said. “That experience – what we sometimes call ‘scanxiety’ – highlighted both the power and pitfalls of transparency. It taught me that timely access to information is critical, but so is clarity and context.”
Clinicians have a responsibility to help patients navigate this newfound access to their medical records, said Dr. CT Lin, UCHealth’s chief medical informatics officer and a collaborator with Salmi and the OpenNotes team.

“We all have to adapt to the new landscape of patients receiving results immediately, which is positive for the vast majority of patients,” Lin said.
Lin and Salmi were among the co-authors of a 2023 study through the OpenNotes Learning Collaborative that examined how patients felt about immediately gaining access to their test results. Of the more than 8,000 patients surveyed, 96% preferred the immediate delivery – even before discussing the results with their provider.
However, “concerns remain about unintended effects of releasing abnormal test results to patients,” the authors cautioned.
“Viewing these results before speaking with the doctor increases worry in 8% of patients,” Lin said. “At present, we do not do a good job helping the 8% with their worry.”
Among the patients’ worries, Lin explained, are what the words in the reports mean, when they will be able to discuss the results with their doctors, and what the findings imply for their health.
Using AI to translate radiology reports for patients
Closing that gap is at the heart of Salmi’s goal for the new study, which will recruit cancer patients treated by oncologists at the University of Colorado Cancer Center. Working alongside radiologists and oncologists, the patients will review radiology reports summarized by AI in accurate, patient-friendly language, said UCHealth neuroradiologist Dr. Justin Honce, who is collaborating with Lin, Salmi and the OpenNotes team on the project.

“We still want radiologists to assess the reports for accuracy, but we also want oncologists to assess them from their perspective,” Honce said.
He noted that a radiologist might find a summary satisfactory, while an oncologist might see nuance that needs to be addressed before releasing the report to patients. For example, “progressive disease” – cancer that is growing and spreading – and “post-treatment changes,” such as damage to healthy tissue caused by radiation treatments, can look very similar on imaging, Honce said.
“The oncologist can bring to bear the patient’s overall treatment history and clinical context in interpreting the report, helping to clarify uncertainty,” he said.
Honce lends his expertise after recently completing a pilot study that tested using the AI application ChatGPT to translate 30 brain, spine, and head and neck CT and MRI reports into patient-friendly language. The reports came from UCHealth patients and were de-identified before use.
Patient volunteers from the UCHealth Patient and Family-Centered Care Advisory Council reviewed the translated reports for their understandability, and radiologists reviewed these summaries for accuracy.
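The pilot’s actual prompts and code are not described in the article, but the translation step of such a pipeline is conceptually simple. Below is a minimal, hypothetical sketch of asking a large language model for a plain-language rewrite of a de-identified report; the prompt wording, model string and settings are illustrative assumptions, not the study’s actual setup:

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Illustrative prompt; the pilot's real instructions were not published.
SYSTEM_PROMPT = (
    "Rewrite the radiology report below in plain, patient-friendly language "
    "at roughly an eighth-grade reading level. Do not add findings that are "
    "not in the report, and do not omit or downplay significant ones."
)

def summarize_report(report_text: str) -> str:
    """Ask the model for a patient-friendly summary of one de-identified report."""
    response = client.chat.completions.create(
        model="gpt-4",    # assumed model string; the article says "ChatGPT 4"
        temperature=0.2,  # lower temperature favors consistent, conservative wording
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": report_text},
        ],
    )
    return response.choices[0].message.content
```

In any workflow like this, the output would still go to human reviewers, as it did in the pilot, before reaching patients.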
Honce’s pilot project was completely separate from the new study involving cancer patients, but it demonstrates both the promise and the uncertainty of using AI to translate complex, nuanced radiology reports and, by extension, other medical information that patients can now access.
Tempered optimism about AI as a translator
One goal of Honce’s study was to use AI to improve the readability of the reports, an improvement that several standard readability-scoring tools confirmed, he said.
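The article doesn’t name the tools, but measures like the Flesch-Kincaid grade level are standard for this kind of validation and can be estimated from sentence, word and syllable counts. A minimal sketch, using invented example sentences rather than text from the study:

```python
import re

def count_syllables(word: str) -> int:
    """Rough syllable estimate: count runs of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    """Grade level = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

# Invented examples, not sentences from the pilot study:
report = "Unremarkable postoperative changes without evidence of recurrent neoplasm."
summary = "The scan looks normal after your surgery. There is no sign the tumor has come back."
print(flesch_kincaid_grade(report))   # scores far above a college reading level
print(flesch_kincaid_grade(summary))  # scores at roughly a grade-school level
```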
But several questions followed. Did patients understand the content of the reports? Did they want to see only the AI-generated summary, or the full radiology report as well? Most importantly, were the summaries accurate? Did the AI translations miss or downplay key information, or overemphasize relatively insignificant findings?
Neuroradiologists reviewed the original reports and summaries to answer these types of questions, Honce said. In about 85% of the cases, the specialists found that the summaries encapsulated the originals well.
“There were cases of underemphasis and overemphasis,” Honce acknowledged.
For example, a summary described the vertebral artery, which supplies blood to the brain and spine, as “a smaller artery in your neck” – clearly downplaying its role.
Conversely, some summaries overemphasized relatively minor findings such as “mild sinus disease” and recommended unneeded follow-up care.
“If we want to eventually deploy something like this clinically, we’ll need to work on fixing those kinds of things,” Honce said.
A patient pleaser, but plenty of questions remain
“Patients uniformly loved” the summaries, Honce said.
They felt they grasped the main points well and quickly, and many of them also expressed that they would want to see both the summary and the full report if they were reviewing their own scans, he added.
Honce said he and his team have submitted a paper on the pilot study’s findings to the Journal of Medical Internet Research, where it is under review for publication. Broadly, he believes AI has “a ton of potential” for helping patients better understand their medical care, but questions remain to be answered.
“AI summarization of radiology reports is not yet perfect,” he said.
For example, in his study, while the majority of patients saw an increase in overall understanding of the report, there was a relatively small but significant group of patients whose understanding of the radiology reports decreased after they read the AI-generated summaries.
“That’s clearly an important thing to think about because one of the main questions is safety,” Honce said. “We want to at least maintain, if not increase, their understanding.”
Rapidly advancing AI technology opens new potential opportunities in health care
Meanwhile, the project with cancer patients illustrates that explorations of AI in health care are only beginning. Honce noted that his pilot project used ChatGPT 4. Since then, many new models, such as o3 and o4-mini, have emerged. These are “reasoning” models, capable of breaking complex clinical and scientific questions into multiple problem-solving steps and evaluating their own work before returning an answer, Honce explained.
In theory, the reasoning models could produce more accurate and helpful summaries of radiology reports, but Honce stressed that he has not yet evaluated them to see how well they work. “One of the main questions we have is what kind of models should we be using for this process,” Honce said.
Nonetheless, Honce sees longer-term possibilities for using AI to help patients participate in their care. In the future, for example, patients might generate a summary of a radiology report simply by clicking a button on the screen. A further step would be letting patients interact with the AI to ask for more information on defined subjects, such as definitions of key terms.
Ensuring the accuracy of AI translations is essential and time-consuming
The biggest hindrance to all of these advances, however, is that at this point, radiologists would have to check AI-generated summaries for accuracy.
“That’s very time-consuming,” Honce said.
For that reason, he doesn’t expect doctors and health systems to start using AI translations for complex radiology reports anytime soon.
But he emphasized that exploring new options is “the right thing to do” for patients, who, too frequently, are left out of discussions about their own care.
“I think the biggest payoff is empowering patients,” Honce said. “There is good research that shows that the more information patients have, the more likely they are to have good outcomes because they can advocate for themselves.”
AI’s promise in giving patients a more prominent voice
For her part, Salmi relishes new opportunities, like the work with Honce, Lin and her OpenNotes colleagues, to amplify the voices of patients in health care.
“What excites me most is the opportunity to co-create AI tools with patients, not just for them,” she said. “I’d like to see AI-generated summaries that reflect what real patients need: clear explanations, relevant context and supportive language – not just what researchers think patients need. If done right, these tools can empower patients, improve communication and reduce unnecessary distress. We have a chance to model what ethical, human-centered AI can look like in oncology care.”
Ultimately, Salmi wants us to understand that the stark words of a radiology report need not be frightening. They have the power to reassure people during their most vulnerable times.
“If we can translate radiology findings into clear, compassionate language, we can spare patients unnecessary distress and help them prepare for conversations with their clinicians,” she said.