Can a Low-Tech Tool Improve Healthcare Delivery?

We are all grateful for high-tech innovations in medical science. In a recent blog post I discussed the benefits of artificial intelligence in helping doctors make accurate diagnoses and develop more effective treatment plans. Yet despite these technological advances, patients' understanding of medical information remains a challenge.
In describing the clinician’s role in healthcare communication, the authors of a JAMA Internal Medicine opinion piece state, “Clinicians frequently use undefined or unnecessary jargon, such as saying hypertension when saying high blood pressure would suffice. They also contribute to information overload by giving too much information too fast and at the wrong time, and they do not routinely elicit patients’ clarifying questions effectively; they also do not routinely confirm patients’ understanding with a teach-back technique.” (Coleman C, et al. “Health Literacy and Systemic Racism—Using Clear Communication to Reduce Health Care Inequities.” JAMA Internal Medicine, August 2023.)
The authors identify what I would call a “low-tech” issue in healthcare delivery: the challenge of ensuring that patients understand their medical information. Over 20 years ago, in my role as medical humanist at a cancer center in Vermont, I developed a set of questions to evaluate a patient’s understanding of their diagnosis, treatment plan, prognosis and supportive care needs. I would document the patient’s answers in their own words and then review what they said to ensure accuracy before sharing my medical humanist’s note with the oncology team. The note provided a vehicle to re-address what was misunderstood in the medical encounter. In effect, the patient’s feedback initiated an interactive communication process that could influence the planning of care.
After the medical humanist pilot program concluded, it struck me that the set of questions I asked could serve as a template for a guide to help cancer patients and others facing serious illness. In many ways, the format of SpeakSooner: A Patient’s Guide to Difficult Conversations serves a function similar to my role as medical humanist. That is, the Guide prompts patients to identify and communicate questions and concerns that may need clarification.
Regardless of whether a diagnosis or treatment recommendation comes from an artificial intelligence program or from a doctor’s own knowledge, patients may not understand the medical information or know what questions to ask. So the challenge remains the same: establishing clear communication between doctor and patient.
There is no one-size-fits-all solution to improving health literacy. As the JAMA Internal Medicine article notes, doctors “do not routinely elicit patients’ clarifying questions effectively; they also do not routinely confirm patients’ understanding.” But I have discovered that a low-tech tool like the Guide helps patients formulate and express their questions and concerns, prompting meaningful conversations that can improve healthcare delivery.