As part of its commitment to patient-centered care, the Veterans Health Administration (VHA) is providing Veterans with comprehensive access to their personal health information. In January 2013, the VHA began to make clinical notes in the electronic health record (EHR) available through the My HealtheVet portal. Access to such information can increase transparency, improve communication between clinicians and patients, and support patient involvement in self-management of care. However, many studies have shown that patients are confused by EHR notes, especially patients in vulnerable groups (e.g., those with low literacy or low income). Significant barriers to patients' understanding of their clinical notes include the use of specialized medical terminology and the lack of explanations for complex concepts. Veterans who have limited health literacy are likely to have difficulty understanding their clinical notes.
We will develop and evaluate NoteAid, a Natural Language Processing (NLP) system that links medical jargon in clinical notes to consumer-oriented concepts, definitions, and educational materials in order to help Veterans comprehend their EHR notes. Our specific aims are to (1) develop a comprehensive health knowledge resource by integrating clinical vocabularies and abbreviations and linking medical concepts to corresponding lay concepts, definitions, and educational materials; (2) develop a sophisticated NLP system for identifying medical concepts and abbreviations in EHR notes and linking them to educational materials; and (3) evaluate NoteAid, assessing improvement in patients' comprehension of EHR notes, their communication with caregivers, and their clinical knowledge and self-management.
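To make the core idea concrete, the sketch below shows one simple way a system could link jargon in note text to lay definitions: a dictionary lookup over a small lexicon. The lexicon entries and function name are illustrative assumptions, not the actual NoteAid implementation or its knowledge resource.

```python
# Hypothetical sketch of NoteAid-style jargon linking: scan a note for known
# medical terms and attach consumer-oriented definitions. The lexicon below is
# a toy stand-in for the full health knowledge resource described in Aim 1.
import re

LAY_LEXICON = {
    "hypertension": "high blood pressure",
    "myocardial infarction": "heart attack",
    "edema": "swelling caused by fluid buildup",
}

def link_jargon(note_text):
    """Return (term, lay definition, start offset) for each lexicon match."""
    links = []
    lowered = note_text.lower()
    for term, definition in LAY_LEXICON.items():
        for match in re.finditer(r"\b" + re.escape(term) + r"\b", lowered):
            links.append((term, definition, match.start()))
    return sorted(links, key=lambda link: link[2])
```

In practice, a deployed system would need to handle abbreviations, multi-word concepts, and context-dependent senses, which is exactly the NLP work the aims describe.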
We are continuing the development of the NLP components of NoteAid by annotating and expanding the health knowledge resource of medical jargon and definitions (Aim 1) and by prioritizing jargon in EHR notes based on how important the terms are to patients (Aim 2). We will then evaluate NoteAid by conducting a randomized comparison trial with 250 Veterans from the Edith Nourse Rogers Memorial Veterans Hospital (ENRM) (Aim 3). We propose a between-subject randomized experiment comparing subjects exposed to clinical notes only versus those exposed to clinical notes plus NoteAid. The baseline interview assessment at ENRM will include the identification of chronic conditions, age, education, gender, and a literacy measure. Outcome measures will assess comprehension, patient activation, and communication with care providers. Two subsequent interviews, including the outcome assessments, will follow during a six-month time frame. Each Veteran will receive a $75 honorarium for participation ($40 for the initial visit and $35 for completing the follow-up interviews). Veterans who consent but are not enrolled will be given an honorarium of $10.
We will conduct quantitative analyses of the outcome measures using bivariate statistics and regression analyses. Equal emphasis will be given to negative and positive findings of equal scientific merit. Results of the study will be disseminated through the eHealth QUERI across the VA system. We will report best practices and create a repository of material, consulting with national-level My HealtheVet implementation and education managers.
Physician feedback was mixed. Positive feedback on NoteAid included (1) ease of use, (2) a good visual display, (3) satisfactory system speed, and (4) adequate lay definitions. Opportunities for improvement arising from evaluation sessions and feedback included (1) improving the display of definitions for partially matched terms, (2) including more medical terms in CoDeMed, (3) improving the handling of terms whose definitions vary by context, and (4) standardizing the scope of definitions for medicines. On the basis of these results, we have improved NoteAid's user interface and a number of definitions, and added 4,502 more definitions to CoDeMed.
Work has continued on natural language processing development. Research staff have used population-level rankings of terms in EHRs to expand a lay language resource that supports patient EHR comprehension. Adapted distant supervision was used to rank terms extracted from EHR notes and to prioritize concepts. This work also helped to develop FIT (Finding Important Terms), a system that helps tailor information to each individual patient's needs.
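As a rough illustration of the ensemble-ranking idea, the sketch below averages the rank positions assigned by several heuristic scorers so that terms scored highly by multiple signals rise to the top. The scorers and names here are assumptions for illustration; they are not the published unsupervised ensemble or the FIT system.

```python
# Illustrative rank aggregation: each scorer independently ranks candidate
# terms, and the ensemble averages per-scorer ranks (a Borda-style scheme).
# Lower mean rank = more important to patients under the combined signals.

def rank_positions(scores):
    """Map each term to its rank (0 = highest score) under one scorer."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {term: rank for rank, term in enumerate(ordered)}

def ensemble_rank(terms, scorers):
    """Order terms by their mean rank across all scorers."""
    per_scorer = [rank_positions({t: s(t) for t in terms}) for s in scorers]
    mean_rank = {t: sum(r[t] for r in per_scorer) / len(per_scorer)
                 for t in terms}
    return sorted(terms, key=mean_rank.get)
```

A scheme like this needs no labeled data, which is what makes an unsupervised ensemble attractive when expert annotations of term importance are scarce.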
Work has also explored the relationship between readability scores and patients' perceived difficulty of text in EHR notes. In addition, physician evaluation of the NoteAid system helped refine content and improve system operability.
- Chen J, Yu H. Unsupervised ensemble ranking of terms in electronic health record notes based on their importance to patients. Journal of Biomedical Informatics. 2017 Apr 1; 68:121-131.
- Zheng J, Yu H. Readability Formulas and User Perceptions of Electronic Health Records Difficulty: A Corpus Study. Journal of Medical Internet Research. 2017 Mar 2; 19(3):e59.
- Chen J, Zheng J, Yu H. Finding Important Terms for Patients in Their Electronic Health Records: A Learning-to-Rank Approach Using Expert Annotations. JMIR Medical Informatics. 2016 Nov 30; 4(4):e40.