Patient Oriented Readability Assessment for Heart Disease Healthcare Documents

Hui-Huang Hsu, Yu-Sheng Chen, Chuan-Jie Lin, Tun-Wen Pai
Copyright: © 2020 |Pages: 10
DOI: 10.4018/IJDWM.2020010104

Abstract

Personal health literacy is an important indicator of national health status. Providing citizens with sufficient medical knowledge can help them understand their own health conditions. To this end, an integrated system was developed to evaluate the readability of healthcare documents, taking heart disease as the target topic. The mechanism can be extended to other diseases and languages by swapping in the corresponding word databank. The assessment system examines document readability from a patient-oriented perspective rather than a professional one. Commonly used terms and professional medical terms extracted from a query document serve as the fundamental elements of the readability analysis; the derived features include the term frequency of professional medical terms, the proportion of professional medical terms, and a diversity indicator of medical terms. Five-fold cross validation was applied to measure the robustness of the proposed approach. The experimental results achieved a recall rate of 0.93, a precision rate of 0.97, and an accuracy rate of 0.95.
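The three term-based features named in the abstract can be illustrated with a minimal sketch. The function name, the toy word databank, and the exact feature definitions (e.g., diversity as distinct medical terms over total medical-term occurrences) are assumptions for illustration, not taken from the paper:

```python
def readability_features(tokens, medical_terms):
    """Sketch of the three patient-oriented features: term frequency,
    proportion, and diversity of professional medical terms.
    `tokens` is a tokenized document; `medical_terms` stands in for a
    disease-specific word databank (hypothetical here)."""
    med_hits = [t for t in tokens if t in medical_terms]
    total = len(tokens)
    tf = len(med_hits)                                   # count of medical-term tokens
    proportion = tf / total if total else 0.0            # share of tokens that are medical
    diversity = len(set(med_hits)) / tf if tf else 0.0   # distinct terms / occurrences
    return tf, proportion, diversity

# Toy example with an invented mini-databank
terms = {"myocardial", "infarction", "stenosis"}
doc = "patients with myocardial infarction and coronary stenosis need rest".split()
print(readability_features(doc, terms))  # → (3, 0.3333333333333333, 1.0)
```

In a real pipeline, features like these would feed a classifier whose recall, precision, and accuracy are then estimated with five-fold cross validation, as the abstract describes.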

1. Introduction

With proper analytical mechanisms, digitalized clinical/medical documents can be very helpful in disease diagnosis and treatment (Hussain & Lee, 2015; Barbantan et al., 2016). Thanks to the vigorous development of internet technology, an abundant variety of online healthcare-related documents can also be accessed easily. These are particularly important resources for patients and their families acquiring medical knowledge for home self-care. However, most healthcare documents do not undergo objective testing to determine whether they are suitable for self-learning by patients. Accurate and understandable medical knowledge is especially important for patients after surgery, so that they can be educated about their recovery status, learn self-care, and respond to emergency events. With improved personal health literacy, patients can cooperate with professional medical treatment, choose more knowledgeably among possible medical therapies, lengthen the interval before re-hospitalization, and reduce medical expenses.

The readability of a document measures the ease with which a reader can understand its contents. Many research works in the NLP community have focused on document readability assessment. Various studies evaluated readability using different kinds of features, such as vocabulary, n-grams, sentence length, entity density, syntax, part of speech (POS), semantics, and co-reference features (Heilman et al., 2007; Feng et al., 2010; Nenkova et al., 2010; Clercq & Hoste, 2016). These features were primarily designed for English texts. Traditionally, readability was measured by specifically designed formulas (Dubay, 2004; Wang et al., 2012). More recent works applied machine learning techniques to build readability predictors (Schwarm & Ostendorf, 2005; Kate et al., 2010; Mukherjee et al., 2018), and Lasecki et al. discussed the possibility of measuring readability by evaluating crowd features (Lasecki et al., 2015).

The readability of different kinds of medical documents has received much attention in the last decade (Miller et al., 2007; Leroy & Endicott, 2011; Atcherson, 2013; Williamson & Martinez, 2013). Kauchak et al. found that specificity features (calculated from word-level depth in MeSH) and ambiguity features (calculated from the number of UMLS Metathesaurus concepts associated with a word) were the strongest predictors for English documents (Kauchak et al., 2014). There are also various approaches for measuring the readability of general articles (Flesch, 1948; Fry, 2006). In addition, several readability assessment tools exist specifically for online medical education documents (Kher et al., 2017; Colaco et al., 2013).
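As a point of reference, the Flesch Reading Ease formula (Flesch, 1948) cited above scores a text from its word, sentence, and syllable counts. The sketch below assumes the counts are already available; syllable counting itself is a separate, heuristic step:

```python
def flesch_reading_ease(words, sentences, syllables):
    """Flesch (1948) Reading Ease: higher scores mean easier text.
    Scores around 60-70 correspond to plain English; scores below 30
    indicate very difficult text, typical of professional medical prose."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

# A 100-word passage with 5 sentences and 150 syllables
print(flesch_reading_ease(100, 5, 150))
```

Formula-based scores like this capture only surface statistics (sentence and word length), which is one motivation for the term-based, patient-oriented features this article proposes instead.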

Such research has focused mainly on documents related to heart failure. Researchers found that only 7.1% of the assessable, issue-specific online medical documents passed the readability test, indicating that many other online medical documents may not be suitable for the general public. Those responsible for creating medical education documents should therefore verify that document readability is at a level the public can understand.
