Evaluating the Quality and Usefulness of Data Breach Information Systems

Benjamin Ngugi (Suffolk University, USA), Jafar Mana (Suffolk University, USA) and Lydia Segal (Suffolk University, USA)
Copyright © 2011 | Pages: 16
DOI: 10.4018/jisp.2011100103


As the nation confronts a growing tide of security breaches, the importance of having quality data breach information systems becomes paramount. Yet too little attention is paid to evaluating these systems. This article draws on data quality scholarship to develop a yardstick that assesses the quality of data breach notification systems in the U.S. at both the state and national levels from the perspective of key stakeholders, who include law enforcement agencies, consumers, shareholders, investors, researchers, and businesses that sell security products. Findings reveal major shortcomings that reduce the value of data breach information to these stakeholders. The study concludes with detailed recommendations for reform.
Literature Review

The data quality literature has long discussed the importance of quality (Juran & Godfrey, 1999; Wand & Wang, 1996; Wang, Storey, & Firth, 1995). Decisions made on the basis of corrupt or inferior data will be skewed, with potentially costly consequences (Baltzan & Phillips, 2009; Fisher, Chengalur-Smith, & Ballou, 2003). As Baltzan and Phillips (2009) observe, "decisions are only as good as the quality of the data used to make the decisions."

Researchers have devoted much energy to investigating how to evaluate information for quality. One of the most prominent such scholars, Richard Wang, professor and Director of the MIT Information Quality Program, has written several seminal papers on the subject. In one such paper, Wang and Strong (1996) develop a conceptual framework designed to capture "the aspects of data quality that are important to consumers" (Wang & Strong, 1996, p. 5). The framework conceives of data quality as comprising four dimensions. The first dimension covers factors intrinsic to the data itself, such as accuracy, objectivity, believability, and reputation, which describe the data's quality in its own right. The second dimension covers contextual factors: data quality "must be considered within the context of the task at hand" (Wang & Strong, 1996, p. 6). Contextual factors include value-added, relevance, timeliness, completeness, and appropriate amount of data. Third, the representational dimension includes aspects of the data's format (e.g., whether it offers a concise and consistent representation) and meaning (e.g., its interpretability and the ease with which it can be understood). The last dimension is accessibility: data must be accessible to authorized users while remaining secure. This four-dimensional model is widely accepted by other scholars in the data quality field (Bovee, Srivastava, & Mak, 2003; Strong, Lee, & Wang, 1997).
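The taxonomy above can be made concrete as a small data structure. The sketch below groups the attributes named in the text under Wang and Strong's four dimensions; the `audit` function and its 0-to-1 scoring scheme are the author's own illustration of how an evaluator might roll attribute ratings up to dimension-level scores, not a procedure from the paper.

```python
# Wang & Strong's (1996) four data-quality dimensions, with the
# attributes listed in the text grouped under each dimension.
DATA_QUALITY_DIMENSIONS = {
    "intrinsic": ["accuracy", "objectivity", "believability", "reputation"],
    "contextual": ["value-added", "relevance", "timeliness", "completeness",
                   "appropriate amount of data"],
    "representational": ["concise representation", "consistent representation",
                         "interpretability", "ease of understanding"],
    "accessibility": ["accessibility", "access security"],
}

def audit(scores: dict) -> dict:
    """Average attribute ratings (0.0-1.0) within each dimension.

    `scores` maps attribute names to ratings; attributes with no
    rating are skipped, and a dimension with no rated attributes
    reports None. This scoring scheme is hypothetical.
    """
    report = {}
    for dimension, attributes in DATA_QUALITY_DIMENSIONS.items():
        rated = [scores[a] for a in attributes if a in scores]
        report[dimension] = sum(rated) / len(rated) if rated else None
    return report
```

For example, an evaluator who rated a breach notification system's accuracy at 1.0, objectivity at 0.5, and timeliness at 0.8 would obtain an intrinsic score of 0.75 and a contextual score of 0.8, with the unrated dimensions flagged as unassessed.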
