E-Government Knowledge Management (KM) and Data Mining Challenges: Past, Present, and Future

LuAnn Bean, Deborah S. Carstens, Judith Barlow
DOI: 10.4018/978-1-60566-230-5.ch003
Abstract

Powerful data mining models and applications in e-government settings have the potential to bring major benefits to a wide range of stakeholders. As these models evolve, structural transitions occur within e-government, which include an evolution of managerial practices through knowledge management (KM). Unfortunately, these efforts are vulnerable to a number of critical human interaction and behavioral components. This chapter examines e-government challenges regarding the linkages between data mining and KM over time, discusses the organizational development of e-government applications, and details both general and specific social, ethical, legislative, and legal issues that impact effective implementations. A final focus of the chapter is the potential strategic benefit of a risk-based approach that can improve the core synergy of KM and data mining in e-government operations.
Chapter Preview

Background And Evolution Of E-Government And Data Mining

The history of e-government computing architectures and applications parallels developments in the corporate world. Although the large, expensive mainframe computers of the 1960s and 1970s were programmed to process inventory control and materials management data, the majority of costs lay simply in capturing data in a machine-readable format and getting it into the system; such data capture accounted for over 80% of software project costs from the 1960s through the early 1990s (Hazzan, Impagliazzo, Lister & Schocken, 2005).

Data integrity concerns about the proper handling of date-bearing data reached a climax in the late 1990s, prompting many organizations to invest in high-performance database systems to avoid potentially catastrophic errors when the year 2000 arrived. A side effect of the rush to avoid Y2K problems, whether real or imagined, was that firms implemented large, expensive database management systems, which in turn improved their data capture, storage, processing, and sharing capabilities (Robertson & Powell, 1999).

With this newfound wealth of cheap, accurate, timely, accessible, and processable data from post-2000 information systems, the costs of deploying data mining applications were reduced, leading to new methods of explicitly capturing “knowledge” or “intelligence” from the large (and growing) data stores (Schwaig, Kane & Storey, 2005). As a result, data mining methods can be applied not only to an organization’s own data stores but also to data copied, purchased, or stolen from external sources or other government agencies.
