Metrics for Controlling Database Complexity

Coral Calero, Mario Piattini, Marcela Genero
DOI: 10.4018/978-1-878289-88-9.ch003

Abstract

Software engineers have proposed a large number of metrics for software products, processes and resources (Fenton and Pfleeger, 1997; Melton, 1996; Zuse, 1998). Metrics are useful mechanisms for improving the quality of software products and for guiding practitioners and researchers (Pfleeger, 1997). Unfortunately, almost all the metrics put forward focus on program characteristics (e.g., the McCabe, 1976, cyclomatic number), disregarding databases (Sneed and Foshag, 1998). As far as databases are concerned, metrics have mostly been used for comparing data models rather than the schemata themselves. Several authors (Batra et al., 1990; Jarvenpaa and Machesky, 1986; Juhn and Naumann, 1985; Kim and March, 1995; Rossi and Brinkemper, 1996; Shoval and Even-Chaime, 1987) have compared the most well-known models, such as E/R, NIAM and the relational model, using different metrics. Although we think this work is interesting, metrics for evaluating and comparing the schemata themselves are what is most needed in practice, for example for choosing between different design alternatives or for giving designers limit values for certain characteristics (analogous to the threshold of 10 for the McCabe cyclomatic complexity of programs). Some recent proposals have been published for conceptual schemata (MacDonell et al., 1997; Moody, 1998; Piattini et al., 2001), but for conventional databases, such as relational ones, nothing has been proposed apart from normalization theory.
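
To make the kind of schema-level measure motivated above concrete, the following minimal sketch computes three simple size/complexity indicators over a toy relational schema: the number of tables, the total number of attributes, and the number of foreign keys. The schema representation and the indicator names (NT, NA, NFK) are illustrative assumptions for this sketch only, not definitions taken from the chapter.

```python
# Minimal sketch (illustrative only): simple size/complexity indicators
# for a relational schema, analogous in spirit to program metrics such
# as McCabe's cyclomatic number. NT/NA/NFK are assumed names, not the
# chapter's actual metric definitions.

from dataclasses import dataclass, field

@dataclass
class Table:
    name: str
    attributes: list[str]
    foreign_keys: list[str] = field(default_factory=list)  # names of referenced tables

def nt(schema: list[Table]) -> int:
    """NT: number of tables in the schema."""
    return len(schema)

def na(schema: list[Table]) -> int:
    """NA: total number of attributes across all tables."""
    return sum(len(t.attributes) for t in schema)

def nfk(schema: list[Table]) -> int:
    """NFK: total number of foreign keys (inter-table references)."""
    return sum(len(t.foreign_keys) for t in schema)

if __name__ == "__main__":
    schema = [
        Table("customer", ["id", "name", "email"]),
        Table("order", ["id", "customer_id", "total"], foreign_keys=["customer"]),
        Table("order_line", ["order_id", "sku", "qty"], foreign_keys=["order"]),
    ]
    print(nt(schema), na(schema), nfk(schema))  # prints: 3 9 2
```

Such counts could, in principle, be given designer-facing limit values in the same way a cyclomatic complexity of 10 is used as a warning threshold for programs, though any specific thresholds would have to be validated empirically.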
