2. Literature Review
Given its broad definition, complexity can serve as a yardstick for budgeting, resource allocation, and comparative analysis. However, the quantitative analysis of complexity has not received the scholarly attention and recognition given to other seminal concepts, such as flexibility or robustness (Bashir & Thomson, 2001; El-Haik & Yang, 1999; Rodriguez-Toro et al., 2003; Smith & Jenks, 2006). The literature on complexity can be classified into computational, software, manufacturing and operations, project, enterprise, and information systems complexity.
Computational complexity is a branch of computational theory concerned with computational problems: problem instances and their representation, decision problems as formal languages, function problems, and measuring the size of an instance. In this regard, Chakraborty and Choudhury (1999) studied two computing operations (i.e., addition and multiplication) with different processing times and proposed a statistical approach based on a weighting system. Dehmer et al. (2006) investigated an algorithm for measuring the structural similarity of generalized graphs and showed that its efficiency compared favorably with that of the classical approaches used by Kaden (1982). De Reyck and Herroelen (1996) studied the relationship between the hardness of a problem instance and the topological structure of its network by measuring complexity. Liu et al. (2007) analyzed the complexity of computing an AU (aggregate uncertainty) measure within the Dempster–Shafer (D–S) theory framework and delineated the conditions that affect computational complexity. However, Huynh and Nakamori (2010) critiqued the results of Liu et al.'s (2007) work and rectified several mistakes in the formulation of their F-algorithm.
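To make the idea of such a weighting system concrete, the following minimal Python sketch combines operation counts with their processing times into a single complexity score. The time-based weights, the normalization, and the function name are illustrative assumptions for exposition only, not Chakraborty and Choudhury's (1999) actual statistical formulation.

```python
# Illustrative sketch only: weight operation counts by processing time to obtain a
# single complexity score. The weighting and normalization below are assumptions
# for illustration, not the cited authors' actual statistical approach.

def weighted_operation_complexity(op_counts, op_times):
    """Combine operation counts with per-operation processing times.

    op_counts: dict mapping operation name -> number of executions
    op_times:  dict mapping operation name -> processing time (arbitrary units)
    """
    total_time = sum(op_times[op] for op in op_counts)              # normalizing constant
    weights = {op: op_times[op] / total_time for op in op_counts}   # time-based weights
    return sum(weights[op] * count for op, count in op_counts.items())

# Example: an algorithm performing 1,000 additions and 250 multiplications,
# where a multiplication is assumed to take three times as long as an addition.
score = weighted_operation_complexity(
    op_counts={"add": 1000, "mul": 250},
    op_times={"add": 1.0, "mul": 3.0},
)
print(f"weighted complexity score: {score:.1f}")
```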