Randomizing Efficiency Scores in DEA Using Beta Distribution: An Alternative View of Stochastic DEA and Fuzzy DEA


Parakramaweera Sunil Dharmapala
Copyright: © 2014 |Pages: 15
DOI: 10.4018/ijban.2014100101

Abstract

Data Envelopment Analysis (DEA) has come under criticism that it can handle only deterministic input/output data, so the efficiency scores it reports may not be realistic when the data contain random error. Several researchers have addressed this issue by proposing Stochastic DEA models; others, citing imprecise data, have proposed Fuzzy DEA models. This paper proposes a method to randomize efficiency scores in DEA by treating each score as an 'order statistic' that follows a Beta distribution, and it uses Thompson et al.'s (1996) DEA model appended with Assurance Regions (AR) randomized by our "uniform sampling". In an application to a set of banks, the work demonstrates the randomization and derives some statistical results.
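The abstract's key device rests on a standard result: the k-th order statistic of n independent Uniform(0,1) draws follows Beta(k, n − k + 1), with mean k/(n + 1). The sketch below (an illustration of that general fact, not the paper's exact procedure) checks it empirically by simulation:

```python
import random

def kth_order_statistic_samples(n, k, trials, seed=0):
    """Simulate the k-th smallest of n Uniform(0,1) draws, `trials` times."""
    rng = random.Random(seed)
    samples = []
    for _ in range(trials):
        draws = sorted(rng.random() for _ in range(n))
        samples.append(draws[k - 1])  # k-th order statistic
    return samples

n, k = 10, 7
samples = kth_order_statistic_samples(n, k, trials=20000)
empirical_mean = sum(samples) / len(samples)
beta_mean = k / (n + 1)  # mean of Beta(k, n - k + 1)
```

The empirical mean lands close to k/(n + 1), the Beta mean, which is what licenses modeling a ranked efficiency score as Beta-distributed.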

2. Literature Review

A multitude of research publications has appeared since the original work on Data Envelopment Analysis (DEA) by Charnes et al. (1978) on measuring the efficiency of decision-making units, and a significant portion of them has been devoted to DEA applications in the banking sector. A comprehensive survey of the literature on bank efficiency can be found in Fethi and Pasiouras (2010), who examined bank branch efficiency in more than 30 studies over the period 1998-2009, all of which used DEA to estimate bank efficiency. In this paper, we narrow our literature survey to Stochastic DEA (SDEA) and Fuzzy DEA (FDEA) modeling, as it suits the theme of our discussion.

Several authors who formulated SDEA models treated the input/output vectors as independent, jointly "multivariate normal" random vectors whose components are the expected values of the inputs/outputs. Thus, each input/output of each DMU was treated as a "normally distributed" random variable, and the constraints on inputs and outputs (in the traditional DEA model) were expressed in probabilistic terms with a noise parameter (degree of uncertainty) attached to them. Others treated inputs/outputs as "means" of series of observations and constructed "confidence intervals" for those means.
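Schematically, such a chance-constrained output restriction replaces the deterministic DEA constraint with a probabilistic one of the form below (our notation, illustrating the general idea rather than any single cited model):

```latex
P\!\left( \sum_{j=1}^{n} \lambda_j \tilde{y}_{rj} \ \ge\ \tilde{y}_{ro} \right) \ \ge\ 1 - \alpha
```

where \(\tilde{y}_{rj}\) is the random r-th output of DMU j, \(\lambda_j\) are the intensity weights, and \(\alpha\) is the noise parameter: the constraint need only hold with probability at least \(1 - \alpha\), not with certainty.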

Banker (1986), Cooper et al. (1998, 2002a, 2004), Land et al. (1993), Despotis and Smirlis (2002), Gstach (1998), Huang and Li (1996, 2001), Olesen and Petersen (1995), Olesen (2006), Kao (2006), and Sengupta (1982, 1987, 1998) are among them. Simar and Wilson (1998) used the statistical "bootstrap" method in the sensitivity analysis of efficiency scores. The method creates many resamples (bootstrap samples) from an original sample of efficiency scores and constructs interval estimates for the efficiency scores from those resamples, with "bootstrap percentiles" reflecting the desired level of confidence. In contrast to all of the above, within the framework of multivariate probability distributions, Bruni et al. (2009) proposed probabilistically constrained DEA models under the key assumption that the random variables representing the uncertain data follow a discrete distribution, or that a discrete approximation of a continuous distribution is available.
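The bootstrap percentile interval described above can be sketched as follows. This is a minimal generic version (not Simar and Wilson's full smoothed bootstrap for DEA), and the efficiency scores are hypothetical values used only for illustration:

```python
import random

def bootstrap_percentile_interval(scores, n_boot=5000, alpha=0.05, seed=1):
    """Percentile interval for the mean efficiency score via resampling."""
    rng = random.Random(seed)
    means = []
    for _ in range(n_boot):
        # Resample the scores with replacement (one bootstrap sample).
        resample = [rng.choice(scores) for _ in scores]
        means.append(sum(resample) / len(resample))
    means.sort()
    lo = means[int((alpha / 2) * n_boot)]          # 2.5th percentile
    hi = means[int((1 - alpha / 2) * n_boot) - 1]  # 97.5th percentile
    return lo, hi

# Hypothetical efficiency scores for six DMUs (illustrative only).
scores = [0.82, 0.91, 1.00, 0.76, 0.88, 0.95]
lo, hi = bootstrap_percentile_interval(scores)
```

The interval endpoints are the "bootstrap percentiles" of the resampled means; widening alpha narrows the interval and lowers the confidence level.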
